Background

The Airline Deregulation Act of 1978, which established the EAS program, specifies that if DOT determines that air service will not be provided without subsidy, DOT shall use EAS program funds to award a subsidy to a carrier willing to provide service. As of October 1, 2018, 108 communities within the contiguous United States (as well as 65 in Alaska and Hawaii) were receiving EAS (see fig. 1). To be eligible for EAS, a community must:

- be located more than 70 miles from the nearest large or medium hub airport;
- require a subsidy per passenger of $200 or less, unless the community is more than 210 miles from the nearest large or medium hub airport or unless DOT decides to issue a waiver;
- have a subsidy per passenger of less than $1,000 during the most recent fiscal year at the end of each EAS contract, regardless of the distance from a hub airport;
- have had an average of 10 or more enplanements per service day during the most recent fiscal year, unless the community is more than 175 driving miles from the nearest medium or large hub airport or unless DOT is satisfied that any decline below 10 enplanements is temporary; and
- have received subsidized EAS in fiscal year 2011, or have been provided a 90-day termination notice by an air carrier with the Secretary requiring the air carrier to continue such service to the community.

EAS is funded through a combination of discretionary funding provided through annual appropriations acts and overflight fees, which are collected by the Federal Aviation Administration (FAA) from foreign aircraft traveling over U.S. airspace without taking off or landing in the United States. Historically, the amount of overflight fees provided to EAS has been $50 million per year, but the FAA Modernization and Reform Act of 2012 directed that all overflight fees be directed to EAS, an action that resulted in an increased proportion of the program being funded by overflight fees (see fig. 2).
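The eligibility thresholds above can be expressed as a simple check. The sketch below is purely illustrative (the function name and inputs are not from the report), and the historical-eligibility and termination-notice conditions are not modeled:

```python
def is_eas_eligible(driving_miles_to_hub, subsidy_per_passenger,
                    avg_daily_enplanements, has_waiver=False):
    """Illustrative check of the core EAS eligibility thresholds.

    Simplified sketch: the statute's historical-eligibility and
    termination-notice conditions are not modeled here.
    """
    # A community must be more than 70 miles from the nearest large
    # or medium hub airport.
    if driving_miles_to_hub <= 70:
        return False
    # The $200 subsidy-per-passenger cap applies within 210 miles of
    # a hub, unless DOT issues a waiver.
    if (driving_miles_to_hub <= 210 and subsidy_per_passenger > 200
            and not has_waiver):
        return False
    # The $1,000 subsidy-per-passenger cap applies regardless of distance.
    if subsidy_per_passenger >= 1000:
        return False
    # The 10-enplanements-per-day rule applies within 175 driving miles,
    # unless DOT finds the decline temporary (modeled here as a waiver).
    if (driving_miles_to_hub <= 175 and avg_daily_enplanements < 10
            and not has_waiver):
        return False
    return True
```

For example, a community 100 miles from a hub with a $250 subsidy per passenger would fail the $200 cap absent a waiver, while the same subsidy would be permissible at 250 miles.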
The minimum level of service each community is required to receive—the minimum number of round trips and passenger seats that must be provided, certain characteristics of the aircraft to be used, and the maximum number of permissible stops to a medium or large hub airport—is established in law. In general, current law requires that an EAS carrier provide the following:

- service to a hub airport, defined as an FAA-designated medium- or large-hub airport;
- two daily round trips, 6 days a week, with not more than one intermediate stop to the hub;
- flights at reasonable times, taking into account the needs of passengers with connecting flights, and at prices that are not excessive compared to the prices of other air carriers for like service between similar places;
- service in an aircraft with an effective capacity of at least 15 passengers under certain circumstances, unless the affected community agrees in writing to the use of smaller aircraft;
- service in an aircraft with at least two engines and using two pilots; and
- service with pressurized aircraft under certain circumstances.

DOT awards contracts to individual air carriers to serve EAS communities on a rolling basis throughout the year. According to DOT officials, DOT takes the following steps: DOT issues a request for proposals to all carriers to provide air service to an eligible community. Air carriers submit proposals that include the size of the aircraft to be used, the frequency of service, potential hubs, and the amount of subsidy required. Air carriers request subsidies at a level that covers the difference between their projected revenues and expenses and provides a profit.
While there are no limits on the amount of subsidy that a carrier can request in its proposal, a community can become ineligible for EAS if the annual subsidy exceeds $1,000 per passenger regardless of distance from the nearest hub airport, or $200 per passenger if it is located fewer than 210 miles from the nearest large or medium hub airport. DOT reviews the proposals and selects an air carrier to provide air service to the community, generally for a contract period ranging from 2 to 5 years. When selecting air carriers to provide service to EAS communities, DOT is directed by statute to consider five factors: service reliability, contracting and marketing arrangements with a larger carrier at the hub, “interline agreements” with a larger carrier at the hub, whether the air carrier has included a plan in its proposal to market its service to the community, and user preferences. In addition, the Secretary may consider the relative subsidy requirements of the carriers. By statute, the subsidy is set at an amount to cover the difference between the carrier’s projected costs of operation and its expected passenger revenues, while providing the carrier with a profit element typically equal to 5 percent of total operating expenses. DOT awards a contract and pays air carriers based on the number of flights completed in the prior month. Air fares on EAS routes are set at the air carrier’s discretion without input from DOT. In 2003, the Vision 100—Century of Aviation Reauthorization Act established the Alternate Essential Air Service (AEAS) program, which allows communities to forgo subsidized EAS for a prescribed amount of time in exchange for a grant to spend on options that may better suit their transportation needs. For example, a community under AEAS may use the grant to purchase an aircraft to meet transportation needs or may receive some flexibility on operating requirements.
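The statutory subsidy formula described above (the cost-revenue gap plus a profit element, typically 5 percent of operating expenses) amounts to simple arithmetic. A minimal sketch, with hypothetical dollar figures:

```python
def eas_subsidy(projected_operating_expenses, expected_passenger_revenue,
                profit_rate=0.05):
    """Sketch of the statutory subsidy calculation: the subsidy covers
    the gap between projected costs of operation and expected passenger
    revenues, plus a profit element typically equal to 5 percent of
    total operating expenses. Figures passed in are hypothetical."""
    profit = profit_rate * projected_operating_expenses
    return (projected_operating_expenses - expected_passenger_revenue) + profit

# Hypothetical example: $2.0 million in projected expenses and
# $1.2 million in expected revenue yields a $900,000 annual subsidy.
print(eas_subsidy(2_000_000, 1_200_000))  # 900000.0
```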
Under AEAS, the community must still adhere to EAS eligibility requirements, and the maximum annual grant amount may not exceed the annual EAS subsidy at the time of application to the program or what DOT would pay to maintain EAS at the eligible community. For example, if an air carrier received a subsidy of $1 million per year to serve a community and the community decided to leave EAS and enter AEAS, then the grant amount to the community under AEAS may not be more than $1 million per year. As of September 2019, 8 of the 108 EAS communities in the contiguous United States were participating in AEAS. In addition, federal funds are available to support airports—including airports that receive subsidized EAS—through the Airport Improvement Program (AIP). AIP grants are awarded to public entities to make capital improvements—such as runway and taxiway projects. The level of AIP funding that an airport receives is based on the number of annual enplanements at the airport. For fiscal year 2018, airports with 10,000 or more passengers were entitled to at least $1 million; airports with between 8,000 and 10,000 passengers were entitled to $600,000; and airports with fewer than 8,000 passengers were eligible for $150,000. Thus, the number of enplanements at an airport receiving subsidized EAS may affect the amount of AIP funds for which the airport is eligible.

EAS Can Provide a Number of Benefits to Communities

Officials from the 14 communities receiving EAS that we interviewed cited several economic benefits of the local air service they receive: Economic development, including the ability to attract and retain businesses and professionals: When asked what benefits they received from local air service, officials from all 14 communities mentioned that having access to reliable air service through EAS was crucial for economic development in their community, including the ability to attract and retain businesses and professionals.
In three of the communities, officials told us that the first question a business asks when deciding whether to locate in the area is whether air service is available. Increased tourism to the community: When asked about benefits, officials in 6 of the 14 communities mentioned that EAS helps to bring tourists to the community. One community official told us that having access to air service through EAS was a key factor in the community’s being selected to host the Boy Scout Jamboree, which brought 8,000 volunteers and 45,000 Boy Scouts to the area. Creation of jobs related to air service: Officials from 4 of the 14 communities also mentioned that EAS brought jobs related to air service to the community, including TSA personnel, airport employees, airline employees, and concessionaire employees such as those at fixed-base operators and airport restaurants. In addition, some community officials told us that having air service in the community creates other types of jobs and supports area industries, such as hotels, restaurants, and rental car companies. Further, community officials told us that EAS provides other benefits in addition to economic benefits. Officials from 11 of the 14 communities mentioned that EAS allows residents to more easily travel and be connected to the rest of the world. Officials in 3 communities said that residents use EAS to travel to larger cities for medical services that are not available locally, such as procedures and appointments with specialists. Officials we interviewed in three communities that lost eligibility for subsidized EAS told us that losing air service has had a negative economic effect. For example, officials in one community told us that the lack of air service has decreased the ability of local businesses, hospitals, and colleges to recruit for professional-level jobs, such as physicians and professors, who have travel needs to maintain proficiency in their field.
An official from another community told us that losing EAS led to decreased enplanements, which, in turn, reduced the amount of AIP funding that the airport receives. With less AIP funding, the airport is not able to pay for improvements that would attract or enable air carriers to serve the community. Most of the studies we reviewed found a correlation between aviation activity and economic development. Specifically, several of the findings indicate that greater aviation activity in a region is correlated with some increase in the growth of population, employment, or per capita incomes. The size of the influence in these findings was relatively small but statistically significant. For example, one study found that a 1 percent rise in passengers per capita was associated with a 0.055 percent rise in output per capita, and another study found that a 10 percent increase in the number of nonstop destinations served from an airport was associated with a 0.13 percent increase in employment and a 0.2 percent increase in average wage. One study that specifically examined the effect of subsidized air service found that the availability of EAS was related to a small but statistically significant increase in per-capita income in the local market. Specifically, this study found that a 1 percent increase in traffic at an airport receiving subsidized EAS was related to a 0.12 percent increase in per-capita income. Further, another study that focused solely on small airports found airport activity was associated with higher per-capita income, while another study found that more rural areas experienced an even greater benefit of nearby aviation activity than did more urban areas. However, two of the studies we reviewed found that the effect of aviation activity on local economic factors may be greater in areas with larger airports, which tend to be in larger metro areas, than in areas with smaller airports.
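The elasticities reported in these studies are small enough that a back-of-the-envelope calculation helps convey their scale. The baseline income and traffic-growth figures below are hypothetical, chosen only for illustration; the 0.12 multiplier is the estimate cited above:

```python
# Back-of-the-envelope scale check for the cited elasticity: a 1 percent
# increase in traffic at a subsidized-EAS airport was associated with a
# 0.12 percent increase in per-capita income.
baseline_income = 40_000.0   # hypothetical local per-capita income, dollars
traffic_growth_pct = 10.0    # hypothetical 10 percent increase in traffic

income_effect_pct = 0.12 * traffic_growth_pct   # estimated income effect
new_income = baseline_income * (1 + income_effect_pct / 100)
print(round(income_effect_pct, 2))  # 1.2 (percent)
print(round(new_income, 2))         # 40480.0
```

Even a sizable 10 percent traffic increase translates into roughly a 1 percent income effect, consistent with the report's characterization of the influence as small but statistically significant.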
Statutory Changes Have Limited Communities’ EAS Eligibility, but Nearly One-Third of Communities in the Program Continue to Receive Service through Waivers

Since 2010, Changes Limited Communities’ EAS Eligibility and Increased Flexibility of Air Carriers’ Operations

Since 2010, four statutory changes and a change in DOT’s enforcement policy have limited the number of communities that are eligible to receive EAS. (See app. II for a detailed list of statutory changes.) The Airport and Airway Extension Act of 2011 prohibited DOT from continuing to provide subsidies to communities with annual per-passenger EAS subsidies of over $1,000, regardless of their distance from the nearest hub airport. The FAA Modernization and Reform Act of 2012 removed eligibility for communities within 175 miles of a large- or medium-hub airport that do not have an average of at least 10 enplanements per day during the most recent fiscal year, unless DOT grants them a waiver. The FAA Modernization and Reform Act of 2012 also removed EAS eligibility for communities that did not receive EAS between September 30, 2010, and September 30, 2011, thus preventing further growth of the program. This limitation does not apply to Alaska and Hawaii. The number of communities that would otherwise be eligible for service if not for this provision is unknown. We are aware of at least one community that lost eligibility based on this requirement. However, DOT has not been able to determine how many communities fall into this category due to a number of complicating factors, including an unclear count of the number of communities that were initially eligible for EAS in January 1979 and changes in eligibility in the intervening years. The Consolidated Appropriations Act of 2014 and subsequent appropriations acts required the Secretary of Transportation to negotiate a local cost share with communities located less than 40 miles from the nearest small hub airport before entering into a new contract using EAS subsidies.
Two communities in the contiguous United States—Pueblo, Colorado, and Lancaster, Pennsylvania—were initially subject to this provision. Currently, Lancaster, Pennsylvania, is the only community in the contiguous United States subject to the provision. In October 2014, DOT issued a Notice of Enforcement Policy stating that it would start enforcing the annual subsidy-per-passenger cap of $200 for communities located less than 210 miles from a medium- or large-hub airport after September 30, 2015, thereby limiting the number of communities eligible for EAS in 2016. However, DOT may grant a waiver to communities that have exceeded the cap. We also identified two statutory changes since 2010 that increased the flexibility of air carriers’ operations for the EAS program, and one that automatically grants waivers for the $200 subsidy-per-passenger cap to communities that meet certain requirements. The Consolidated and Further Continuing Appropriations Act of 2012 and subsequent appropriations acts eliminated the requirement that aircraft providing service under the EAS program have a minimum 15-seat passenger capacity. Officials from about half (8 of 17) of the communities that we interviewed were in favor of the elimination of this requirement. As a result of this change, the number of EAS communities in the contiguous United States receiving service with eight- or nine-seat aircraft increased from 23 percent (25 of 107 communities) in 2010 to 47 percent (50 of 107 communities) in 2019. The FAA Reauthorization Act of 2018 explicitly allowed the Secretary of Transportation to consider the flexibility of current operational dates and airport accessibility when issuing requests for proposals for EAS at seasonal airports. DOT had already been considering seasonal service for some communities. Two of the communities that we interviewed—Bar Harbor, ME, and Cody, WY—have seasonal EAS because the number of passengers fluctuates during different times of the year.
The FAA Reauthorization Act of 2018 required DOT to automatically grant waivers for the annual subsidy-per-passenger cap of $200 if (1) a community’s subsidy per passenger for a fiscal year is lower than in any of the previous 3 fiscal years or (2) the subsidy per passenger for a fiscal year is less than 10 percent higher than the highest subsidy per passenger for the previous 3 fiscal years. The Secretary may waive this subsidy cap only once per community. According to DOT, it began implementing this provision in 2019 using fiscal year 2018 data. As described earlier, DOT is allowed to waive some eligibility requirements. DOT can grant waivers to communities for (1) not meeting the 10-enplanements-per-day requirement or (2) exceeding the $200 subsidy-per-passenger cap in the prior fiscal year. There are several steps that DOT generally follows when granting EAS waivers: DOT collects information from the prior fiscal year to determine which communities no longer meet EAS eligibility requirements. DOT issues a “show cause” order that directs the EAS community or other interested persons to submit information to show why DOT should not terminate the eligibility of the community. The communities listed in the “show cause” order may provide DOT with information demonstrating that they met EAS requirements or may submit a petition to DOT demonstrating that the community’s failure to meet eligibility requirements is a temporary situation, in order to retain eligibility. If the community does not provide new information demonstrating that it met EAS requirements or submit a petition, then the community’s eligibility for EAS is terminated. DOT then issues a final order that changes its initial determination, grants a waiver to the community, or terminates the community’s eligibility for EAS. If a community disagrees with DOT’s decision to terminate eligibility, it may submit a petition for restoration.
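The two-part automatic-waiver test from the FAA Reauthorization Act of 2018 described above can be sketched as follows; the function and variable names are illustrative, not from the statute:

```python
def qualifies_for_automatic_waiver(current_subsidy, prior_three_years):
    """Sketch of the two automatic-waiver conditions for the $200
    subsidy-per-passenger cap. `prior_three_years` holds the subsidy
    per passenger for each of the previous 3 fiscal years."""
    # (1) Current subsidy per passenger is lower than in any of the
    # previous 3 fiscal years.
    if current_subsidy < min(prior_three_years):
        return True
    # (2) Current subsidy per passenger is less than 10 percent higher
    # than the highest of the previous 3 fiscal years.
    if current_subsidy < 1.10 * max(prior_three_years):
        return True
    return False
```

Note that the once-per-community limit on this waiver is not modeled here; the sketch covers only the two numeric conditions.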
While Some Communities Lost Eligibility for EAS Since 2010, DOT Granted Most Waiver Requests, Enabling Many EAS Communities to Continue to Receive EAS

As a result of these changes in statute and enforcement policy, 12 communities have lost eligibility for EAS since 2010 and either were not eligible for a waiver, did not apply for one, or applied for a waiver and were not granted one (see table 1). While some communities lost eligibility for EAS, many communities that did not meet eligibility requirements since 2014 continue to receive EAS because they were granted at least one waiver from DOT. From fiscal year 2014 through fiscal year 2019, DOT granted a total of 110 waivers to 37 communities—about one-third of the number of communities currently in the program (see fig. 3). The number of communities receiving waivers has increased during this period, in part due to DOT’s decision to enforce the $200 subsidy-per-passenger cap. DOT granted waivers to 15 communities because they experienced a hiatus in service during the year that resulted in the community’s not meeting the 10-average-daily-enplanements requirement or exceeding the $200 subsidy-per-passenger cap. Of the communities that petitioned for waivers, DOT granted waivers to all but three—Jamestown, NY; Franklin/Oil City, PA; and Hagerstown, MD. Jamestown did not meet the 10-enplanements-per-day requirement and exceeded the $200 subsidy cap in fiscal year 2016. DOT officials did not grant a waiver to Jamestown because they did not think there was sufficient evidence that Jamestown would ever have enough service to meet eligibility requirements. Franklin/Oil City has not met the 10-enplanements-per-day requirement in any year since fiscal year 2013 and has exceeded the $200 subsidy cap in each year since fiscal year 2015. DOT did not grant a waiver to Franklin/Oil City because of its continued noncompliance with these requirements and its proximity to a medium hub airport.
Pittsburgh International Airport is 85 driving miles away. In September 2019, Franklin/Oil City filed a petition to DOT for reconsideration. DOT denied the petition. Hagerstown has not met the 10-enplanements-per-day requirement since fiscal year 2013 (except in fiscal year 2016) and has exceeded the $200 subsidy cap each fiscal year since fiscal year 2015. DOT did not grant a waiver to Hagerstown because of its proximity to a large hub airport—Hagerstown is less than 70 miles from Washington Dulles International Airport—and the fact that there was not sufficient evidence to indicate that Hagerstown would be able to meet eligibility requirements in the future. In August 2019, Hagerstown filed a petition to DOT for reconsideration. DOT denied the petition, and Hagerstown filed suit to challenge the decision in federal court. Athens, GA, which did not meet the 10-enplanements-per-day requirement, was eligible to submit a waiver request but did not do so.

While the Number of Communities Receiving EAS Has Remained Relatively Stable Since 2010, Program Expenditures Have Increased by About 70 Percent

The number of communities in the contiguous United States receiving EAS changed little from the beginning of fiscal year 2010 to the beginning of fiscal year 2018—from 104 on October 1, 2009, to 109 on October 1, 2017. However, program expenditures for EAS communities in the contiguous United States increased from approximately $161.3 million in fiscal year 2010 to $276.9 million in fiscal year 2018—an increase of nearly 72 percent (see fig. 4). Some of the increased program expenditures were due to increased costs of certain critical resources over the last several years, such as pilots’ salaries. However, even when total expenditures are adjusted for the effect of inflation, expenditures still rose substantially.
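The growth figures above work out as shown below. The roughly 1.15 cumulative inflation factor for 2010 through 2018 is an assumption used only for illustration, not a figure from the report:

```python
# EAS expenditures for contiguous-U.S. communities, in millions of
# dollars (from the figures cited above).
fy2010 = 161.3
fy2018 = 276.9

nominal_growth_pct = (fy2018 / fy2010 - 1) * 100
print(round(nominal_growth_pct, 1))  # 71.7 -> "nearly 72 percent"

# Deflating FY2018 spending by an ASSUMED ~15 percent cumulative price
# rise over 2010-2018 leaves real growth of roughly 50 percent.
assumed_deflator = 1.15
real_growth_pct = (fy2018 / assumed_deflator / fy2010 - 1) * 100
print(round(real_growth_pct, 1))  # 49.3
```

The real-growth figure is sensitive to the deflator chosen; the report itself states the inflation-adjusted increase as nearly 50 percent.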
Notably, we found a nearly 50 percent increase in spending that is not accounted for by the general rise in prices over these years, despite a roughly consistent number of communities served by the program. According to DOT officials, some of the cost increase is related to factors that also affected the rest of the airline industry, such as increased costs for pilots, flight crew, and mechanics. For example, in 2018 we found that compensation for commercial airline pilots had increased in recent years, most noticeably in new-hire compensation at regional airlines. Our analysis of Bureau of Labor Statistics data from 2012 through 2017 showed that median wages in the pilot occupation increased by approximately 2.4 percent per year, while wages for all occupations increased by about 1 percent per year over this period. DOT officials told us that other factors contributing to increased program costs are more specific to EAS. For example, some regional airlines that serve EAS communities have experienced financial difficulties, and in some cases, contracts with new carriers have increased in price to factor in costs associated with replacing the previous carrier’s service. DOT officials noted that larger air carriers that serve many markets have more options available to help offset industry-wide cost increases, such as increasing fares on more commercially viable routes, whereas some of the smaller carriers that primarily serve EAS markets have fewer options on the revenue side to offset cost increases.

EAS Program Stakeholders Cited Challenges to Retaining Eligibility and Suggested Options for Reform

Communities and Air Carriers Reported Challenges That Include Maintaining Quality Air Service and Dealing with a Shortage of Qualified Pilots to Serve EAS Routes

Community officials and air carriers that we interviewed described several challenges they face with regard to maintaining viable service. Many of these challenges compound each other.
Quality of Service: According to officials from the communities we interviewed, an air carrier provides good quality service to an EAS community when the service is reliable (i.e., flights are on time, at convenient times, and are not frequently cancelled), offers connections to multiple locations, and includes benefits such as the ability to easily catch a connecting flight and check bags through to the final destination. Some community officials also said good quality service involves seamless connections to large hubs with regional jets. When a carrier does not provide what communities and passengers see as quality service, the number of enplanements decreases because people stop using the service. As a result, the carrier may decrease the number of flights per day to make the service financially viable. However, the reduction in frequency could further degrade the quality of service. Carrier representatives explained that many factors affect the quality of service carriers are able to provide, and communities explained that unreliable service can result in several problems for them. Decline in Enplanements: Officials in most of the communities (15 of 17) said that a lack of quality service from the carrier had been a challenge, and in many instances (14 of 17), it had led residents to opt to travel to an alternative larger airport for service. The resulting decline in the number of enplanements can put a community at risk of losing EAS eligibility because it may not be able to achieve an average of at least 10 enplanements per service day or stay under the $200 subsidy-per-passenger cap. Officials from one community said that its EAS carriers’ cancelled flights and lack of interline agreements with mainline airlines had resulted in customers choosing to drive 80 miles to fly out of a large hub airport rather than use the local airport.
Providing Service within Subsidy Caps: Four of the carriers we interviewed said that increased costs—such as those resulting from increased pilot wages—make it difficult to provide service within the subsidy caps, which have not been increased to account for inflation. An official from one carrier said that factors such as the increasing costs for pilots and an insufficient number of aircraft operating with fewer than 50 seats make it difficult for a community airport to comply with the $200 subsidy-per-passenger cap. According to representatives of the carrier, in some instances, they are paying their pilots 75 percent more than they were 5 years ago. They said that to compensate, the carrier may have to raise fares, a step that could lead to losing passengers and potentially put communities at risk of losing eligibility for EAS. Loss of Customers’ Confidence: Three of the carriers we interviewed said that when they were selected to replace carriers that had not provided reliable service to a community, it took time to regain the community’s confidence and attract people to use their EAS air service. If these air carriers had not been able to regain the community’s confidence and increase enplanements, the community could have lost eligibility for EAS. Loss of AIP Funding: A decline in the number of enplanements may also lead to a reduction in AIP funding available to the airport. AIP funding is important for small communities that have fewer financial resources than large- or medium-sized airports. AIP funding can help airports make improvements that could attract more business, such as from commercial and business aviation. Pilot Shortage: Aviation stakeholders have voiced concerns that there is an insufficient supply of qualified pilots to support current and future demand from U.S. regional and mainline airlines.
In May 2017, the Working Group on Improving Air Service to Small Communities found that as a result of the pilot shortage, there were too few pilots to fly all the EAS routes. In June 2018, we found that labor market indicators for the pilot occupation were consistent with the existence of a pilot shortage. Carriers and community officials that we interviewed cited the following issues related to the pilot shortage. Difficulty Retaining Pilots: Officials from 6 of the 10 carriers we interviewed said that it has been a challenge to retain sufficient pilots to provide the air service they have committed to providing under EAS. Pilots often start their careers with smaller air carriers that may serve EAS communities, and after a few years in the business, pilots are hired by larger airlines offering higher pay and more opportunities for advancement. Officials from 3 of the 10 carriers we interviewed said that they have responded to the pilot shortage by operating eight- or nine-seat aircraft under Part 135 regulations, which allows them to use pilots who have less flight time as first officers. This increases the pool of pilots who can fill first-officer positions and gives these pilots the opportunity to build flight hours toward their Airline Transport Pilot license. Reduced Service Quality: Officials from 15 of the 17 communities we spoke with said that a shortage of pilots has been a challenge. Specifically, the pilot shortage has resulted in a reduction in service quality for some EAS communities because the air carrier has not been able to attract enough pilots to provide reliable service. Officials from 6 of the 17 communities told us that their enplanements declined and that some had lost service for a period of time due to a lack of pilots. For example, an official from one community said their carrier ended service to the community in 2014 due to the industry-wide pilot shortage. Airport Costs: Air carriers must pay fees to use airport facilities.
Fees are charged for landing, counter and gate space, parking, and other airport facilities. These varied fees are part of carriers’ operating costs. Officials from 3 of the 10 carriers we talked to said that these airport costs may be difficult to cover because carriers serving the EAS program use relatively smaller aircraft with fewer passengers, and therefore the carrier must charge more per passenger to cover the costs. For example, an official from one carrier we interviewed said that a community wanted to have an EAS flight that flew into Las Vegas; however, the airport in Las Vegas charged a single-aisle 9-seat aircraft the same landing fee as any other single-aisle aircraft, some of which can hold hundreds of passengers. Production and Supply of Small Aircraft: Because aircraft with between 19 and 50 seats are in short supply, in some cases DOT, airlines, and communities have to choose service with a plane that is either too small or too large for demand. Manufacturers have said they are generally not producing aircraft of this size because demand is lower and costs are higher: for scheduled commercial service, these aircraft must be certified under Part 25 regulations rather than under the less costly Part 23 regulations. Insufficient or Excess Capacity: Officials from 12 of the 17 communities we interviewed said that the declining production and supply of 19- to 50-seat aircraft has been a challenge for the EAS program. Officials from 2 communities we interviewed said they have moved to larger 50-seat aircraft, which means the communities might have too much capacity.
On the other hand, officials from 11 of the 17 communities we interviewed expressed concerns about receiving service from a carrier that operates aircraft with fewer than 15 seats. According to officials from six communities we spoke with, receiving air service from a carrier that operates only eight- or nine-seat aircraft may not provide sufficient capacity to allow the community to fulfill the EAS annual enplanement requirements, and thus the community could lose eligibility for EAS. In addition, officials from 5 of the 17 communities were concerned that some people have an aversion to, or difficulty getting into, small aircraft, which could deter them from using the service. Financial Effects on Air Carriers: Officials from 5 of the 10 carriers we interviewed said that the lack of available aircraft with between 19 and 50 seats is a challenge. For example, an official from one carrier was concerned that operating eight- or nine-seat aircraft may limit the carrier’s ability to serve EAS communities whose enplanements are increasing, because the carrier would have to add seat capacity—either through increased frequency of flights or through larger aircraft it does not currently own—in order to decrease the subsidy-per-passenger costs. However, if the carrier uses an aircraft with 50 or more seats, the carrier must have sufficient and growing demand to fill that plane on a regular basis to justify the capital expenditure and increased operating costs. Furthermore, according to officials from another air carrier, eight- or nine-seat aircraft were not designed to operate with the frequency that small carriers are using them, which can reduce reliability and increase maintenance and operating costs. Driving Distance Calculation: While communities that we interviewed cited several specific benefits of the local air service they receive, as previously discussed, some expressed concerns about specific aspects of the program.
Officials from 5 of the 17 communities we interviewed said that DOT’s calculation of the shortest driving distance between the community and the nearest large- or medium-hub airport can affect which eligibility requirements apply to them. DOT relies on the driving distance calculation to determine which communities are subject to the 10-enplanement and $200 subsidy-cap requirements. According to community officials, the easiest, safest, and quickest route from the community to the airport may be longer than what DOT has calculated as the shortest driving distance, which could make the community exempt from these requirements. For example, one community official told us that most people in the community take the expressway to the nearest hub airport, a route that covers a greater distance from the center of the community to the airport than the two-lane route DOT uses in its calculation. An official from another community we interviewed said that DOT should take into account the time required to drive the route and the safety of the roadway when calculating the distance for EAS eligibility. The official explained that the route should take 2 hours to drive but often takes much longer due to traffic and delays, and expressed concerns that the route is very dangerous. Carrier Contracts: Contracts in the EAS program take the form of DOT Orders announcing the carrier selected to serve a route and the subsidy awarded to the carrier. The Orders contain information such as the annual subsidy rate, the time frame for service, and various carrier requirements. Officials from 6 of the 17 communities we interviewed said that the structure of DOT’s contracts with EAS carriers can present a challenge because the communities feel the contracts give them little to no leverage over a carrier that provides unreliable service. Officials from five communities said that EAS contracts do not include performance requirements or penalties if the carrier does not meet service quality standards or targets.
As previously discussed, officials from 15 of the 17 communities we interviewed told us that they had not received quality service at some point in the EAS program, which can result in declining enplanements and, ultimately, the community losing eligibility for the EAS program. However, if a community asks DOT to cancel a contract, the community might lose air service if no other carrier is interested in providing service. DOT has stated that the EAS program already provides financial incentives for carriers to provide reliable service. For example, DOT states that its “no fly, no pay” policy encourages carriers to complete flights because DOT reimburses carriers only for flights that they actually operated. DOT also believes that carriers have financial incentives to increase completion rates above the rate estimated in their proposals: because carriers’ proposed rates frequently account for predictable flight cancellations, carriers have an incentive to beat their own estimates. Furthermore, carriers have a financial incentive to provide quality service to avoid losing enplanements and to maintain a financially viable service. Stakeholders Suggested Several Options for Changing the EAS Program to Improve Service, but Some Would Likely Increase Costs The communities and air carriers we interviewed suggested potential reforms to EAS that they believed would improve service to their communities. Several of these changes would likely result in increased program costs. Change the subsidy cap: Officials from two communities and four carriers we interviewed said that the $200 per-passenger-subsidy cap should be changed, either by indexing the cap to inflation or by increasing the cap temporarily for a community to allow a carrier more flexibility to develop a market for new service or to account for higher labor costs. Since the subsidy cap is established in statute, revising it would require a legislative change.
An official from one community said that increasing the cap for inflation would allow a carrier to use a larger aircraft, thereby improving use of the airport. One air carrier official said the cap needs to be increased to reflect rising labor costs. In its October 2014 notice of enforcement policy, DOT said that while it recognized the cap has not kept pace with inflation, the requirements of the statute did not give DOT the discretion to adjust the subsidy cap amount or refrain from enforcement. However, DOT issued waivers to 34 communities that did not meet the $200 subsidy cap from 2014 through 2019. If the subsidy cap had been indexed to inflation since its adoption in 2000, it would have been $283 in 2018. Of the 55 communities that were subject to the subsidy cap in 2018 because they are within 210 miles of a medium- or large-hub airport, 39 were under the subsidy cap and 16 exceeded it. Our analysis shows that if the subsidy cap were adjusted for inflation, an additional 10 communities would fall under the subsidy cap, and only 6 communities would exceed it. See figure 5. Renegotiate EAS agreements: Officials from 3 of the 10 carriers we interviewed said they should be permitted to request additional funds from DOT during the course of a contract. In 2009, we reported that allowing air carriers to renegotiate EAS contracts in response to rising costs would enable carriers to continue providing service rather than file a Notice of Termination. As previously discussed, carriers we interviewed cited airport and operating costs as challenges they have encountered over the course of an EAS contract. Legislation passed in 2003 explicitly provided DOT with the option of adjusting the subsidy paid to an EAS carrier if the carrier’s expenses substantially increased. However, DOT officials said that to date no carrier has petitioned for such an increase.
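For illustration only, the inflation adjustment discussed above can be sketched as a simple price-index scaling. The CPI-U index values below are assumptions added for this sketch, not figures from this report; the report states only that a $200 cap set in 2000 would correspond to $283 in 2018, which may rest on a different index or base month than the annual averages used here.

```python
def adjust_for_inflation(amount, index_base, index_target):
    """Scale a dollar amount by the ratio of two price-index values."""
    return amount * (index_target / index_base)

# Annual-average CPI-U index values, assumed here for illustration.
CPI_2000 = 172.2
CPI_2018 = 251.1

cap_2018 = adjust_for_inflation(200, CPI_2000, CPI_2018)
print(f"${cap_2018:.0f}")  # roughly $292 with these assumed index values
```

With these assumed annual averages the adjusted cap comes out near $292, in the same neighborhood as the $283 the report cites; the difference likely reflects the particular index and dates underlying the report's calculation.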
Revise DOT’s calculation of the driving distance: As mentioned earlier, to determine whether an EAS community is subject to the 10-enplanement-per-day and subsidy-cap requirements, DOT must determine the shortest driving distance from the center of the community to the nearest large- or medium-hub airport. Officials from 4 of the 17 communities we interviewed suggested that DOT adjust its calculation to account for local factors, such as the time required to drive the shortest route, the condition of the road, and the most common route that members of the community use to get to the nearest large- or medium-hub airport. Considering these factors could exempt some communities from the requirement for an annual subsidy per passenger of $200 or less, if the more commonly used or faster route to the nearest large- or medium-hub airport is more than 210 miles. Allow communities to regain eligibility: Officials from two communities and two carriers we interviewed suggested that, subject to the availability of funds, communities that lost eligibility for the EAS program should be allowed to regain it if they are having difficulty obtaining air service without a subsidy. Officials from one community and one carrier we interviewed said communities that lost EAS eligibility as a result of unreliable service from their carrier should not be penalized by losing EAS program eligibility. According to DOT officials, the department considers such circumstances when deciding whether to grant a community a waiver. In other instances, communities lost eligibility because they were not receiving EAS in fiscal year 2011. An official from one carrier suggested communities that regain eligibility could pay a co-share of the subsidy costs, possibly limiting the effect on the cost of the program.
Some of the options that communities and carriers suggested, such as revising DOT’s process for carrier selection and restructuring DOT’s contracts with carriers, could address challenges in the EAS program without necessarily increasing program costs. Revise DOT’s process for carrier selection: Officials from 3 of the 17 communities and 4 of the 10 carriers we interviewed suggested that DOT adjust its method for carrier selection to account for factors such as the carrier’s financial viability, ability to comply with enplanement requirements, and agreements with mainline carriers, as well as the number of available pilots and mechanics, in order to ensure that carriers are capable of providing good service to EAS communities. In addition, officials from one community suggested that DOT give more weight to community preferences regarding carrier selection. While DOT is required to consider factors such as service reliability, interline agreements, and carrier financial and operating fitness when selecting a carrier, most of the communities we interviewed cited the quality of service they have received through the EAS program as a challenge. Include performance measures in DOT’s contracts with air carriers: Officials from four communities and one carrier suggested that DOT include performance measures in EAS contracts to ensure carriers are held accountable for providing a given level of service and are subject to penalties for not meeting service quality targets. For example, one community official suggested that on-time performance and the percentage of flights cancelled could be included as performance measures for EAS carriers. Officials from three communities and one carrier suggested that DOT include more requirements for service to EAS communities.
For example, DOT could require that EAS carriers provide service to large-hub airports and have agreements with mainline carriers that could enhance quality of service; however, an official from one air carrier told us the carrier was reluctant to enter into agreements with smaller air carriers that serve EAS communities because it did not want its reputation to be negatively affected if the smaller carrier did not provide reliable service. An official from another carrier suggested that it is beneficial for carriers to enter into longer contracts because they can spend more time building the air service market for the communities they serve rather than renewing contracts. The official said that for longer contracts DOT should include performance measures that require the carrier to provide a minimum level of reliable service or lose the route. Limit airport fees for EAS carriers: Officials from 3 of the 10 carriers we interviewed thought DOT should limit the fees airports charge to EAS flights, such as landing fees and gate charges, in order to increase the financial viability of EAS routes. Airport fees can be based on any number of factors, including the weight of and number of seats on the aircraft. FAA’s policy on establishing airport charges recognizes that airports are allowed to charge fees to help ensure their financial viability but states that those fees should be reasonable and not unjustly discriminatory. FAA’s policy further indicates that the issue of rates and charges is best addressed at the local level by agreement between users and airports. Change EAS from a carrier subsidy program to a community grant program: Officials from three communities we interviewed thought that, similar to the Alternate Essential Air Service (AEAS) program, DOT could consider providing a grant to a community in lieu of traditional EAS to allow the community more control over the service it receives.
For example, an official from one community said that they liked the additional control the AEAS program has given the community over the service and that AEAS gives the community more weight with the carrier when there is a complaint about the service. Officials from three air carriers told us that a potential downside to this option is that it would be more complicated because carriers would need to work with individual communities for payment instead of just DOT. In addition, officials from three communities told us that they lack the technical expertise needed to effectively administer such a program. Agency Comments We provided a draft of this report to DOT for review and comment. DOT provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report please contact me at 202-512-2834 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: List of Entities GAO Interviewed Appendix II: Federal Laws Enacted Since 2010 That Affect the Essential Air Service Program Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Cathy Colwell (Assistant Director); Stephanie Purcell (Analyst in Charge); Amy Abramowitz; David Hooper; Bonnie Pignatiello Leer; John Mingus; Dominic Nadarski; Malika Rice; Pamela Snedden; Laurel Voloder; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study Congress established EAS as part of the 1978 deregulation of the U.S. airline industry. Through the EAS program, DOT provides subsidies to airlines to make service available to communities that airlines would otherwise not serve. Since 2010, several statutory changes have limited eligibility for EAS subsidies by, among other things, changing eligibility requirements. In spite of these changes, program costs have continued to rise, prompting questions about whether additional modifications should be made. A provision in the Federal Aviation Administration Reauthorization Act of 2018 directed GAO to examine several aspects of the EAS program. This report discusses, among other objectives, (1) how federal laws enacted since 2010 have affected air service to communities funded through the program; and (2) challenges that communities and air carriers face with EAS, and options for reform. GAO reviewed relevant federal laws, DOT orders, and DOT program data. GAO also interviewed representatives, such as airport managers and local government officials, from 17 communities that have participated in EAS; representatives from 10 of the 11 air carriers that participate in the program; and DOT officials. This report focuses on the EAS program as it operates in the contiguous United States, as there are different rules for EAS in Alaska and Hawaii. What GAO Found Statutory changes since 2010 have reduced the number of communities eligible for subsidized air service through the Essential Air Service (EAS) program; however, the Department of Transportation (DOT) granted waivers to most of the communities that applied, resulting in little change in the number of communities receiving EAS. In 2012, statutory changes limited eligibility for the program in the contiguous United States to those communities receiving EAS in fiscal year 2011. 
Further statutory changes set a maximum average per-passenger subsidy, and a minimum number of passengers, that some communities would have to meet to retain eligibility. DOT also resumed enforcing the $200 per-passenger subsidy cap for certain communities. While these changes limited eligibility, in some cases the changes also gave DOT the option of providing waivers—most of which DOT granted. Thus, as noted, the overall number of communities receiving EAS remained about the same; however, EAS expenditures increased from $161 million in fiscal year 2010 to $277 million in fiscal year 2018 (see figure). DOT officials said this increase was due, in part, to factors affecting the entire airline industry, such as increased labor wages. Community officials and air carriers GAO interviewed cited several challenges associated with EAS and suggested options for reform. For example, some carriers said it was difficult to find and retain pilots due to an insufficient supply of qualified pilots. At the same time, pilot wages have increased, making it difficult to provide quality service without exceeding the subsidy caps. Some carriers and community officials noted that the $200 subsidy cap has not changed for several years to account for inflation or these increased costs. To address these and other challenges, stakeholders suggested a number of options, such as indexing the $200 subsidy cap to inflation or allowing communities that lost eligibility to re-apply for the program. Several of these reforms would result in additional program costs.
Background Marine debris originates from multiple sources and types of materials, entering the marine environment in a variety of ways, as shown in figure 1. Most plastics do not biodegrade, that is, decay naturally and become absorbed by the environment. Instead, plastics slowly break down into smaller and smaller fragments, eventually becoming what are known as microplastics. Microplastics are very small pieces of plastic that are generally less than 5 millimeters in size (about the size of a sesame seed). The formation of microplastics occurs when plastic debris is exposed to sunlight and the plastic begins to weather and fragment. Microplastics have been found in the stomachs of numerous aquatic organisms, including insects, worms, fish, and clams, according to a 2018 study. A study from 2011 showed that once animals ingest microplastics, the particles can be stored in tissues and cells, providing a possible pathway for the accumulation of contaminants and potentially harming the animals. Derelict fishing gear. Derelict fishing gear includes pots and other recreational or commercial fishing equipment that has been lost, neglected, or discarded in the marine environment. According to the Global Ghost Gear Initiative, at least 640,000 tons of derelict fishing gear enters the ocean each year, a weight equivalent to two Empire State Buildings. Derelict fishing gear may entrap sea life, adversely affect marine habitats, present hazards to navigation, and cause other harmful effects (see fig. 2). For example, according to a 2015 NOAA report, derelict fishing gear threatens a variety of fish, turtles, seabirds, whales, and seals, and may be especially problematic for endangered and protected marine species. Abandoned and derelict vessels. Abandoned and derelict vessels are vessels without identified ownership, in significant disrepair, or both.
There are thousands of such vessels in ports, waterways, and estuaries around the United States that have been left to deteriorate by the owner or operator or are the result of a catastrophic weather event, according to NOAA documents. Abandoned and derelict vessels can impede marine transportation by blocking navigable waterways, and, if not visible or well-marked, could pose collision risks to vessel operators. These vessels may also become sources of pollution since they may contain fuel oil or other hazardous materials that can leak into the water as the vessels deteriorate, impacting the local community, marine life, and nearby habitat. Marine debris has garnered increasing interest from the international community. In September 2015, the United Nations General Assembly unanimously adopted an agenda with a set of global sustainable development goals through 2030. One of the goals (goal 14) calls for conservation and sustainable use of the oceans, seas, and marine resources, and includes a target for prevention and significant reduction of marine pollution of all kinds, including marine debris, by 2025. In June 2018, five members of the Group of Seven and the European Union endorsed the Group’s Ocean Plastics Charter, which committed them to accelerating implementation of the Group of Seven Leaders’ Action Plan to Combat Marine Litter, previously agreed to in 2015. The United States and Japan were the two members of the Group of Seven that did not endorse the charter. Also, in May 2019, the parties to the Basel Convention on the Control of Transboundary Movements of Hazardous Waste and Their Disposal adopted a decision that would, beginning January 1, 2021, require parties to take appropriate measures to ensure that certain plastic waste is reduced to a minimum, taking into account social, technological and economic aspects, among other things. Marine Debris Act The Marine Debris Act governs the activities of the interagency committee. 
For example, it required the interagency committee to issue a report to Congress that included recommendations to reduce marine debris domestically and internationally. In 2008, the committee submitted an interagency recommendation report that contained 25 recommendations intended to guide the federal government’s strategies for addressing marine debris (see appendix II for a list of the 25 recommendations). The recommendations were categorized by an overarching topic, such as education and outreach or cleanup. Within each category, the committee then identified specific recommendations. For example, within the education and outreach category, the committee specified three recommendations: Demonstrate leadership by distributing educational materials to personnel on the sources and impacts of marine debris as well as methods for prevention with the goal of reducing the federal contribution to marine debris. Support public awareness campaigns by providing technical expertise and educational materials and by encouraging private sector participation, when appropriate. Engage and partner with state, local, tribal and nongovernmental entities to support coordinated events, such as Earth Day, the International Coastal Cleanup, and other activities that have relevance to marine debris. The act also requires the interagency committee to submit biennial reports to Congress that evaluate progress in meeting the purposes of the Marine Debris Act. Specifically, these biennial reports are to include: the status of implementation of any recommendations and strategies of the committee and analysis of their effectiveness, and estimated federal and nonfederal funding provided for marine debris and recommendations for priority funding needs. Starting in 2010, the interagency committee has issued five biennial reports to Congress, issuing its most recent report in March 2019. The Marine Debris Act designates six federal agencies as interagency committee members. 
The six agencies are NOAA, EPA, U.S. Coast Guard, U.S. Navy, Department of State, and Department of the Interior. The act also specifies that the committee shall include senior officials from other federal agencies that have an interest in ocean issues or water pollution prevention and control as the Secretary of Commerce determines appropriate. The act designates the senior official from NOAA to serve as the chair. Interagency Committee Coordinates through Meetings, but NOAA Does Not Have a Process for Determining Committee Membership and Agency Representation The interagency committee coordinates primarily through quarterly meetings where agencies share information about their individual activities related to addressing marine debris. Such activities range from education and outreach to research and technology development and are generally driven by the missions and authorities of the agencies. However, we found that NOAA has not established a process to determine the committee’s membership. In addition, the Marine Debris Act requires the interagency committee to include a “senior official” from member agencies, but NOAA has not determined the level of official it would consider senior. Interagency Committee Holds Quarterly Meetings to Share Information about Individual Agency Activities Such as Education and Outreach The interagency committee coordinates primarily through quarterly meetings where federal agencies share information about their individual marine debris-related activities. According to its charter, which was last revised in 2014, the committee is responsible for sharing information, assessing and implementing best management practices, and coordinating interagency responses to marine debris. 
The charter states that the interagency committee will ensure the coordination of federal agency marine debris activities nationally and internationally as well as recommend research priorities, monitoring techniques, educational programs, and regulatory action. The charter also states that the interagency committee will work to consider the interests of nongovernmental organizations, industry, state governments, Indian tribes, and other nations, as appropriate. NOAA officials said the main focus of the interagency committee has been to serve as an information-sharing body. The officials said they also seek opportunities to collaborate on individual projects, but the committee does not otherwise collaborate on activities, beyond compiling statutorily required biennial reports. NOAA officials explained that individual agencies each have a unique set of authorities and missions that largely determine their role and involvement in marine debris-related issues. For example, under its Marine Debris Program, NOAA conducts a variety of education, outreach, research, and other activities to identify sources of and address marine debris. In recent years, congressional committee reports accompanying NOAA’s annual appropriations have directed the agency to spend a certain amount of its appropriations on its marine debris program. Specifically, these reports directed NOAA to spend $7 million in fiscal year 2018 and $7.5 million in fiscal year 2019 for its Marine Debris Program. The program is also authorized to award grants to, and enter into cooperative agreements and contracts with, eligible entities to identify the sources of, prevent, reduce, and remove marine debris. In contrast, officials from other agencies on the interagency committee said their agencies have not received such direction or specific appropriations to address marine debris. Rather, the activities these agencies have conducted generally tie to their authority or agency mission. 
For example, EPA officials said they have relied on voluntary partnerships with states, industry, and other sources and leveraged existing funds from related programs, such as the agency’s stormwater and water quality programs, to support its Trash Free Waters Program. This is a program that encourages collaborative actions by public and private stakeholders to prevent trash from entering water. EPA officials said they also support a number of other activities related to education, outreach, and research, and these activities are a high priority for the agency, but EPA does not have a line item in its budget dedicated to marine debris activities. The interagency committee’s biennial reports describe general types of activities individual agencies reported conducting—often in coordination with nonfederal partners such as nongovernmental organizations, industry, states, Indian tribes, and other nations—to address marine debris, which include activities in the following categories: (1) education and outreach; (2) legislation, regulation, and policy; (3) cleanup; (4) research and technology development; and (5) coordination (see table 1 for descriptions of types of activities in each category; see app. III for specific examples of activities carried out by agencies). To help agencies share information, NOAA chairs quarterly meetings where agencies are invited to discuss their individual activities. In reviewing meeting minutes, we found that the meetings were generally well-attended by representatives from multiple agencies. During the meetings, officials discussed marine debris issues and some provided updates on their agencies’ activities. For example, at the April 2019 meeting, officials discussed ways in which different agencies may be meeting the sense of Congress on international engagement in the Save our Seas Act of 2018. At the May 2018 meeting, officials from NOAA and U.S. 
Coast Guard gave presentations on their agencies’ emergency response authorities and efforts. NOAA officials described their actions in response to Hurricanes Harvey, Irma, and Maria in 2017, which included coordinating debris removal activities across federal and state agencies, such as EPA and Florida State’s Department of Environmental Protection. U.S. Coast Guard officials also presented information on their marine debris removal activities in response to Hurricanes Irma and Maria. These activities included coordinating with multiple federal, state, and local agencies and contractors to remove or mitigate potential environmental impacts from 2,366 damaged or derelict vessels in Florida and the Florida Keys after Hurricane Irma and 377 vessels in Puerto Rico and the Island of Vieques after Hurricane Maria, according to U.S. Coast Guard officials. The interagency committee has also used its quarterly meetings to identify opportunities for collaboration among federal agencies and with nonfederal partners, according to NOAA officials. For example, during committee meetings in early 2018, NOAA, the National Park Service, and the Department of State identified an opportunity to collaborate with the German government to bring the Ocean Plastics Lab to the United States. This Lab is an international traveling exhibition that explains the role of science in helping to understand and address plastic pollution in the ocean. NOAA officials said that to collaborate on this effort, officials from three federal agencies served on a steering committee, leveraged volunteers, promoted the Ocean Plastics Lab through outreach efforts to the public and helped staff the exhibits while they were on display in Washington, D.C., during the summer of 2018. NOAA Has Not Established a Process for Determining Interagency Committee Membership and Agency Representation We found that NOAA has not established a process to determine interagency committee membership. 
The Marine Debris Act designates six federal agencies as members of the committee, and also specifies that committee members shall include senior officials from other federal agencies that have interests in ocean issues or water pollution prevention as the Secretary of Commerce determines appropriate. The committee’s 2014 charter lists five agencies as members in addition to the six identified in the act, for a total of 11 member agencies. The charter also states that the committee consists of representatives from “any other federal agency that has an interest in ocean issues and water pollution prevention and control,” but does not specify the process for documenting membership or how the Secretary of Commerce, or a delegate of the Secretary, will determine that such membership is appropriate, as required by the act. Various information sources, such as the committee’s biennial reports and minutes from quarterly meetings, have provided differing lists of committee member agencies. For example, the committee’s March 2019 biennial report and NOAA’s website as of July 2019 listed the 11 agencies identified in its charter as members. However, minutes from meetings held in fiscal year 2019 listed up to 13 member agencies. One agency, the U.S. Agency for International Development (USAID), has regularly attended the committee’s quarterly meetings since early 2018, when USAID officials said they were invited to participate on the committee. USAID officials said that their understanding is that USAID is a member of the interagency committee and that this is especially important to recognize given the agency’s significant international development assistance related to marine debris over the last few years. However, USAID is not listed as a member on NOAA’s website, and the agency’s marine debris-related activities are not included in the committee’s 2019 biennial report.
As a result, some agencies may not be included in the required biennial reports on the committee members’ marine debris activities. In April 2019, NOAA officials told us that USAID was a contributing member to the interagency committee. The officials said that “official” member agencies are the six agencies designated by the Marine Debris Act and that they consider other participating agencies to be “contributing” members. They said it has been the practice of the interagency committee to enable participation and coordination with other agencies, including those that may not be designated as official members. We found that NOAA does not have a documented process for determining membership on the interagency committee. NOAA officials were unable to locate records from 2006 or earlier documenting the addition of contributing agencies to the committee or documenting that the Secretary, or a delegate of the Secretary, determined that such agencies’ membership was appropriate. NOAA officials acknowledged that the agency needs to establish a documented process for determining the appropriateness of federal agencies being committee members. The officials said they have started working with NOAA’s General Counsel to formalize and document the committee’s membership process, and that the process will include a step for the Secretary of Commerce, or a delegate of the Secretary, to determine the appropriateness of additional agencies being members. However, NOAA officials did not have an estimated time frame for developing such a process. Our past work on interagency collaboration has identified the importance of ensuring that relevant participants have been included in the collaborative effort. By establishing a time frame for developing a documented membership process, NOAA and the interagency committee can benefit from capturing all members’ activities and ensure that Congress receives a complete picture of marine debris efforts across the federal government.
In addition, the Marine Debris Act requires the interagency committee to include a “senior official” from member agencies, but NOAA has not determined the level of official it would consider senior. The interagency committee’s charter states that the committee will be composed of “federal agency managers and technical experts,” but does not define what is meant by senior official. NOAA officials said that the level of engagement from agency officials has varied over time and often depends on the specific officials participating. The officials said they have had difficulty in the past getting some member agency officials to engage during quarterly meetings, and often those who do participate are not decision makers. Specifically, for some agencies, participating officials may not represent the entire agency, but rather a program within the agency, and they may not have decision-making authority, according to NOAA officials. As a result, the officials may not be able to commit agency resources, or they may be uncertain what activities their agency may be able to commit to. NOAA officials said that it may be helpful to specify the level of official needed to represent the agencies on the interagency committee. The officials said that they have been discussing potential revisions to the interagency committee’s charter, and within that broader discussion they are looking into whether the charter should specify what level of official is needed. However, NOAA officials did not have an estimated time frame for revising the committee’s charter or determining what those revisions may entail. Our past work on interagency collaboration has identified the importance of ensuring that participants have full knowledge of the relevant resources in the agency, including the ability to commit resources for their agency. 
By clarifying what is meant by “senior official,” such as through revisions to its charter, NOAA would have greater assurance that it has the full engagement of member agency officials who can speak for their agency and commit to activities.

Interagency Committee’s Reports Do Not Contain Some Required Elements

While the interagency committee’s biennial reports provide information on marine debris-related activities of individual agencies, our review found that they do not contain certain required elements. As previously noted, the Marine Debris Act requires the biennial reports to include (1) the status of implementation of any recommendations and strategies of the committee and analysis of their effectiveness, and (2) estimated federal and nonfederal funding provided for marine debris and recommendations for priority funding needs. However, we found that the biennial reports did not include an analysis of the effectiveness of the recommendations implemented or recommendations for priority funding needs.

Implementation of Recommendations and Analysis of Effectiveness

The five biennial reports the interagency committee issued from 2010 to 2019 lay out the committee’s 2008 recommendations along with a description of activities taken by individual member agencies related to those recommendations. Specifically, each biennial report references the 25 recommendations the committee first adopted in its 2008 interagency recommendation report, organized into categories (see app. II). The reports then provide a description of activities taken by individual member agencies that fell within the recommendation categories for each preceding 2-year period. However, we found that the five biennial reports do not include an analysis of the effectiveness of the implementation of the committee’s recommendations and strategies as required by the Marine Debris Act. 
Some of the descriptions of agencies’ activities include information on the number of people reached through education or outreach efforts or other quantitative information related to specific activities, but the reports do not include an analysis of the effectiveness of those activities. NOAA and EPA officials confirmed that the interagency committee did not include an analysis of effectiveness in its biennial reports, stating that undertaking such an effort is beyond the scope of the information-sharing focus of the interagency committee. NOAA officials said that they have attempted to bring member agencies together to discuss how the committee could analyze the effectiveness of its collective efforts, but this has been a challenge because each member has its own priorities and legal authority related to addressing marine debris. Activities to implement the committee’s 25 recommendations occur at each individual agency, rather than at the committee level, according to the officials. As such, NOAA officials said each member agency may evaluate the effectiveness of its individual activities and pointed to measures NOAA has in place to evaluate its Marine Debris Program. For example, NOAA estimates the amount of debris removed annually and the number of students it reaches through education and outreach efforts. EPA officials said that determining a baseline and quantifying the results of specific marine debris efforts to determine effectiveness is challenging, as is the case for other broad, nonpoint sources of pollution. For example, trash enters water bodies through innumerable water and sewer system outfalls, so EPA may focus on strategies to change people’s behavior to minimize the amount of trash entering the systems (see fig. 3). But unlike measuring emissions from a smokestack, it is difficult to determine a baseline and then measure and demonstrate progress in terms of reductions in the trash exiting through the system outfalls. 
EPA officials said they recognize the need to measure the effectiveness of their efforts related to marine debris—especially as addressing marine debris has become a high priority for the agency—but the agency has yet to determine how to measure progress across all of its various offices and programs that carry out marine debris-related activities. Within the Trash Free Waters program specifically, EPA officials said they take steps to evaluate the effectiveness of the program through a variety of means, such as seeking feedback from stakeholders. Our past work has shown that collaborative entities—including those addressing complex, cross-cutting issues—can better demonstrate progress and identify areas for improvement if they develop a means to monitor, evaluate, and report the results of their collective efforts. Developing such a means would help the interagency committee ensure that its member agencies are using their authorities and aligning their priorities in the most effective manner possible. Moreover, developing and implementing a process to analyze the effectiveness of the interagency committee’s recommendations and strategies, and reporting the results in its biennial reports as required by the Marine Debris Act, would better position the committee to determine the extent to which its efforts are making a difference in addressing the complex facets of marine debris.

Estimated Funding and Recommendations for Priority Funding Needs

The five biennial reports include some estimates of funding for marine debris-related activities, but do not identify recommendations for priority funding needs as required by the Marine Debris Act. Specifically, we found that the reports included estimates for some member agencies’ spending related to their marine debris-related activities and estimated nonfederal spending for certain activities. 
The reports also state that several member agencies conduct activities within multiple programs, offices, and projects indirectly related to marine debris efforts. These agencies do not receive annual appropriations specifically for marine debris activities but instead receive appropriations to fulfill their missions or implement programs, making it difficult to estimate exact spending related to marine debris, according to the reports. The 2019 biennial report states that the interagency committee’s recommendations for priority funding needs are reflected in the President’s budget request and operating plan for each member agency in any given fiscal year. NOAA officials said that it would be difficult to identify and communicate priority funding needs outside of these documents, particularly given the complications associated with estimating each agency’s individual spending. For example, an EPA official said that EPA’s efforts to address marine debris are decentralized and the agency does not receive an appropriation specifically for marine debris-related activities, making it difficult to determine how much the agency spends—or may need to spend—on marine debris. Moreover, NOAA and EPA officials said that because the interagency committee serves primarily as an information-sharing body and each member agency operates independently in identifying resource needs, the interagency committee has not needed to develop a process to identify recommendations for priority funding needs. However, the Marine Debris Act requires the interagency committee to include recommendations for priority funding needs in its biennial reports, and without a process to identify such recommendations, the interagency committee cannot meet that requirement. Our past work on leading collaborative practices has shown the importance of identifying and leveraging resources, such as funding, in collaborative efforts. 
By developing a process to identify recommendations for priority funding needs in its biennial reports, the interagency committee could provide Congress with required information about priority funding needs across the federal government to address marine debris.

Experts Suggested a Range of Actions the Federal Government Could Take to Most Effectively Address Marine Debris

The 14 experts with expertise in marine debris-related issues whom we interviewed suggested a range of actions that the federal government could take to most effectively address various types of marine debris. Their suggestions included increasing or improving actions already being taken by some federal agencies as well as taking new actions. The experts stressed that there is not one solution to the growing, multi-dimensional problem of marine debris. Rather, they said that a multitude of actions involving federal agencies and nonfederal partners—such as international, state, and local governments, Indian tribes, industry, and environmental groups—will need to be taken to address the issue. Experts as well as agency officials we interviewed indicated that there would be a number of factors to consider in evaluating the suggested actions. Some of these factors are overarching, applying to most or all of the actions; others relate to specific actions. For example, several experts and agency officials said that competing priorities and limited resources would be important factors to consider related to all of the suggested actions. Several agency officials also said that their agencies may not have the authority to take some of the actions suggested by the experts, and therefore new legislation would need to be enacted before they could take those actions. Additionally, some actions could result in impacts or costs to particular industries, underserved communities, or consumer groups, and understanding and identifying ways to mitigate such impacts would be important. 
Moreover, several agency officials said some actions, such as those related to waste management, may be better suited for local or state governments, which would be better equipped to deal with particular aspects of marine debris. The following are examples of actions the experts suggested that the federal government could take. We organized the actions into the following five categories, which generally correspond to the categories laid out in the interagency committee’s reports: (1) education and outreach, (2) establishment of federal requirements or incentives, (3) cleanup, (4) research and technology development, and (5) coordination.

Education and Outreach

Seven of the 14 experts suggested actions to educate or conduct outreach to the public or specific consumer or industry groups or international governments about ways to prevent, reduce, mitigate, or clean up waste that can become marine debris. A few experts emphasized that education and outreach efforts should be focused on ways to prevent trash from entering the marine environment. Examples of education and outreach actions suggested include: Domestic education and outreach. Five experts suggested different types of education or outreach campaigns the federal government could undertake to target certain domestic groups, such as consumers. One expert suggested that the federal government develop a national campaign to educate the public about marine debris. Such a campaign would develop a single message that various entities, including federal agencies and nonfederal stakeholders, could include in advertisements, social media, and other public awareness efforts. The expert pointed to similar state-led campaigns, such as “Nobody Trashes Tennessee,” a litter campaign developed by Tennessee’s Department of Transportation. This state campaign features celebrities, such as athletes and musicians, in advertisements and involves selling stickers, hats, and other items to help spread the message. 
However, the expert said that securing collaboration and agreement on a single message across federal agencies and nonfederal stakeholders could pose a challenge and that a national campaign would need a long-term commitment from all parties to be successful. NOAA officials said that national campaigns can be expensive and demonstrating results from such efforts can be difficult, especially when they are broad in nature. As a result, these officials said that NOAA’s Marine Debris Program targets its education and outreach efforts to a specific audience for a particular type of behavior change or type of debris, such as educating and training high school students to lead “Zero Litter Campaigns” in their schools and communities. International outreach. Two experts suggested actions the federal government could take to conduct outreach internationally to promote programs, policies, or technologies that can reduce marine debris. For example, one expert suggested the federal government conduct outreach to government officials in countries that have limited waste management infrastructure to demonstrate effective waste management technologies. The expert said that the federal government could partner with private sector companies to demonstrate waste-to-energy technologies, such as gasification and pyrolysis, which can convert plastic waste to fuel. According to the expert, demonstrating such technologies would provide information on their benefits, including reducing sources of waste and creating a source of energy to either use or sell. Several agency officials we interviewed agreed that international outreach efforts are critical to successfully addressing marine debris and that emphasis should be placed on assisting countries with improving their waste management practices. However, these officials said there are many factors to consider with regard to waste-to-energy technologies. 
For instance, State Department officials said such technologies may not be supported by civil organizations because of environmental concerns. Waste-to-energy technologies could also entail high upfront capital investments, and waste-to-energy facilities should adhere to strict environmental standards with monitoring and enforcement to help ensure the technology is not causing negative effects, according to agency officials. As a result, they said it may not be practical for some countries to adopt such technologies. In addition, USAID officials said that promoting waste-to-energy technology presupposes that waste is already being collected in sufficient quantity and quality to serve as a fuel for such technology, but that in some countries waste is openly dumped or burned and therefore sufficient waste may not be available. They cautioned that waste-to-energy technologies can be a part of a response to address marine debris abroad, but would not be sufficient alone.

Establishment of Federal Requirements or Incentives

Microfibers are a widespread type of microplastic; they have been found on the shorelines of six continents and in oceans, rivers, soils, table salt, and public drinking water, according to scientific studies. Microfibers enter the marine environment through various pathways. For example, microfibers are shed from synthetic clothing and other materials made of polyester and nylon. These microfibers pass through to waterways because washing machines and wastewater treatment plants typically do not have processes sufficiently refined to remove the fibers. Little is known about other potential sources of microfibers, such as carpet manufacturing; the rate of generation, such as how quickly materials break down and shed microfibers; and any health impacts to humans or wildlife.

Some experts suggested the federal government establish federal requirements for manufacturers to design certain products to minimize the chances of material becoming marine debris. 
For example, two experts suggested the federal government develop design standards for washing machine manufacturers to ensure filters are designed to prevent microfibers from entering wastewater systems and then the marine environment. Three experts suggested the federal government develop design standards to require or incentivize manufacturers to use specific amounts of post-consumer material in developing certain products. For example, one expert recommended requiring manufacturers of plastic beverage bottles to produce bottles using a minimum amount of recycled plastic. According to the expert, this would increase the demand for recycled plastic as a raw material, which in turn would reduce the likelihood that such plastic would end up as waste. The expert said that requiring the use of recycled plastic would likely impose increased costs on manufacturers because virgin plastic—the raw material typically used in producing plastic beverage bottles—is currently less expensive than recycled plastic. Such increases would likely be short term, however, because the increased demand would decrease the price after more of the recycled material is used, according to another expert. Some federal agency officials said that establishing such proposed federal design standards could be difficult due to limited existing statutory authorities. Requirements for fishing gear. Three experts suggested the federal government establish requirements to mitigate the impact of lost or derelict fishing gear in federal waters. For example, one expert suggested requiring the use of modified fishing gear, such as crab traps with biodegradable escape mechanisms that allow entrapped marine life to escape if the trap is lost or abandoned (see fig. 4). 
Requiring the use of fishing gear with biodegradable escape mechanisms would likely impose increased costs to the fishing industry, according to the expert, but those costs could be minimized if the federal government offered a subsidy to help purchase required gear. NOAA officials said that it would be challenging to require the use of certain types of fishing gear in part because of the cost to the federal government in ensuring implementation of the requirement. On the other hand, NOAA officials said they promote innovation and voluntary use of certain types of fishing gear through various efforts such as their Fishing for Energy program. Restrictions on single-use plastics. Four experts suggested that the federal government establish restrictions on the manufacturing or sale of certain single-use plastics. For example, the federal government could establish restrictions on the manufacturing and distribution of plastic bags in the form of thickness or material composition requirements, or production volume limits. Two of these experts also said that the federal government could review existing local, state, and international efforts to restrict single-use plastics to identify best practices so that these types of actions could potentially be scaled appropriately at the federal level. According to the United Nations Environment Programme, 127 countries and two states have placed various types of restrictions on the retail distribution of plastic bags as of 2018. One expert pointed to research that shows that plastic bags are one of the most abundant forms of marine debris and suggested that banning them would therefore significantly reduce the amount of debris entering the marine environment. 
NOAA officials agreed that restricting the sale of single-use plastic bags could help address the marine debris problem, but said that identifying an agency with sufficient legal authority to be responsible for implementing and enforcing any restriction would be important and could be a challenge at the federal level. NOAA and EPA officials said that it would be important to carefully determine and assess trade-offs or other potential impacts before considering these types of restrictions.

Single-use plastics, as defined by the United Nations Environment Programme, are any plastic items—such as plastic soda or water bottles—that are intended for use only once before they are thrown away or recycled. Single-use plastics can have environmental impacts when they are left in the marine environment. For example, single-use plastics may be ingested by hundreds of species of marine wildlife, such as turtles and dolphins, who mistake them for food, potentially blocking their airways and stomachs, according to a 2018 report by the United Nations Environment Programme.

Incentives for waste management. Four experts suggested actions the federal government could take to provide incentives to local governments to help them improve their waste management and recycling programs. The experts said that waste and water management is typically the responsibility of local governments, but that given the scope and scale of the marine debris problem, the federal government could use its resources to provide incentives to help local governments make improvements. For example, the federal government could provide grants or subsidies to help local governments implement best management practices, such as using trash traps to help remove debris from waterways and prevent it from becoming marine debris. 
In addition, the experts said that the federal government could provide local governments with resources to help purchase bins with lids to help prevent inadvertent loss of waste or to pay for infrastructure such as trucks and recycling facilities to improve the collection and recycling of waste. According to one expert, transporting materials from consumers to the appropriate waste management or recycling facilities is a significant barrier to achieving better waste management. EPA officials agreed with the importance of local waste management efforts. The officials emphasized that it is the agency’s mission, in part, to address management of waste to prevent trash, and management of water that carries the trash to the marine environment. The officials said that this is particularly critical for addressing marine debris since an estimated 80 percent of aquatic trash originates from land-based sources. The officials said the agency has provided some funding to local governments to implement mechanisms to capture trash before it enters waterways or to remove trash from water. They added, however, that there is no one-size-fits-all approach to working with local governments. Rather, different localities may have differing needs—such as for funding, information, or technical assistance—and EPA tries to create a climate where localities can identify and best address those needs, according to the officials.

Cleanup

Five of the 14 experts suggested the federal government support marine debris cleanup and removal activities by providing resources to organizations that coordinate cleanup projects (see fig. 5). Several agency officials said that preventing waste from entering the marine environment should be the primary focus of addressing marine debris, but cleaning up existing marine debris continues to be a critical part of the multi-faceted response to the problem, especially after severe weather events such as hurricanes. 
According to one expert, debris deposited into the marine environment around the Florida Keys after Hurricane Irma in 2017 included construction debris from demolished buildings, household items such as refrigerators and televisions, cars, and boats, among other types of debris. The expert suggested the federal government provide funding and technical assistance to state and local governments to help locate such debris. According to the expert, after a severe weather event, the distribution of debris can vary greatly with ocean and wind currents, and the debris can extend for miles into the ocean. As a result, the expert suggested that the federal government assist with conducting aerial flyovers to locate major concentrations of debris. The flyovers would employ mapping technology, such as global positioning system equipment and cameras, to locate and map the debris for removal. NOAA officials agreed with the importance of cleanup activities, particularly after severe weather events. In 2018, NOAA provided $18 million to states for the detection, removal, and disposal of debris after the 2017 hurricanes.

Research and Technology Development

Ten of the 14 experts suggested actions related to research or technology development. A few experts commended federal research efforts related to marine debris to date but stressed that additional research is needed in multiple areas. Examples of research and technology development actions suggested by experts include: Research on sources, pathways, and location of marine debris. Five experts suggested the federal government support research on identifying and understanding the various sources, pathways, and location of marine debris. For example, one expert suggested that the federal government conduct a national study to identify where waste is generated, through which types of major pathways it enters the marine environment (such as rivers or stormwater), and where the waste ends up. 
This study could include a focus on specific pathways, such as where illegal dumping occurs, which has not been researched at the national level, according to the expert. The expert said that federal agencies and others could use the results of such a study to help target education for the public, policy makers, and law enforcement officials on how to prevent and properly dispose of the types of waste that most commonly end up as marine debris. NOAA officials said that illegal dumping tends to be localized, so it may be difficult to carry out research on a national scale, but agreed with the need to better understand sources and types of marine debris since many factors contribute to the problem. Research on effects of marine debris. Four experts suggested the federal government support research to determine the effects of debris on wildlife and the marine environment as well as on human health. For example, one expert suggested that the federal government conduct or fund research to determine the effects of microplastics on human health to help the federal government and other stakeholders identify the most appropriate solutions. EPA officials said that this type of research is one among many competing areas related to marine debris research their agency has targeted. Development of technology to address marine debris. Five experts suggested actions that the federal government could take to develop new technology to help address marine debris. For example, one expert suggested that the federal government fund the development of new technology to recycle hard-to-recycle plastic materials so that these materials are less likely to end up as waste and become marine debris. The expert said that, in particular, plastic materials such as packaging used to preserve food products are not readily recyclable because the technology to recycle these types of plastics is not available or is not economically viable. 
EPA officials said that even when there is technology to recycle these types of plastics, food contamination is a problem that may prevent them from being recycled. In addition, an increased capacity for recycling may not result in a behavior change on the part of the consumer, which is another factor to consider in evaluating whether to pursue this type of action, according to the officials.

Coordination

Nine experts suggested that the federal government coordinate with local, state, federal, and international governments and other nonfederal partners to address marine debris. Experts emphasized that because marine debris is a complex issue with domestic and international impacts, it requires contributions from and coordination across these many groups. Examples of coordination suggested by experts include: Coordination with stakeholders on management of fishing gear. Two experts suggested the federal government coordinate to identify ways to prevent fishing gear from becoming a source of marine debris and causing harm to fish and other marine species. One expert suggested the federal government coordinate with stakeholders to identify and implement best management practices for responsible management and use of fishing gear. Specifically, the expert suggested that the federal government coordinate with state agencies, gear designers and manufacturers, fishermen, and other stakeholders to adopt best practices in particular locations such as in the Chesapeake Bay or Puget Sound where there are extensive commercial or recreational fisheries. The expert said it would be important to work with industry stakeholders to avoid the best practices being perceived as unnecessary government intervention. In addition, one of the experts said that adoption of best practices could incur additional costs for activities such as replacing gear, which could be minimized through government subsidies or other incentives. 
NOAA officials said these types of coordination activities align with current efforts within their Marine Debris Program. For example, in 2016 NOAA partnered with California State University and other stakeholders to encourage the adoption of best practices to prevent the loss of gear used to catch spiny lobster in the Channel Islands in California. Coordination with international governments. Four experts suggested the federal government increase its coordination internationally such as through developing international agreements and participating in multinational forums. For example, one expert suggested that the United States and other countries enter into an international agreement to prevent further release of plastic into the ocean. Under such an agreement, each country would set a target to reduce the amount of plastic released into the ocean, develop strategies and approaches to meet that target, and measure and report on progress in meeting the target. The expert said that taking actions to meet the target would incur costs and that securing commitments from countries could be difficult. However, the expert said that allowing countries the flexibility to develop their own strategies for meeting their targets could help overcome these difficulties. State Department officials said that in addition to coordination with international governments, coordination is needed with other key stakeholders such as waste management and marine debris experts, local leaders, private-sector industry and retail entities, and nongovernmental organizations. This is in part because so much of the international marine debris problem stems from waste management issues at the local level. In some countries, as in the United States, the government may not have the authority to work on waste management at the local level and as a result, understanding this complexity is an important factor to consider in coordinating internationally, according to the officials. 
USAID officials agreed that coordination with international stakeholders beyond international governments is needed and said that, given the local nature of waste management issues that contribute to the international marine debris problem, stakeholders such as local and municipal governments are also important and should be a major focus for coordination and capacity building.

Conclusions

Marine debris is a global, multi-faceted problem, and multiple federal agencies, along with nonfederal stakeholders such as nongovernmental organizations, industry, states, Indian tribes, and others, have important roles to play in addressing the problem. The interagency committee’s sharing of information about its members’ activities is a good first step to ensure the agencies are aware of their respective marine debris-related efforts. NOAA, as chair of the committee, has recognized the need to develop a documented membership process but has not established a time frame for doing so. By establishing a time frame for developing a documented membership process, NOAA and the interagency committee can benefit from capturing all members’ activities and ensuring that the committee provides Congress a complete picture of marine debris efforts across the federal government. NOAA also recognizes that it may be helpful to specify the level of the official needed to represent the agencies through revisions to its charter but has not determined what those revisions may entail. By clarifying what is meant by “senior official,” such as through revisions to its charter, NOAA would have greater assurance that it has the full engagement of member agency officials who can speak for their agency and commit to activities.
The interagency committee’s biennial reports provide information on the committee’s recommendations and individual agencies’ activities to implement those recommendations, but the reports do not include an analysis of the effectiveness of the committee’s recommendations and strategies as required by the Marine Debris Act. By developing and implementing a process to analyze the effectiveness of the interagency committee’s recommendations and strategies, and reporting the results in its biennial reports as required, the interagency committee would be in a better position to determine the extent to which its efforts are making a difference in addressing the complex facets of marine debris. Additionally, the interagency committee has not identified required recommendations for priority funding needs. By developing a process to identify recommendations for priority funding needs and including such recommendations in its biennial reports, the interagency committee could provide the Congress with required information about priority funding needs across the federal government to address marine debris.

Recommendations for Executive Action

We are making a total of four recommendations: two to the NOAA Administrator and two to the chair of the interagency committee. Specifically:

The NOAA Administrator, in coordination with interagency committee member agencies, should establish a time frame for documenting the committee’s membership process. (Recommendation 1)

The NOAA Administrator, in coordination with interagency committee member agencies, should clarify what is meant by “senior official” in the Marine Debris Act, such as through revisions to its charter. (Recommendation 2)

The chair of the interagency committee, in coordination with member agencies, should develop and implement a process to analyze the effectiveness of the interagency committee’s recommendations and strategies, and include the results in its biennial reports.
(Recommendation 3)

The chair of the interagency committee, in coordination with member agencies, should develop a process to identify recommendations for priority funding needs to address marine debris, and include such recommendations in its biennial reports. (Recommendation 4)

Agency Comments and Our Evaluation

We provided the Departments of Commerce, Defense, Homeland Security, Interior, Justice, and State; EPA; the Marine Mammal Commission; and USAID a draft of this report for their review and comment. The Department of Commerce and USAID provided written comments, which are reprinted in appendixes IV and V, respectively, and discussed below. We also received technical comments from the Departments of Commerce, Homeland Security, the Interior, and State; EPA; the Marine Mammal Commission; and USAID, which we incorporated into the report as appropriate. The Departments of Defense and Justice indicated that they had no comments. In written comments from the Department of Commerce, Commerce and NOAA agreed with our four recommendations. Regarding our first two recommendations, NOAA stated that its Administrator will establish a time frame for documenting the interagency committee’s membership process and, in coordination with the interagency committee, will define the term “senior official” through revisions to its charter so that the term can be consistently applied across all federal agency structures. In forming its definition of “senior official,” NOAA indicated that it would consider seniority requirements of similarly situated advisory committees, along with related factors such as the ability to make decisions on behalf of an agency. Regarding our third recommendation on developing and implementing a process to analyze the effectiveness of the interagency committee’s recommendations and strategies, NOAA stated that it agreed with this recommendation to the extent it can be implemented with available budgetary resources.
It indicated that the interagency committee lacks the existing resources to require and routinely evaluate the effectiveness of agency activities. Instead, individual agencies are expected to work toward implementing the interagency committee’s 2008 recommendations in accordance with each agency’s legal and programmatic authorities, mission priorities, and resource limitations. Nevertheless, NOAA stated that to the extent possible it will work with interagency committee members to identify common or easily translatable metrics for evaluating the effectiveness of its 2008 recommendations and include these in the next biennial report to Congress. Regarding our fourth recommendation, NOAA stated that it agreed with our recommendation, but noted that it does not have the authority to control the implementation of a process for identifying priority funding needs of other member agencies. It stated that the interagency committee’s recommendations for priority funding needs are already reflected in the President’s annual budget request and operating plan for each member agency. However, NOAA stated that to the extent possible, it will work with interagency committee members to develop a process for identifying priority areas, which can be reflected in each agency’s respective budgeting process and shared in the committee’s biennial reports. We agree that NOAA does not have the authority to control the implementation of a process for identifying priority funding needs of other member agencies. However, as chair of the committee, NOAA can coordinate with member agencies to develop a process that each individual member agency—under its individual authority and budgetary processes—can use to identify recommendations for priority funding needs to address marine debris. 
We believe that coordinating such information and providing it in the committee’s biennial reports could provide Congress with required information about priority funding needs across the federal government to address marine debris. In addition, in written comments from USAID, the agency said it is committed to addressing the challenge of marine debris through its programs and in collaboration with interagency committee partners. USAID stated that it has significant opportunities to play an important role in the international response to address marine debris and, as the lead federal agency on foreign assistance, has several programs that target mismanaged municipal waste in the developing world. For example, USAID stated that the agency’s Municipal-Waste Recycling Program has helped reduce land-based sources of ocean plastic waste in four of the top five contributing countries—Indonesia, the Philippines, Sri Lanka, and Vietnam—by providing small grants and technical assistance to a variety of local actors in towns and cities. USAID also stated that it greatly appreciates the work of its interagency committee partners in addressing marine debris and looks forward to continued collaboration with them. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, Homeland Security, Interior, Justice, and State; the Administrators of EPA and USAID; and the Commissioners of the Marine Mammal Commission. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. 
Appendix I: Objectives, Scope, and Methodology

This report examines (1) how the interagency committee coordinates among federal agencies and the process for determining membership and agency representation, (2) the extent to which the interagency committee’s biennial reports contain required elements, and (3) experts’ suggestions on actions the federal government could take to most effectively address marine debris. To examine how the interagency committee has coordinated among federal agencies and the process for determining membership and agency representation, we reviewed the Marine Debris Research, Prevention, and Reduction Act, as amended (Marine Debris Act), and interagency committee documents, including the committee’s 2008 report with recommendations, charter, and five biennial reports to Congress issued as of March 2019. Specifically, we reviewed meeting minutes from the interagency committee’s quarterly meetings from November 2012 through April 2019 to understand the topics and activities the committee has coordinated on and the federal agencies that have participated. We attended five of the interagency committee’s quarterly meetings (in May, September, and December of 2018, and April and July of 2019) to directly observe committee coordination among agencies during these meetings. We also reviewed documents from committee member agencies and interviewed and reviewed written responses from those agencies to obtain information on their coordination efforts. Agencies we included were those designated as members in the Marine Debris Act as well as additional agencies identified as members in the committee’s charter (see table 2). In addition, we interviewed officials and reviewed documents from the National Science Foundation, Office of the U.S. Trade Representative, and the U.S. Agency for International Development, based on suggestions from interagency committee officials.
From the committee’s 2008 report with recommendations, the five biennial reports, and other member agency documents, we summarized activities conducted by member agencies. For reporting purposes, we selected examples of agency activities from the 2016 and 2019 biennial reports (the most recently available) to illustrate interagency committee member efforts to address marine debris, choosing examples that reflect a range of activity categories and member agencies. In addition, we compared information we received about the interagency committee’s coordination to leading practices we identified in our past work on implementing interagency collaborative mechanisms. To examine the extent to which the interagency committee’s biennial reports contain required elements, we compared information contained in the committee’s five biennial reports to the statutory reporting requirements in the Marine Debris Act. Specifically, two analysts independently reviewed each of the five biennial reports to evaluate information the reports included about (1) the status of implementation of any recommendations and strategies of the committee, (2) analysis of the recommendations’ and strategies’ effectiveness, (3) estimated federal and nonfederal funding provided for marine debris, and (4) recommendations for priority funding needs. The analysts then compared and summarized the results of their analyses. We also interviewed and reviewed written responses from National Oceanic and Atmospheric Administration (NOAA) officials (in the agency’s capacity as chair of the interagency committee) and officials from other members of the committee about steps to develop the biennial reports, including the reports’ required elements. In addition, we compared information from the reports and the information we received from the officials to leading practices we identified in our past work on implementing interagency collaborative mechanisms.
To obtain suggestions on actions the federal government could take to most effectively address marine debris, we conducted structured interviews with a nongeneralizable sample of 14 experts with expertise in marine debris-related issues. We selected the experts from a list of individuals we identified through interviews with agency officials and through a snowball approach, in which we reviewed relevant literature on marine debris, such as articles the experts authored, to identify other key experts and asked experts to identify others for inclusion in this review. We also identified experts through our participation in key marine debris events, such as presenting at the Sixth International Marine Debris Conference. We considered factors such as the individual’s experience with different types of debris (e.g., abandoned fishing gear or consumer debris) or association with various sectors (e.g., academia or industry). Experts selected included: (1) academics with expertise in areas such as sources, prevalence, and transport of plastic marine debris; (2) officials representing the plastic manufacturing, food and beverage, and commercial fishing industries; (3) officials from nonprofit organizations with expertise in marine debris removal from coastal areas, litter prevention, and recycling management systems and strategies; and (4) state and local government officials from the District of Columbia, Florida, and Washington with expertise in local litter prevention efforts, derelict vessels, and lost and derelict fishing gear. We asked the 14 experts to suggest up to 5 to 10 actions the federal government could take to most effectively address different types of marine debris. We defined the term “actions” to mean any policy, program, effort, or intervention that could be taken by the federal government to prevent, remove, or dispose of marine debris.
Actions could include new actions that the federal government may not have implemented or actions the federal government may already have taken. We did not limit experts’ suggestions to actions that agencies currently have authority to implement. We do not take a position on the merits of, the necessary legal authority for, or the most appropriate entity for the actions suggested by the 14 experts. Prior to the interview, we provided experts with background information about our review, the interview methodology, and definitions for key terms to ensure that terminology was used consistently throughout all the interviews. We also reviewed this information with each expert at the start of the interview. For each action, we asked that the expert identify:

Name of action.

Type(s) of debris: Select any or all of the following types of marine debris that may be affected by the action: consumer-based, abandoned fishing gear, derelict vessels, and/or miscellaneous. If miscellaneous is selected, please explain.

Describe this action: Briefly describe this action and how it will address (i.e., prevent, remove, or dispose of) marine debris and whether it is currently being implemented by the federal agencies.

Federal agency(ies): Please briefly describe the federal agency(ies) that have implemented or could play a role in implementing the action.

Nonfederal partners: Please briefly describe the nonfederal partners the federal agencies may need to coordinate with when implementing the action (such as international, state and local governments, nonprofit groups, industry, and/or researchers).

Advantages: Briefly describe the advantages of the federal agencies implementing the action in terms of the ability of this action to address marine debris, the cost of the action, and the technical and administrative feasibility of implementing the action, or any other advantage that you believe may affect implementation.

Disadvantages: Briefly describe the disadvantages of the federal agencies implementing the action in terms of the ability of this action to address marine debris, the cost of the action, and the technical and administrative feasibility of implementing the action, or any other disadvantage that you believe may affect implementation.

Challenges: Describe any factors that may hinder this action from being successfully implemented by the federal agencies and how these factors may be overcome.

Examples: In instances where the federal agencies have previously implemented the action, please provide examples of how it helped address marine debris. If other entities that are not federal agencies have successfully implemented the action, please provide examples of how the action helped address marine debris.

Authorities: Briefly describe what legal authorities these actions would be implemented under. If new authorities are needed, please describe them.

Support: Provide any studies, reports, or research you are basing your responses on.

We conducted the interviews via teleconference between July 2018 and November 2018.
The experts suggested over 70 actions, which we organized into five categories based on common themes. Specifically, two analysts independently reviewed each expert’s description of individual actions and identified an appropriate category using decision rules the team developed. The analysts then discussed and compared their decisions. For actions the analysts categorized differently, they reviewed the decision rules together and came to agreement on the best category for a particular action. For reporting purposes, we selected several actions within each of the broader categories to provide illustrative examples of the types of actions experts suggested. Our selection of actions was based on a variety of factors, including our analysis of the number of experts that suggested similar types of actions, the detail provided by the experts, and the availability of supporting information, such as instances where an action had been taken by state or local governments. Actions suggested by the 14 experts cannot be generalized to actions that might be suggested by other experts but provide examples of actions federal agencies could take to address marine debris. We also obtained written and oral responses to questions we asked of agency officials regarding factors their agencies would need to consider in potentially implementing any of the actions identified by the 14 experts. In addition, to corroborate statements from experts and agency officials and provide additional context on marine debris, we reviewed scientific studies and documents from international organizations, such as the United Nations; academic institutions and nonprofit organizations, such as the Ocean Conservancy; and federal and state agencies to understand what is known about the types, sources, and effects of marine debris. We identified these studies and documents through various means, such as recommendations from experts and agency officials and authorship by experts.
We also interviewed individuals from academia, environmental groups, and industry actively working on marine debris issues and attended the Sixth International Marine Debris Conference held in San Diego, California, in March 2018, to gain an understanding of areas of emphasis in the marine debris community. We conducted this performance audit from October 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Recommendations from the Interagency Marine Debris Coordinating Committee’s 2008 Report

Table 3 lists the 25 recommendations contained in the Interagency Marine Debris Coordinating Committee’s 2008 report entitled Interagency Report on Marine Debris Sources, Impacts, Strategies, and Recommendations. According to this report, these recommendations are intended to guide the federal government’s strategies with respect to addressing problems of persistent marine debris. Each of the five biennial reports the committee issued subsequent to its initial 2008 report references the 25 recommendations; the committee has not revisited the recommendations to determine the extent to which any adjustments may be warranted.
Appendix III: Examples of Interagency Marine Debris Coordinating Committee Member Agencies’ Activities

The following are examples of activities members of the Interagency Marine Debris Coordinating Committee (interagency committee) reported conducting—often in coordination with nonfederal partners such as nongovernmental organizations, industry, state governments, Indian tribes, and other nations—to address marine debris, based on information from the committee’s 2016 and 2019 biennial reports and agency documents and interviews. These examples include activities from the categories outlined in the biennial reports: (1) education and outreach; (2) legislation, regulation, and policy; (3) cleanup; (4) research and technology development; and (5) coordination. The examples discussed below do not represent all activities conducted by member agencies, but rather illustrate the nature and type of activities the agencies reported conducting. In addition, the examples include activities from agencies that were identified in the interagency committee’s 2014 charter and were included in the committee’s most recent biennial reports.

Education and Outreach

Nine of the 11 member agencies reported conducting activities to support education and outreach related to addressing marine debris, such as developing and distributing educational materials, supporting public awareness campaigns, or partnering with or funding state, local, tribal, or nongovernmental education efforts. For example:

Online public education. The Trash Free Waters Program—a program established in the spring of 2013 by the Environmental Protection Agency (EPA) to encourage collaborative actions by public and private stakeholders to prevent trash from entering water—provides information to the public, including online information about actions that can be taken to reduce trash from entering waterways.
For example, in 2017, the program produced a series of eight webinars with experts on microplastics with the goal of promoting increased knowledge of the sources, distribution, and impacts of plastics and microplastics in the environment. Additional topics included research on global waste management and mismanagement of plastics, potential replacements for plastic products, and ways to improve the design of materials and products to minimize their environmental impacts.

Grants for public awareness projects. The National Oceanic and Atmospheric Administration’s (NOAA) Marine Debris Program awards grants to eligible entities to, among other things, develop projects to educate the public about various aspects of preventing marine debris. For example, in 2014, NOAA awarded one grant to Virginia’s Department of Environmental Quality to develop and implement a social marketing approach to reduce balloon debris. Balloons can end up in streams, rivers, and the oceans, where marine animals can ingest the balloons or become entangled by their attachments, causing injury or death. This project aimed to help educate the public about the importance of refraining from releasing balloons in parks or outside schools, churches, wedding venues, or other events where balloons may be common.

Sea Partners Program. Through its Sea Partners Program, established in 1994, the U.S. Coast Guard Auxiliary conducts education and outreach to provide waterway users such as boaters, fishermen, marina operators, marine industry, and the general public with information on protecting the marine environment. For example, its Sayreville, New Jersey, unit reaches an annual average audience of about 10,000 people, according to a program document, including youth groups, primary and secondary education science classes, senior citizen groups, and others. Topics presented include an introduction to marine pollution and oil spills and environmental pollution and recreational boating.
Legislation, Regulation, and Policy

Nine member agencies reported conducting activities to identify noncompliance or help ensure compliance with laws and regulations and to develop or encourage policies and programs to implement practices that address specific types of marine debris. For example:

Notice for offshore oil and gas operators. In November 2018, the Bureau of Safety and Environmental Enforcement renewed a notice for offshore oil and gas lessees and operators in the Gulf of Mexico that clarifies and provides more detail about marine trash and debris awareness training. Specifically, the notice stated that all offshore employees and contractors active in offshore operations are to complete marine debris awareness training annually. The notice further specifies that lessees and operators are to provide the bureau with an annual report that describes their training process and certifies that the training process was followed.

Criminal enforcement of environmental laws. The Department of Justice prosecuted two shipping companies in 2017 for, among other things, falsifying records regarding disposal of garbage from a ship, in violation of the Act to Prevent Pollution from Ships. Specifically, the ship’s crew was instructed to throw plastic garbage bags filled with metal and incinerator ash overboard without recording the incidents in the ship’s record book. The companies pled guilty and were, among other things, sentenced to pay a $1.5 million fine and make a $400,000 community service payment.

Policies for financing waste management infrastructure in Asia. The Department of State helped convene a meeting in Japan in 2016, under the Asia-Pacific Economic Cooperation framework, to discuss policy changes needed to overcome barriers to financing waste management infrastructure in the Asia-Pacific region to prevent and reduce debris from entering the marine environment.
The meeting brought together government officials from Asia-Pacific Economic Cooperation member economies, representatives from industry, international financial institutions, and experts. Ministers of the economic cooperation endorsed nine recommendations developed at the meeting. State Department officials said they have continued to work with Asian governments, industry, and nongovernmental organizations to encourage policy changes and spur financial support for increasing waste management infrastructure and addressing land-based sources of plastic in Asian countries. For example, at a 2017 meeting on waste management, State Department officials informed Asia-Pacific Economic Cooperation officials of the social and economic impacts of marine debris resulting from mismanaged waste in the region. Officials also said they used the meeting to connect economic cooperation officials with private sector stakeholders to encourage policy changes intended to enable private investment in waste management.

Cleanup

Eight of the 11 member agencies reported conducting a variety of activities to support the removal and disposal of marine debris, often in partnership with others, such as state governments. For example:

Debris removal grants. In 2016 and 2017, NOAA’s Marine Debris Program awarded $2.4 million in grants to 25 entities, such as state and tribal governments, in 17 coastal states and U.S. territories for projects including community cleanups, crab trap recovery, and derelict vessel removal. For example, in September 2017, the program awarded a grant to the Makah Indian Tribe to remove three sunken vessels from the Makah Marina within the Makah Tribe Indian Reservation on Washington’s Olympic Peninsula.

National Park cleanup. National Park Service staff conducted coastal cleanups across the various regions of the National Park System during 2016 and 2017.
For example, in fiscal year 2017, park officials from Biscayne National Park, located off the coast of southern Florida and composed mostly of water, partnered with the Coastal Cleanup Corporation, a nonprofit organization, to organize 252 volunteers in removing 14,000 pounds of debris from the park.

Maintaining navigation channels. The U.S. Army Corps of Engineers has authority to remove accumulated snags, obstructions, and other debris located in or adjacent to federally maintained navigation channels. The Corps’ operations and maintenance appropriation is available to pay for the removal of obstructions to navigation, and the Corps is sometimes directed to use this appropriation for drift removal. For instance, in fiscal year 2018, the explanatory statement accompanying the Corps’ annual appropriation directed the Corps to use about $9.9 million of its appropriation for drift removal in New York Harbor. Debris the Corps removes typically consists of lumber, trees and branches, large waste items like tires, and large plastic items, according to Corps officials.

Research and Technology Development

Five of the 11 member agencies reported coordinating activities to conduct or sponsor research to monitor, understand the sources of, prevent, mitigate, or reduce the effects of marine debris or to support developing new technologies, such as using more sustainable or recyclable types of materials. For example:

Research grants. Since 2006, NOAA’s Marine Debris Program has supported at least two marine debris research projects that address questions such as monitoring marine debris, identifying fishing gear improvements and alternatives, or better understanding the environmental or economic impacts of marine debris. For example, in 2016, NOAA awarded a contract to a private research and consulting firm to conduct an economic study on how marine debris affects the economies of tourism-dependent coastal communities around the United States.
The purpose of the project was to evaluate changes in tourism spending based on changes in the amount of marine debris to help prioritize areas of the United States where future prevention and removal efforts may be needed. NOAA officials said they expect the final report to be issued by the end of 2019.

Microplastics workshop. In June 2017, EPA hosted a Microplastics Experts Workshop that convened experts from academia and other federal agencies, including NOAA, the U.S. Geological Survey, and the Food and Drug Administration, to identify microplastics research needs. The effort resulted in a 2018 report that identified four main areas where additional research is needed: (1) standardization of research methods, (2) debris sources and fate, (3) ecological risk assessment, and (4) human health risk assessment. EPA is using the report to consider how the agency can best address these high-priority microplastics research needs as it develops the agency’s larger environmental research agenda, according to EPA officials.

Development of new fishing gear. In 2016, the Marine Mammal Commission awarded a grant to the New England Aquarium to test a ropeless fishing gear prototype intended to prevent whale entanglements in fishing gear. According to a document from the Commission, entanglement in fishing gear is the number one direct cause of marine mammal injury and death, including for the endangered North Atlantic right whale. The Commission has used the results of this effort to emphasize the potential for ropeless gear to reduce and prevent entanglement in meetings with lobster and crab fishermen on the east and west coasts.

Coordination

Seven of the 11 member agencies reported conducting a variety of activities to foster coordination among member agencies and with nonfederal partners, such as international, state, and local government agencies. For example:

Global Partnership on Marine Litter.
In 2012, the United Nations launched the Global Partnership on Marine Litter, a voluntary network of international governments, nongovernmental organizations, academia, private sector companies, and others with the goal of protecting human health and the global environment primarily by reducing and managing marine debris. Interagency committee members, including NOAA and EPA, are partners to the global partnership. For example, from 2012 through 2017, the NOAA Marine Debris Program Director served as the Steering Committee chair of the global partnership. EPA has coordinated with the global partnership to help develop a regional strategy for addressing marine debris in Latin American and Caribbean countries, both through in-person meetings and through work with other global partnership staff and NOAA colleagues on the steering committee. Sister Cities initiative. In 2015, the State Department announced the creation of a “Sister Cities” initiative with China to share best practices related to waste management and preventing marine debris. As part of the initiative, in November 2016, a Chinese delegation, composed of central government officials and officials from Weihai and Xiamen, visited Chicago, New York City, and San Francisco to study U.S. practices in addressing marine debris. In November–December 2017, a U.S. delegation, composed of U.S. government officials and a New York City official, visited Xiamen, Weihai, and Beijing to learn about Chinese waste management practices. The partner city relationships were formalized through memorandums of understanding to work together to address marine debris, signed between San Francisco and Xiamen in July 2016 and between New York and Weihai in December 2017. State emergency response guides and regional action plans. NOAA’s Marine Debris Program has coordinated with coastal managers, nongovernmental organizations, industry, academia, and other groups to develop state marine debris emergency response guides.
For example, in 2016 and 2017, NOAA coordinated with Florida, Georgia, Mississippi, North Carolina, and South Carolina to develop individual guides for those states. According to NOAA officials, federal, state, and local officials used the Florida response guide during the 2017 and 2018 hurricane seasons to clarify which agency has jurisdiction and to better coordinate marine debris removal efforts after an event. In addition, NOAA coordinated efforts to develop, enhance, and implement regional action plans for the Great Lakes, the Gulf of Maine, the Gulf of Mexico, the Mid-Atlantic, the Southeast, California, Florida, Hawaii, Oregon, and Washington regions. The purpose of the action plans is to bring stakeholders together to prevent and reduce marine debris throughout the United States, according to NOAA documents. For example, NOAA officials said that under the Hawaii action plan, several federal agencies and nongovernmental organizations worked together to purchase and maintain bins to collect used fishing line for recycling.
Appendix IV: Comments from the Department of Commerce
Appendix V: Comments from the U.S. Agency for International Development
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact Staff Acknowledgments In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Mark Braza, Jeanette Soares, Jason Trentacoste, and Lisa Vojta made key contributions to this report. Eric Charles; Kim Frankena; Ellen Fried; Karen Howard; Edward J. Rice, PhD; Dan C. Royer; Anne Stevens; and Sarah Veale also contributed to the report.
Why GAO Did This Study Marine debris—waste such as discarded plastic and abandoned fishing gear and vessels in the ocean—is a global problem that poses economic and environmental challenges. The Marine Debris Act, enacted in 2006, requires the Interagency Marine Debris Coordinating Committee to coordinate a program of marine debris research and activities among federal agencies. The act also requires the committee to submit biennial reports to Congress that include certain elements such as an analysis of the effectiveness of the committee's recommendations. GAO was asked to review federal efforts to address marine debris. This report examines (1) how the committee coordinates among federal agencies and the process for determining membership, (2) the extent to which the committee's biennial reports contain required elements, and (3) experts' suggestions on actions the federal government could take to most effectively address marine debris. GAO examined the Marine Debris Act and committee reports, compared committee practices with leading collaboration practices, interviewed federal agency officials, and interviewed a nongeneralizable sample of 14 marine debris experts selected to reflect various sectors and experiences with different types of marine debris. What GAO Found The Marine Debris Research, Prevention, and Reduction Act, as amended (Marine Debris Act), designates six agencies as members of the Interagency Marine Debris Coordinating Committee and specifies that members shall include senior officials from certain other agencies as the Secretary of Commerce determines appropriate. Within Commerce, the National Oceanic and Atmospheric Administration (NOAA) serves as the committee chair. The committee coordinates through sharing information about members' activities to address marine debris, but GAO found that NOAA has not established a process for determining committee membership for agencies not specifically designated in the act.
As a result, such agencies may not be included in the biennial reports required by the act, which discuss committee members' marine debris activities. NOAA officials said they plan to develop a membership process but have not established a time frame to do so. By establishing a time frame, the committee can more fully benefit from capturing all members' activities. The committee's biennial reports provide information on members' activities such as education and cleanup, but they do not contain some information required by the Marine Debris Act. Specifically, the reports do not include (1) an analysis of the effectiveness of the committee's recommendations and strategies to address marine debris and (2) recommendations for priority funding needs. GAO's past work has shown that collaborative entities can better demonstrate progress if they develop a way to monitor and report the results of their collective efforts and identify and leverage resources. By doing so, the committee would be in a better position to know the extent to which it is effectively addressing marine debris and provide Congress with required information about priority funding needs. Experts suggested a range of actions—from research to cleanup—the federal government could take to most effectively address marine debris. They stressed that there is not one solution to the growing problem (see figure). Committee officials noted factors to consider, such as cost, when evaluating these actions. What GAO Recommends GAO is making four recommendations, including that NOAA establish a time frame for documenting membership and that the committee develop processes to analyze the effectiveness of its efforts and identify priority funding. The agency agreed with GAO's recommendations.
Background The Constitution gives Congress the power to coin money, and under that authority, Congress has specified the coins that can be produced and the metal composition of circulating coins, including the penny, nickel, dime, quarter, and half-dollar. Congress has also passed legislation prohibiting the use of appropriated funds to redesign the $1 note. Within the Department of the Treasury (Treasury), the Bureau of Engraving and Printing (BEP) produces notes and the Mint produces coins. To ensure that notes and coins are available in sufficient quantities to meet public demand, the Federal Reserve orders new notes from BEP and new coins from the Mint. The Federal Reserve pays BEP for the cost of producing the notes; the Mint pays for the cost of producing coins, and the Federal Reserve pays the Mint the face value of the coins. The Federal Reserve distributes the notes and coins to approximately 8,400 depository institutions—banks, savings and loans, and credit unions—in the United States through cash offices operated by its 12 regional Reserve Banks. The Reserve Banks also are responsible for ensuring the quality and integrity of notes in circulation by assessing the condition of each note and destroying any that are unfit. When a depository institution deposits currency with a Reserve Bank, each currency note is verified on high-speed processing equipment using electronic authentication and fitness sensors. During the “piece-verification” process, the deposited currency is counted, suspect counterfeit notes are identified and segregated, and unfit notes are destroyed. The fit currency is packaged and used to fill future orders for currency from depository institutions. The destroyed notes are replaced with new notes from BEP as there is public demand for cash. The federal government spent about $1.3 billion to produce, process, and circulate notes and coins in 2017.
These costs are offset by the financial benefit the government realizes when it issues notes or coins because currency usually costs less to produce than its face value. This benefit, known as seigniorage, is the difference between the face value of currency and its cost of production. In calendar year 2017, the Federal Reserve reported transferring about $81 billion to the Treasury, and the Mint reported transferring about $269 million in fiscal year 2017. The seigniorage the Federal Reserve and the Mint pay into the Treasury reduces the need for the government to borrow money, and as a result, the government pays less interest over time. Other countries have taken steps to reduce currency costs by replacing notes with coins of the same value and eliminating the smallest-value coin. For example, Canada introduced a $1 coin in 1987 and a $2 coin in 1996 that replaced the corresponding notes, and the United Kingdom replaced its £1 note with a £1 coin in 1983. These countries expected a cost reduction because, while coins are generally more expensive to produce than notes, the coins can last substantially longer in circulation. For example, in both countries, the $1 and £1 notes, respectively, lasted 18 months or less while coins, according to experts, can be expected to last more than 30 years. As a result, these countries’ governments expected to save money because, over 30 years, the number of coins they would produce would be far less than the number of notes they otherwise would have made. These countries may have realized further financial benefits by replacing notes with coins because the public may hold more cash if a note is replaced with a coin and, as a result, the government would achieve a greater benefit from seigniorage.
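The seigniorage described above is straightforward arithmetic: face value minus unit production cost, multiplied by the number of units issued. The sketch below illustrates the calculation with hypothetical volumes and unit costs (the figures are illustrative, not BEP, Mint, or Federal Reserve data); it works in cents to avoid floating-point rounding.

```python
def seigniorage_cents(face_value_cents, unit_cost_cents, quantity):
    """Seigniorage: face value minus production cost, summed over all units issued."""
    return (face_value_cents - unit_cost_cents) * quantity

# Hypothetical illustration: 1 billion $1 notes at an assumed 6-cent unit cost.
benefit_cents = seigniorage_cents(100, 6, 1_000_000_000)
print(f"${benefit_cents / 100:,.0f}")  # → $940,000,000
```

The same function shows why the penny and nickel lose money: when the unit cost exceeds the face value, seigniorage is negative.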
As we reported in 2011, because of differences in how people use coins and notes, the public may hold more than one coin for each note being replaced. Because people often store coins at home but carry notes in their wallets, coins circulate less frequently than notes, and more coins are therefore needed to meet public demand. Thus, for a given denomination of currency, a larger number of coins would need to be maintained in circulation to meet the public’s demand for cash than would be needed if that denomination were provided in notes. For example, we previously reported that when Canada replaced its $1 note and the United Kingdom replaced its £1 note with a coin, both countries anticipated they would need to produce 8 coins to replace 5 notes, or a 1.6-to-1 replacement ratio. In previous work, we reported a positive annual net benefit to the government of replacing the $1 note with a $1 coin. In 2011, we reported a 30-year net benefit of $5.5 billion. Based on these results and the experiences of other countries, we have previously recommended that Congress consider and pass proposals to replace the $1 note with a $1 coin and, to ensure success of the coin, also provide for the elimination of the $1 note. While the production of the $1 coin has been authorized in law, elimination of the $1 note has not been, and the U.S. has continued producing the note. The U.S. has not eliminated any coins or altered any coin’s metal composition since 1982. Some countries have also eliminated their low-denomination coins to reduce currency costs. In 2013, Canada eliminated its one-cent coin because the cost to make it was more than it was worth and the coin’s usefulness had declined due to inflation. Over time, the costs of making these coins have increased due, in large part, to increases in the costs of metals used in coins—copper, zinc, and nickel.
Since fiscal year 2006, both the penny and nickel have cost more to produce than their face value, according to our analysis of Mint data. (See fig. 1.) For example, in 2017, the Mint spent approximately 1.8 cents to produce each penny and approximately 6.6 cents to produce each nickel. Because the Mint sells coins to the Federal Reserve at face value, both coins cost more to produce than the Mint receives for them. As a result, in 2017, the Mint incurred net losses of about $69 million to produce the penny and about $21 million to produce the nickel. The dime and the quarter, however, cost less to produce than their face value. The combined cost to produce all widely circulating coins (the penny, the nickel, the dime, and the quarter) is less than their combined face value, so the government continues to realize positive seigniorage overall from producing circulating coins. The Coin Modernization, Oversight, and Continuity Act of 2010 authorized the Secretary of the Treasury to conduct research on alternative materials that could be used in coins. In response, the Mint conducted research on alternative metals, identified metal alloys that offered the potential for cost savings, and reported its results to Congress in 2012, 2014, and 2017. Replacing the $1 Note with a Coin Would Likely Result in a Net Loss, and Selected Stakeholders Identified Little Benefit from Replacement Replacing the $1 Note with a $1 Coin Would Likely Result in a Net Loss According to our analysis, the government would likely incur a net loss over 30 years if it replaced the $1 note with a $1 coin. We conducted a number of simulations that used different sets of assumptions to estimate the net benefit to the government of replacing the $1 note with a $1 coin. In almost every simulation, the net benefit to the government from switching to a $1 coin was negative, or an overall net loss (see app. I). 
For each set of assumptions, we simulated the status quo scenario in which notes are not replaced by coins, as well as two replacement scenarios. Under “gradual replacement,” the Federal Reserve would replace $1 notes with $1 coins as the notes became unfit for circulation. Under “active replacement,” notes would be replaced by coins more quickly because the Federal Reserve would destroy unfit notes as well as some fit notes each year and replace them with $1 coins. In both replacement scenarios, we assumed that the public would increase its holdings of cash when coins are used instead of notes and that the replacement ratio would be 1.5 coins for each note. We found that the present value of the net loss incurred by the government over 30 years would be about $2.6 billion with gradual replacement and about $611 million with active replacement (see fig. 2). Each simulation we conducted accounts for both costs and benefits to the government. The costs include production and processing costs for $1 coins and $1 notes, as appropriate. The coin replacement scenarios each include one-time startup costs that would be incurred upfront, in addition to recurring increased costs of producing higher-denomination notes once the $1 note is no longer made. In each simulation, we calculated benefits to the government as interest savings on debt that would be avoided because of seigniorage, or the difference between the face value of the currency that would be produced and the cost of producing it. These simulations represent the first time we have found that replacing the $1 note with the $1 coin would result in a net loss to the government rather than a net benefit. The simulations are based on current data and projections from the Congressional Budget Office (CBO) and the Federal Reserve, among others, that have changed over time.
For example, the increased lifespan of the $1 note relative to that of the $1 coin and the decreased cost to the Federal Reserve for processing currency are key factors in these estimates and substantially reduced the relative costs of the status quo scenario. For our 2011 report, we assumed a median lifespan of 3.3 years for the $1 note based on Federal Reserve data. Since then, the $1 note lifespan has increased, and our current simulations assume a median lifespan of 7.9 years based on the most recent data from the Federal Reserve. Due to this substantially longer note lifespan, fewer $1 notes need to be produced over a 30-year period, which reduces the cost of producing them and diminishes the relative advantage of the long coin life. In our 2011 simulations, a $1 coin was assumed to last about 10 times as long as a $1 note (34 years to 3.3 years); in our current simulations, the lifespan of the coin remains the same but is now only about 4.3 times as long as that of the note (34 years to 7.9 years). Meanwhile, the relative cost of producing coins and notes has remained about the same. According to the Federal Reserve, the increased lifespan of the $1 note is largely attributable to a series of improvements in Federal Reserve currency processing procedures and equipment that has reduced the number of notes destroyed each year. For example, prior to April 2011, depository institutions were required to deposit currency in stacks of like- notes with the portrait side of the note facing up. After discovering it was destroying many notes that were otherwise fit for circulation because they were “misfaced,” the Federal Reserve undertook an effort to increase the percentage of notes that were properly faced by manually checking and correcting notes’ orientation. Subsequently, during 2010 and 2011, the Reserve Banks installed new sensors on their high-speed processing equipment, which enabled the Reserve Banks to authenticate notes regardless of facing. 
In addition to increased note life, the costs that we anticipate the Federal Reserve would incur for processing notes have decreased since our 2011 analysis because it is processing fewer $1 notes. Although the cost per note for processing has remained the same—$0.003 per note, based on Federal Reserve data—the number of notes processed in 2017 was about 1.6 billion fewer per year than at the time of our 2011 analysis. According to Federal Reserve officials, the public may be handling and using $1 notes less and holding on to them longer. This could cause notes to circulate less frequently, reducing the number of notes processed. Our simulations show that the losses to the government from replacing the $1 note with a $1 coin would not be incurred evenly over the 30-year period. Much of the cost of producing coins to replace notes would be borne by the government in the earlier years of our simulations, while the benefits to the government would accrue gradually and become relatively more important in later years. For example, in the gradual replacement scenario, more than half of the net loss to the government occurs in the first 10 years of the 30-year period. The large net losses in the early years largely reflect the upfront costs of replacing $1 notes in circulation with $1 coins and meeting increased demand for currency. In our simulations, the interest savings then accrue over a relatively long period of time due to the 34-year median lifespan of the coin. Our simulations reflect uncertainty in the underlying projections and assumptions. In general, however, projections that are closer in time are more certain. For example, an estimate over a 10-year period would be more certain than an estimate over a 30-year period. Consequently, within our results, the estimated net loss in the first 10 years is more certain than the estimated net loss over the 30-year period.
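The structure of these simulations, yearly net benefits discounted to a present value over the analysis horizon, can be sketched as follows. The cash flows and 3 percent discount rate below are made-up placeholders, not the report's actual inputs, but they show how large upfront replacement costs can outweigh interest savings that accrue only in later years.

```python
def present_value(annual_net_benefits, discount_rate):
    """Discount a stream of annual net benefits (years 1 through n) to a present value."""
    return sum(
        benefit / (1 + discount_rate) ** year
        for year, benefit in enumerate(annual_net_benefits, start=1)
    )

# Hypothetical replacement scenario, in millions of dollars over 30 years:
# heavy upfront costs (negative net benefits) followed by gradual interest savings.
flows = [-500.0] * 10 + [150.0] * 20
net = present_value(flows, 0.03)
print(round(net, 1))  # negative: the discounted costs exceed the discounted savings
```

Because discounting shrinks later-year amounts, the early-year losses in a scenario like this weigh more heavily than savings of equal nominal size received decades later.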
Most Stakeholders We Interviewed Said Switching to a $1 Coin Would Result in Added Costs without Providing a Benefit Representatives from 7 of the 10 stakeholder industries we met with stated that they would be negatively affected by a switch to a $1 coin because they would incur additional costs as a result of such a change. For example, representatives from the armored carrier industry told us that they anticipate increased costs because of the additional weight of transporting $1 coins compared to $1 notes as well as the need to modify or procure additional coin-processing equipment. Representatives of the gaming industry, which includes casinos and companies that make electronic games found in casinos, said a switch to the $1 coin would be costly because the industry has generally moved away from the use of coins in favor of notes, and casinos would incur additional costs for transporting and storing coins. Of the 7 stakeholder industries that said they would incur additional costs, 3 provided us with estimates of these costs. All 3, which represent industries with machines that would require modification to accept $1 coins, approximated these costs by multiplying an estimated number of units affected by an estimated per-unit cost of changing the machines. For example, a representative of the gaming industry estimated that about 98 percent of the approximately 1,000,000 electronic gaming machines in the U.S. and Canada were manufactured with no provision for accepting coins. According to this representative, the costs to convert machines to accept $1 coins could range from $130 to $175 per unit because the level of modification needed would vary. Some machines would require, for example, a newly designed faceplate, a coin acceptance mechanism, and a box for collecting coins. Most representatives from stakeholder industries said there would be no benefit to them from a switch to a $1 coin, but 3 of the 10 representatives acknowledged some benefits of doing so.
Two representatives said that coins are generally less likely to jam or be rejected by the payment mechanisms than notes. The other representative—from the bulk vending industry, which sells products such as gum balls and small toys through coin-operated equipment—said a $1 coin would help the industry increase sales and offer higher-quality products than it offers now for 25 or 50 cents. According to this representative, virtually all these machines accept quarters, but some require two or three quarters for a purchase. A $1 coin would increase the likelihood that consumers would have the necessary change to use these machines, thus increasing sales, according to this representative. Representatives from the remaining 3 stakeholder industries reported that switching to a $1 coin would have little or no impact on their operations. For example, a representative of operators of toll roads and bridges said that all major toll operators have adopted some form of cashless, electronic collection system. The use of cash, including coins, for toll payment has declined to 18 percent of all toll revenue in 2015, down from 29 percent in 2010, and most existing coin collection machinery currently accepts $1 coins. Similarly, a representative from the parking industry noted a trend toward increased use of cashless transactions along with a decrease in the number of coin-operated parking meters. A switch to a $1 coin would have minimal effect on the industry because virtually all parking meters take quarters. The remaining representative said additional information, such as whether a new $1 coin would be issued and whether it would have the same properties as currently circulating $1 coins, would be needed to determine whether the industry would incur costs from a switch to a $1 coin.
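The cost approximations stakeholders described multiply an estimated number of affected units by an estimated per-unit conversion cost. Using the gaming industry figures as an example (about 98 percent of roughly 1,000,000 machines at $130 to $175 per unit), the arithmetic works out as:

```python
machines = 1_000_000            # approximate electronic gaming machines in the U.S. and Canada
share_without_coins = 0.98      # share manufactured with no provision for accepting coins
cost_low, cost_high = 130, 175  # estimated per-unit conversion cost range, in dollars

affected = round(machines * share_without_coins)
print(f"${affected * cost_low:,} to ${affected * cost_high:,}")  # → $127,400,000 to $171,500,000
```

Both inputs are rough estimates from a single industry representative, so the resulting range is an order-of-magnitude approximation rather than a precise cost figure.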
A representative of an organization that advocates replacing the $1 note with a $1 coin said that switching to the $1 coin could make it easier for people with visual impairments to identify the denomination. We have previously reported that different denominations of U.S. currency are identical in size, making it difficult for the blind or visually impaired to distinguish among them. Moreover, according to the representative, eliminating the $1 note would reduce the number of note denominations, and the $1 coin may be easier to recognize by its physical difference from other coins. Although anyone who uses currency could be affected by a switch to a $1 coin, the extent of public support for making such a change is unclear, particularly when doing so would not provide a benefit to the government. Our most recent work on public perceptions of $1 coins in 2002 found that few survey respondents were using $1 coins and that 64 percent opposed replacing the $1 bill with a $1 coin. A majority of survey respondents favored replacing the $1 note with a $1 coin when told that doing so could save about half a billion dollars per year—our then-current estimated net benefit to the government; we did not seek to gauge public perceptions about the same action if it were to cause a loss. Similarly, the organization advocating in support of $1 coins has reported increased public interest in a change from the $1 note when substantial cost savings are factored in. However, according to Federal Reserve officials, the public continues to express its preference for the $1 note because both the $1 coin and $1 note are available and the public overwhelmingly uses $1 notes. Moreover, Reserve Banks currently hold more than 1 billion $1 coins because there is little demand for them from the public, further demonstrating public preference for the $1 note, according to these officials.
The Mint Estimates Suspending Penny Production and Changing the Nickel May Result in Savings, but Some Stakeholders Expressed Concerns about Suspending the Penny The Mint Estimates That Suspending the Penny Would Save Over $250 Million and That Changing Metal Content of the Nickel Would Save from $21 Million to $85 Million over 10 Years The Mint estimates that it would save about $27 million annually, or about $252 million in present value over 10 years, if Congress directed it to suspend the production of the penny (see table 1). However, the Mint’s estimated savings are based on its penny production data from a single fiscal year—2017. Specifically, since the Mint lost $27.3 million from making 8.4 billion pennies that year, this amount would also represent the savings to the Mint through cost avoidance if it had not produced any pennies. Because the number of pennies produced and the base metal costs vary from year to year, future changes to production volumes and costs could alter the estimated savings. The present value of the estimated savings could also be affected by the choice of discount rate. The Mint has suspended production of some coins in the past due to a lack of demand. Specifically, the Mint suspended production of the half-dollar coins for circulation in fiscal year 2006 and the Presidential $1 coins for circulation in 2011 because demand for those coins was low. In contrast, demand for the penny remains strong, as the Mint produced about 8.4 billion pennies in fiscal year 2017 in response to orders from the Federal Reserve. Penny inventories at Federal Reserve Banks can meet demand for about 1 month, according to Federal Reserve officials. According to Mint officials, the Mint has not taken a position on proposed legislation introduced in the 115th Congress that would suspend production of the penny for 10 years, among other things.
However, the Mint has developed a preliminary plan to implement a penny suspension if required to do so by law. According to this plan, suspending penny production would take place over a 2.5-year time frame: the first year would be devoted to planning and preparing for penny suspension, and the next 1.5 years would be devoted to ending the Mint’s contracts with its suppliers, addressing the disposition of affected Mint personnel, and deciding what to do with excess production equipment and physical space. The Mint would also conduct outreach and communication to the public, Congress, and Mint employees during this time. The Mint is also taking steps to reduce the financial loss from producing the penny. According to Mint officials, the Mint and the Federal Reserve are working with industry stakeholders to identify practices that would reduce dependence on manufacturing additional pennies. For example, the Federal Reserve and Mint met with stakeholders to discuss these practices in January 2019. According to Mint officials, the Mint would not need to produce as many pennies if the pennies currently in circulation were more actively circulated. Mint officials stated that billions of pennies are held by banks, armored carriers, or the public. According to Mint officials, if pennies were to circulate more quickly, demand for new pennies would fall, and the Mint could decrease production, reducing its financial losses from penny production. The Mint also estimates it would save between $2.2 million and $9.1 million annually, or between $21 million and $85 million in net present value over 10 years, by changing the metal composition of the nickel (see table 2). The Mint’s estimated savings are based on fiscal year 2017 production of 1.3 billion nickels at a cost of $86 million. The nickel currently consists of about 75 percent copper and 25 percent nickel.
Based on research, the Mint reported it would achieve cost savings by changing the metal composition to about 80 percent copper and 20 percent nickel (80/20) or by changing the metal composition to a copper, nickel, manganese, and zinc combination (C99750T-M). Because the number of nickels produced and their cost varies from year to year, future changes to production volumes and costs could alter the estimated savings. Both changes in the composition of the nickel are seamless changes because nickels made of these alloys would have the same weight and electromagnetic signature as the current nickel, according to the Mint. As a result, these nickels would function the same for the public and in vending machines. However, according to Mint data, even if the Mint changes to one of these alternative metal compositions, the unit cost of producing the nickel would likely remain greater than the face value of the coin. In fiscal year 2017, the Mint spent approximately 6.6 cents to produce each nickel, which would have been reduced to about 6.4 cents if the Mint had produced the 80/20 nickel and 5.9 cents for the C99750T-M nickel. Based on authorities granted in the Coin Modernization, Oversight, and Continuity Act of 2010, the Mint has conducted research and identified potential alternative metal compositions for the dime and quarter. This research shows that the same alloys that could reduce the cost of producing the nickel could be used to reduce the costs of producing the dime and quarter. Specifically, this research indicated potential savings of $74 million over 10 years by using the C99750T-M alloy in the dime and quarter, although additional testing of the alloy is required. Changing the metal composition of circulating coins could help the Mint achieve more effective and efficient operations by reducing production costs, resulting in savings to the government and the taxpayer. 
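The savings estimates above combine two pieces of arithmetic: an annual saving equal to the per-coin cost reduction times production volume, and a present value of that saving over 10 years. The sketch below uses the report's rounded unit costs and fiscal year 2017 volumes; the discount rate is our assumption (the report does not state the rate used), chosen because roughly 1.3 percent approximately reproduces the published $85 million nickel and $252 million penny present-value figures. Because the unit costs are rounded, the results only approximate the Mint's estimates.

```python
def annual_savings(cost_now_cents, cost_alt_cents, volume):
    """Annual savings in dollars from a lower per-coin production cost."""
    return (cost_now_cents - cost_alt_cents) * volume / 100

def ten_year_present_value(annual_dollars, rate):
    """Present value of a constant annual saving received for 10 years."""
    return sum(annual_dollars / (1 + rate) ** year for year in range(1, 11))

nickels = 1_300_000_000  # fiscal year 2017 nickel production
print(round(annual_savings(6.6, 5.9, nickels)))          # C99750T-M alloy: about $9.1 million per year
print(round(ten_year_present_value(9_100_000, 0.013)))   # roughly the $85 million upper estimate
print(round(ten_year_present_value(27_000_000, 0.013)))  # roughly the $252 million penny estimate
```

Note that the same rounded unit costs do not exactly reproduce the report's $2.2 million lower bound for the 80/20 alloy, which suggests the Mint's underlying figures carry more precision than the published per-coin costs.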
The Secretary of the Treasury and Mint officials do not have the authority to alter the metal content of coins—except the penny—as metal content is determined in statute. The Mint has sought authority from Congress to change the metal composition of the nickel, dime, and quarter, if those changes meet certain requirements. Specifically, in its fiscal year 2019 budget proposal, the Mint proposed a legislative change to its authorities that would enable the Secretary of the Treasury to alter the metal composition of coins, if those changes did not affect the weight or electromagnetic signature of the coins. This proposed change is consistent with the Treasury’s 2018–2022 strategic plan, which includes a goal to introduce efficiencies to lower the unit costs of coins produced by the Mint. Legislation supporting this proposal has not been introduced. Without the authority to change the metal composition of coins, the Mint cannot fully realize operational efficiencies, even though it has identified methods to reduce the cost of coins without altering their characteristics. Some Selected Stakeholders Expressed Concerns about Penny Suspension but None Expressed Concerns about Changes to the Nickel Government officials we spoke with raised concerns about the potential effects of a penny suspension, such as regional penny shortages or other unintended consequences. Specifically, Federal Reserve officials noted that suspending production could create a shortage of pennies if demand is greater than the supply of pennies. These officials explained that even if there are enough pennies to meet overall demand, the distribution of pennies across the country may be uneven and not matched to the location of greater demand. In this case, the Federal Reserve could incur additional costs to transport pennies to balance supply and demand across the country. 
Federal Reserve officials also said a suspension could potentially be successful if there were a reduction in penny demand and steps taken to mitigate potential disruptions to the penny supply. Mint officials expressed concern about potential unintended consequences of a penny suspension and effects on Mint operations. Specifically, according to Mint officials, suspending penny production may cause an increase in the number of coins returned to circulation because the public may react to a suspension by spending the pennies, along with the other coins, stored in its coin jars. The resulting influx of coins into circulation may be sufficient to satisfy some or all of the demand for new coins for a period of time and cause the Mint to decrease or suspend production of coins. Mint officials said that costs the Mint would incur due to a disruption of coin production operations and loss of income from seigniorage could be as high as $3 billion over 7 to 10 years. These officials also raised concerns about the ability to securely store larger-than-usual quantities of all coins because the existing infrastructure, particularly vault storage, may be insufficient. Mint officials noted that, while other countries have stopped producing coins, suspending penny production may have an impact similar to stopping production of the penny altogether. When Canada stopped producing its penny, it began to actively take the coins out of circulation, and the public knew the penny would eventually no longer be used. While the proposal to suspend penny production does not remove the penny from circulation or use in commerce, Federal Reserve and Mint officials told us that the results of suspending penny production are uncertain, partly because a suspension has not been tried before.
Representatives from 9 of the 10 stakeholder industries said they do not anticipate incurring costs if the penny were suspended; most said they were not concerned about this action because the coins are either not used or minimally used in their industry. Three selected stakeholders—associations representing armored carriers, banks, and retailers—said they would be affected by a penny suspension, as did the company that manufactures the penny blanks for the Mint. They expressed uncertainty about how the suspension would be carried out and effects it might cause, such as penny shortages, and provided the following views and information:

Armored Carriers – A penny suspension may not have a significant effect on operations since a suspension would not necessarily reduce the number of coins processed or transported, according to armored carrier representatives we spoke to. However, if penny shortages occurred, the carriers may have to move pennies from one geographic region to another to satisfy variations in demand from their customers, incurring additional transportation costs. Alternatively, suspension of the penny may cause the public to turn in pennies, along with coins of other denominations, which could exceed the secure storage capacity of carriers and coin terminals.

Bankers – According to an association representing banks, bankers are unclear if the government would issue any guidance about rounding cash transactions to avoid inconsistent approaches. Because banks have received questions from customers about changes to currency in the past, the association emphasized the need for public education before suspending the penny.

Retailers – Retailers have not determined the impact of suspending the penny on their industry, according to a retailer association.
However, many retailers sell items priced below $1 as an important part of their business and merchandising strategy, according to these representatives, so it is important for retail businesses to be able to continue to make change down to the penny at the end of cash transactions.

Vendor – A representative of the company that supplies the Mint with penny coin blanks told us that a penny suspension would force a decision whether to sell or deactivate the penny blank production equipment during the 10-year suspension. If sold, the vendor may then not have the equipment if the government decided to produce the penny again.

None of the representatives from stakeholder industries raised concerns about changes to the nickel as long as the changes are seamless.

Conclusions

Producing money for use in commerce is an important function of the U.S. government. The Federal Reserve, the Treasury’s BEP, and the Mint work together to ensure that there is an adequate supply of U.S. coins and notes for use around the world. In addition to ensuring an adequate supply of these coins and notes, it is also important to ensure that the government is producing these items efficiently. Because our current estimate shows the federal government would likely incur a net loss from replacing the $1 note with a $1 coin, we are no longer recommending that Congress consider replacing the $1 note with the $1 coin. The Treasury cannot alter the metal content of coins unless Congress provides that authority to the Treasury. If Congress were to grant the Treasury the authority to change the metal composition of coins, as the Mint has proposed, then it could use the results of its research to lower the costs of coin production while producing coins that look, feel, and function the same as current coins. Further, the Mint could decrease its production costs without affecting the characteristics of the coins.
Without this authority, the Mint cannot provide the best value to the taxpayer and produce coins in the most efficient and cost-effective manner possible.

Matter for Congressional Consideration

Congress should consider amending the law to provide the Secretary of the Treasury with the authority to alter the metal composition of circulating coins if the new metal compositions reduce the cost of coin production and do not affect the size, weight, appearance, or electromagnetic signature of the coins. (Matter for Consideration 1)

Agency Comments

We provided a draft of this report to Treasury, including the Mint and BEP, and the Federal Reserve for their review and comment. In comments, reprinted in appendix III, the Mint agreed with our matter for congressional consideration and clarified its position on the potential cost impact of a penny suspension. The Mint’s comments stated that if the penny were suspended, consumers may return large amounts of all coins, not just pennies, which would decrease the need for future coin production. Without demand for coin production, the Mint estimated costs from idle production capacity and loss of seigniorage from coins to be up to $3 billion over 7 to 10 years. The Mint also commented that the effect of suspending penny production could be the same as the effect of stopping penny production. We revised our report to reflect the Mint’s perspective. The Department of the Treasury concurred with comments provided by the Mint. BEP did not have any comments. The Federal Reserve provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Director of the U.S. Mint, the Director of the Bureau of Engraving and Printing, the Chair of the Board of Governors of the Federal Reserve System, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: GAO’s Economic Simulations and Alternative Analyses

This appendix describes the economic simulations discussed in this report, including the assumptions we used and their sources, as well as the alternative simulations we conducted.

Economic Simulations

To estimate the net effect on the government of replacing the $1 note with a $1 coin, we simulated the benefits and costs to the government of issuing currency—including both notes and coins—under different scenarios and assumptions over a 30-year period. For each set of assumptions we considered, we simulated three scenarios—the status quo scenario, in which the $1 note would continue to be produced, and two replacement scenarios, in which the $1 coin would replace the $1 note. In the gradual replacement scenario, $1 notes are replaced as they become unfit for circulation, while under the active replacement scenario, $1 notes are replaced more quickly. We then compared the net benefit to the government in each replacement scenario to the net benefit under the status quo. As part of our analysis, we also ran alternative simulations with different sets of assumptions, to examine how the assumptions underlying our analysis would affect the estimated net benefit to the government. The various assumptions include the extent to which the public would increase its holdings of cash when coins are used instead of notes, the expected rate of growth in the demand for currency over 30 years, the costs of producing and processing both coins and notes, and the life span of both forms of currency (see table 3).
In our replacement scenarios, we assumed that the replacement would be implemented starting in 2018, and during that year the U.S. Mint (Mint) would invest in new equipment to establish its production capability for $1 coins. We also assumed that production of the paper note would stop as soon as $1 coins were introduced. A key assumption in our analysis is the extent to which the public may hold more cash when notes are replaced by coins. Because of differences in how people use notes and coins, the public may need more than one coin for each note that would otherwise have been demanded. For example, people may take coins out of their pockets and store them at the end of each day, rather than retain them in their wallets as they do notes. These factors cause coins to circulate more slowly than notes, so more $1 coins would need to be maintained in circulation to meet the demand previously met by $1 notes. Consistent with simulations in our previous reports, we assumed in our economic simulations that the public would hold more $1 coins, so that more than one coin would be needed to replace each note. Therefore, our replacement scenarios use a replacement ratio of 1.5, that is, 1.5 $1 coins for each $1 note to be replaced. For our alternate simulations, we allowed the replacement ratio to vary, including a case in which no additional currency is demanded when coins are used (i.e., the replacement ratio is 1.0). As part of this sensitivity analysis, we found that a key driver of the estimated net benefit is the extent to which the public would hold more cash when $1 coins are used instead of notes.

Alternate Simulations

We altered some assumptions to simulate how the change would affect our estimate of the net benefit or loss to the government. See table 4. We present our analysis to show the effect of changes under both gradual and active replacement, and we show the results both with and without gains from seigniorage.
To assess the effect of the public’s holding more or less cash as a result of needing fewer or greater numbers of coins to replace each note in circulation, we conducted separate simulations in which we:

decreased the replacement ratio from our current estimate of 1.5 coins per note to 1 coin per note, and

increased the replacement ratio from our current estimate of 1.5 coins per note to 2 coins per note.

To assess the effect of the Board of Governors of the Federal Reserve System (Federal Reserve) not releasing into circulation the $1 coins it currently holds, we:

assumed that the approximately 1.2 billion $1 coins held by the Federal Reserve would not enter circulation and would continue to be held by the Federal Reserve.

To assess the effect of changing production costs for notes and coins, we conducted separate simulations in which we:

increased the costs of producing notes from our current estimate of 3 cents to 4.9 cents without changing the costs of producing coins;

increased the costs of producing coins from our current estimate of 14.6 cents to 17.5 cents without changing the costs of producing notes; and

increased the costs of producing both notes and coins from our current estimates of 3 cents to 4.9 cents for notes and 14.6 cents to 17.5 cents for coins.

To assess the effect of decreased demand for currency if people switched to electronic means of payment, we conducted separate simulations in which we assumed:

demand for currency grows at a slower rate—75 percent of the growth in demand in the replacement scenarios—after fiscal year 2028, and

demand for currency grows at a slower rate—50 percent of the growth in demand in the replacement scenarios—after fiscal year 2028.
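The kind of status quo versus replacement comparison described in this appendix can be sketched in simplified form. The sketch below is not GAO's model: the initial note stock, demand growth rate, and coin lifespan are illustrative assumptions, and it omits processing costs, seigniorage, and the transition-year dynamics that GAO's simulations capture. The per-unit production costs and the 1.5 replacement ratio are the base estimates stated in this appendix.

```python
# Base estimates from this appendix (dollars per unit produced)
NOTE_COST = 0.030        # cost to produce a $1 note
COIN_COST = 0.146        # cost to produce a $1 coin
NOTE_LIFE_YEARS = 7.9    # estimated $1 note lifespan
COIN_LIFE_YEARS = 30.0   # assumed: coin lasts roughly the 30-year horizon
REPLACEMENT_RATIO = 1.5  # 1.5 coins demanded per note replaced

def production_cost(stock, unit_cost, life_years, growth, years=30):
    """Total cost of replacing worn currency and meeting demand growth."""
    total = 0.0
    for _ in range(years):
        # New units needed: replacements for worn-out units, plus growth
        new_units = stock / life_years + stock * growth
        total += new_units * unit_cost
        stock *= 1 + growth
    return total

stock_notes = 12e9       # assumed: roughly 12 billion $1 notes in circulation
growth = 0.04            # assumed: 4 percent annual demand growth
status_quo = production_cost(stock_notes, NOTE_COST, NOTE_LIFE_YEARS, growth)
replacement = production_cost(stock_notes * REPLACEMENT_RATIO, COIN_COST,
                              COIN_LIFE_YEARS, growth)
print(f"Status quo 30-year production cost:  ${status_quo / 1e9:.1f} billion")
print(f"Replacement 30-year production cost: ${replacement / 1e9:.1f} billion")
```

The sensitivity tests listed above correspond to rerunning a comparison like this with different parameter values, for example, a replacement ratio of 1.0 or 2.0, or note and coin costs raised to 4.9 and 17.5 cents respectively.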
Appendix II: Objectives, Scope, and Methodology

This report: (1) determines the estimated net benefit to the government, if any, of replacing the $1 note with a $1 coin and selected stakeholders’ views on this change, and (2) examines what is known about potential cost savings to the government from suspending production of the penny and changing the metal composition of the nickel coin as well as selected stakeholders’ views on these changes. To estimate the net benefit or loss to the government of replacing the $1 note with a $1 coin, we conducted economic simulations under different scenarios and assumptions over a 30-year period. We simulated a “status quo” scenario and two “replacement” scenarios. In the status quo scenario, notes remain the dominant form of $1 currency. In each replacement scenario, notes are replaced by $1 coins under various assumptions. We then compared each replacement scenario to the status quo scenario with respect to net benefits to the government. As part of our analysis, we also ran alternative simulations with different sets of assumptions, to examine how the assumptions underlying our analysis affect the estimated net benefit to the government. The various assumptions underlying our simulations include the extent to which the public holds more cash when coins are used instead of notes, the cost to produce $1 notes and $1 coins, and the lifespan of notes and coins, among others. Our analyses are projected over 30 years because that period roughly coincides with the life expectancy of the $1 coin. We interviewed relevant officials from the Board of Governors of the Federal Reserve System (Federal Reserve), the Bureau of Engraving and Printing (BEP), and the U.S. Mint (Mint). We also obtained data for our assumptions from these agencies and economic projection data from the Congressional Budget Office. More detailed information on the structure, assumptions, and inputs of our economic simulations is found in appendix I.
To determine how the Federal Reserve estimates the lifespan of the $1 note (a key input to our economic simulations), we reviewed work papers and analyses from prior work. We interviewed knowledgeable Federal Reserve officials about the methodology for calculating a note’s lifespan and reviewed data on a note’s estimated life from calendar years 2005 through 2017. We also observed note-processing operations and equipment at the Federal Reserve’s Cash Technology Office (located in the Federal Reserve Bank of Richmond), reviewed the Federal Reserve’s and the Treasury Department’s cash-processing policy and procedure manuals, and interviewed knowledgeable officials about technological innovations in Federal Reserve note processing since 1998. We took steps to assess the reliability of data used, such as interviewing knowledgeable agency officials, and determined that the data were sufficiently reliable for the purposes of this report. To determine selected stakeholder views on changes to currency, we identified 91 entities that could potentially be affected by reviewing prior GAO, Mint, and Federal Reserve reports and the results of a literature search. We eliminated some of these entities from further consideration because we could not identify a way to contact them or they did not respond to our efforts to contact them. We sought entities with the broadest representation, so we generally eliminated individual companies, with the exception of those that are primary suppliers of raw material for the production of notes or coins. Of the remaining 36 entities, we selected and interviewed 10 organizations representing potentially affected industries, primarily based on the entities’ role with respect to currency and the currency change likely to affect it most. We also selected and interviewed a private company involved in the production of materials used in coins and two organizations that advocate for a switch to a $1 coin and for continued use of the penny, respectively.
We categorized each entity’s role with respect to currency as a maker (involved in, or represents those involved in, supply of materials for production of coins or notes); a mover (involved in, or represents those involved in, transporting, processing, or facilitating use of coins or notes); or a user (involved in, or represents those involved in, transactions where coins or notes are exchanged). We also categorized each entity as being most affected by, or most interested in, changes to the $1, nickel, or penny. We used information we collected or had used in prior work about these stakeholders and also used professional judgment and logic to determine in which role category they belonged. In some cases, we assigned an entity to more than one category. In addition to categorizing stakeholders, when making our selection, we also considered the extent to which an entity’s area of representation overlapped with another to avoid duplication. If a selected entity did not respond to our request for an interview, we sought to replace that entity with a similar one, if available. Since our selection is a non-representative sample, the results are not generalizable to all stakeholders.
The stakeholders we selected are:

American Bankers Association, aba.com
Americans for Common Cents, pennies.org
Association of Gaming Equipment Manufacturers, agem.org
Coin Laundry Association, coinlaundry.org
Dollar Coin Alliance, dollarcoinalliance.org
International Bridge, Tunnel and Turnpike Association, ibtta.org
International Parking & Mobility Institute, formerly the International Parking Institute, parking-mobility.org
Jarden Zinc Products, jardenzinc.com
National Armored Car Association, nationalarmoredcar.org
National Automatic Merchandising Association, namanow.org
National Bulk Vendors Association, nbva.org
Retail Industry Leaders Association, rila.org

We also reviewed information on public perceptions and opinions about the use of a $1 coin from prior GAO work and publicly available information from an organization that advocates for a transition to a $1 coin. To examine what is known about potential cost savings to the government from suspending production of the penny coin and from changing the metal composition of the nickel coin, we analyzed penny and nickel production cost data from the Mint for fiscal years 2003 through 2017, a period that includes a range of production volumes and cost changes from metal price fluctuations, and we reviewed Mint studies on potential alternative metals and on coin production cost savings that could result from changing the metal composition of these coins. We reviewed and analyzed the Mint’s preliminary plan if Congress were to authorize suspending production of the penny. We took steps to assess the reliability of the Mint data we used, such as reviewing relevant documentation, and determined that the data were sufficiently reliable for the purposes of this report. We also interviewed Mint and Federal Reserve officials, and the same set of selected stakeholders noted above.
To understand the rationale and steps Canada implemented for eliminating the Canadian penny, we reviewed documents from the Canadian Senate, Department of Finance, and the Royal Canadian Mint. To understand the results of the elimination of the Canadian penny, we interviewed an official from the Royal Canadian Mint. We also conducted a literature search of relevant English language articles published from 2011 to May 2018 to provide information on the rationale and potential benefit to governments of making changes to coins and notes, along with information about the experiences of other English-speaking countries that have made such changes. We conducted this performance audit from December 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: U.S. Mint Comments

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, John W. Shumann (Assistant Director); Travis Thomson (Analyst-in-Charge); Amy Abramowitz; Lindsay Bach; Dave Hooper; Delwyn Jones; Malika Rice; Oliver Richard; Ardith Spence; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study

The U.S. spent about $1.3 billion in 2017 to produce, process, and circulate coins and paper notes for use in the economy. Since 2006, both the penny and nickel have cost more to make than their face value. Other countries have replaced notes with coins of the same value to reduce costs. Since 1990, GAO had estimated replacing the $1 note with a $1 coin would provide a benefit to the federal government. GAO was asked to examine the potential cost savings to the government from making changes to currency. This report (1) estimates the net benefit to the government, if any, of replacing the $1 note with a $1 coin and selected stakeholders' views on this change; and (2) examines what is known about potential cost savings from suspending penny production and changing the metal composition of the nickel, and selected stakeholders' views on these changes. GAO conducted economic simulations of continued use of $1 notes and replacing notes with $1 coins, examined cost data from the U.S. Mint, and interviewed officials from the Federal Reserve, U.S. Mint and Bureau of Engraving and Printing as well as 10 selected stakeholders representing industries that could potentially be affected by currency changes.

What GAO Found

GAO's analysis found that replacing the $1 note with a $1 coin would likely result in a net loss to the government over 30 years. GAO found the government would incur a loss of about $611 million if notes were actively replaced and about $2.6 billion if $1 notes were replaced gradually (see figure). These simulations represent the first time GAO has found that replacing the $1 note with a $1 coin would result in a net loss to the government rather than a net benefit. GAO's estimates are based on current data and economic projections, which have changed over time. For example, the lifespan of the $1 note has more than doubled since a 2011 GAO analysis, from 3.3 years to 7.9 years, largely due to changes in note processing technology.
Stakeholders generally identified few benefits from replacing $1 notes with $1 coins. Seven of 10 stakeholders GAO met with said that replacing the $1 note with a $1 coin would result in additional costs. For example, armored carriers told GAO that their transportation costs would increase because coins weigh more than notes. The U.S. Mint estimates that it could save approximately $250 million over 10 years by suspending penny production and between $2 million and $9 million per year by changing the metal composition of the nickel. It also estimates that it could save about $74 million over 10 years by changing the metal composition of the dime and quarter. However, Federal Reserve officials and some stakeholders expressed concern about temporarily suspending the penny due to the potential for external effects, such as penny shortages. Stakeholders were unconcerned about changes to the nickel as long as the changes would not affect how the coin functioned, for example, in vending machines. Since Congress specifies in law which coins are made and their metal composition, the Mint has proposed legislation to enable the Secretary of the Treasury to change the metal content of coins as long as the weight or machine acceptance of the coins is unaffected. Without such authority, the Mint might not be producing coins as cost-effectively as possible.

What GAO Recommends

Congress should consider taking steps to authorize the Secretary of the Treasury to adjust the metal content of circulating coins.
gao_GAO-20-93
Background

Roles and Responsibilities in the Army’s Marketing and Advertising Program

The Secretary of the Army has the responsibility to recruit personnel, subject to the authority, direction, and control of the Secretary of Defense. The Assistant Secretary of the Army for Manpower and Reserve Affairs serves as the principal advisor to the Secretary for the Army’s management of its manpower and personnel and provides overall governance for marketing, advertising, and research. The Deputy Chief of Staff of the Army, G-1, is the principal military advisor to the Assistant Secretary of the Army for Manpower and Reserve Affairs and the Chief of Staff of the Army for all matters related to manpower across the Army. As of August 2019, the Deputy Chief of Staff of the Army, G-1, has responsibility for overseeing the new Office of the Chief Army Enterprise Marketing (AEMO) once it is fully established, as described later in this report. In addition, multiple other Army organizations from across the accessions enterprise—the collection of Army organizations involved in efforts to recruit and train soldiers for the Army—have roles and responsibilities in carrying out the Army’s marketing and advertising program, as summarized in table 1. The Army contracts with a primary advertising agency to develop and implement its marketing and advertising program. The advertising agency is responsible for providing a range of services from the development of the Army’s marketing and advertising strategy to the production of marketing and advertising activities, including television and print advertisements, event marketing, and social media. In November 2018, the Army awarded a contract to a new advertising agency for up to 10 years with a value not to exceed $4 billion.
The contract with the previous advertising agency was awarded in March 2011, and from March 2011 through the end of fiscal year 2018, the Army issued 702 task orders and obligated about $1.6 billion on this contract, according to our analysis of data from the Federal Procurement Data System-Next Generation.

Types of Marketing and Advertising Activities

The Army conducts a variety of marketing and advertising activities at the national and local levels in support of the Army’s recruiting goals. Figure 1 shows examples of the various types of Army marketing and advertising activities, such as mobile assets used at recruiting events and digital advertising on social media. Different marketing and advertising activities are useful for supporting the three phases—awareness, engagement, and activation—of an individual’s decision-making process, sometimes referred to as the consumer journey. The goal of marketing and advertising is to move a potential recruit through each of the three phases and, ultimately, to the decision to enlist:

Awareness: Individuals learn about the opportunity to serve in the Army and the distinct characteristics of serving in the Army. The Army pursues awareness through marketing and advertising activities such as television commercials, print advertisements, banners at events, and billboards.

Engagement: Individuals who are aware of the opportunities for service in the Army begin considering the possibility of joining the Army. During this phase, the Army seeks to provide recruits with additional information to aid in their decision-making process. Often this phase of advertising takes place in the digital environment, as the Army seeks to provide informative social media posts and use banner advertisements to attract individuals to visit its website for more information.

Activation: Individuals have considered the Army and are ready to talk to a recruiter about enlistment.
As such, activation activities seek to encourage these individuals to provide their contact information to schedule an opportunity to meet with a recruiter. Activation is often conducted in person, such as through recruiters’ presence at events like career fairs. Activation may also be conducted through other means—such as direct mail and online or print classified advertisements—as long as the advertisement prompts viewers to provide their contact information. Further, the Army employs mobile assets, such as large trucks and trailers fitted with equipment and activities intended to draw crowds and encourage and facilitate public interaction with a recruiter at an event to generate leads. Figure 2 shows the awareness, engagement, and activation phases and examples of marketing and advertising activities that are used in support of each phase.

The Army Has Taken Steps to Develop Processes to Improve Its Oversight of Its Primary Marketing and Advertising Contract

To implement AAA’s recommendations, the Army has developed processes designed to improve its oversight of the primary contract for executing the Army’s marketing and advertising program. In April 2018, AAA reported that AMRG did not sufficiently evaluate the performance of its primary contractor, effectively oversee deliverables included in its three main marketing and advertising contracts, or effectively oversee the negotiation process of task orders for its primary marketing and advertising contract. AAA made seven recommendations to AMRG to improve its contract oversight, with which AMRG agreed. As shown in table 2, as of September 2019, AAA considered three of the recommendations implemented but not closed, with AMRG still taking steps to address the other four recommendations; as a result, it is too soon to assess the extent to which the Army’s steps have improved contract oversight within the marketing and advertising program.
AMRG officials stated that the implementation of these recommendations has been slowed, in part, because of the recent award of its primary marketing and advertising contract to the new advertising agency. Among AMRG’s actions to improve its contract oversight are steps to develop processes for overseeing contractor performance, deliverables, and price negotiations. Specifically, AMRG has taken steps in the following areas:

Created a new project management office. AMRG established a project management office to help address the challenges identified by AAA and to serve as a coordinating body that centralizes contract oversight. According to its charter, the office is responsible for maintaining cost, schedule, and performance for Army marketing and advertising programs to help ensure that they are completed on time and within reasonable costs to support the accessions mission. Documents outlining the contract management process indicate that the project management office reviews key contract documents, such as the statement of objectives and the quality assurance surveillance plan, before the documents are submitted to the contracting officer who is responsible for administering the contract. After the contractor submits its proposal for providing the requested product or service, the project management office reviews the proposal to ensure it meets the requirements of the statement of objectives. Also, the project management office coordinates and submits the technical evaluation form for review by the contracting officer’s representative and the contracting officer. Within the newly-formed AEMO, there will be a project management office with six authorized personnel, according to the organizational chart for AEMO. In July 2019, AMRG officials stated that they were developing standing operating procedures and continuity plans that AEMO could use as it establishes its project management office.

Implementing training and tools to evaluate contractor performance.
AAA found that AMRG did not sufficiently evaluate the performance of its primary contractor and recommended that AMRG require that all program managers receive contracting officer’s representative training and ensure that individual quality assurance surveillance plans are developed for each task order over $150,000. As of September 2019, AAA reported that this recommendation had not been fully implemented. In response to AAA’s recommendations, in April 2019, AMRG reported that all program managers had completed contracting officer’s representative training. Further, as of September 2019, AMRG reported that the Army had developed individual quality assurance surveillance plans for each task order issued in fiscal year 2019. According to AAA officials, AMRG provided them with examples of these quality assurance surveillance plans, and AAA provided AMRG with feedback on additional information that should be included, which AMRG officials stated they were taking actions to address. In September 2019, AMRG also issued standing operating procedures for program managers to provide internal policy and instruction for executing the Army’s marketing and advertising program as well as evaluating its performance, including overseeing contractor performance. We found that the standing operating procedures require contracting officer’s representative training for program managers and that the Director of Marketing, Director of Research, and AMRG contracts team are to monitor compliance quarterly. In addition, the procedures include steps outlining contract oversight mechanisms, such as information on the purpose and contents of quality assurance surveillance plans.

Implementing a standardized technical evaluation form.
AAA found that AMRG had not effectively negotiated prices for its primary marketing and advertising contract; AAA recommended that AMRG (1) define and implement a well-structured policy for conducting technical evaluations of contractor proposals and (2) establish a standardized form to ensure consistency during the evaluation process. As of September 2019, AAA reported that these two recommendations were implemented but not closed. The standing operating procedures that we reviewed outline the process for program managers to complete a standardized form for evaluating contractor proposals. The form is intended to ensure that program managers are consistently evaluating contractor proposals for performing work under the contract. For example, the form requires program managers to conduct a comparative price analysis by comparing the contractor’s proposed price to total task order cost in prior years. According to the instructions on the form, program managers are to submit the completed form to their supervisor or director for approval. The project management office then coordinates and submits the technical evaluation to the contracting officer’s representative and contracting officer for review.

Overseeing contract deliverables. AAA found that AMRG did not effectively oversee deliverables in its marketing and advertising contracts and recommended that AMRG develop procedures to ensure that contracts or task orders do not contain deliverables already provided in other contracts. As of September 2019, AAA reported that this recommendation had not been fully implemented. AMRG and U.S. Army Mission Installation and Contracting Command officials have implemented processes to prevent duplicative deliverables (i.e., services or products to be provided through a contract) in the future, such as the standardized technical evaluation form noted above.
We found that the standardized technical evaluation form requires program managers to certify that they have reviewed other tasks and contracts within their purview and to validate that the task order being requested does not duplicate existing or other requested work. In addition, AMRG and U.S. Army Mission Installation and Contracting Command officials stated that they consolidated all contract actions under one team at the U.S. Army Mission Installation and Contracting Command and that both AMRG’s contracting officer’s representative and budget office must verify that contract requests are not duplicative. AAA also found overlapping deliverables between AMRG’s primary marketing and advertising contract and a contract for creative technology support and recommended that AMRG use the creative technology support contract for all of the creative technical services within its scope. As of September 2019, AAA reported that this recommendation had not been fully implemented. AMRG officials told us that they plan to issue a modification to the contract for creative technology support to remove services duplicated in the primary marketing and advertising contract. As of September 2019, an AMRG official told us that the Army expected to issue the modification in November 2019.

Revising process for contract award fees. AAA found that AMRG had minimal support to justify its higher award fee ratings for its primary contractor and recommended that AMRG update its award fee plan and award fee review process to include soliciting feedback from program managers, maintaining supporting documentation, and obtaining objective performance data, among other things. As of September 2019, AAA reported that this recommendation had not been fully implemented. We found that AMRG revised its process in fiscal year 2018 for determining award fee incentives for its advertising agency.
For example, according to the documentation associated with the award fee decision for the agency’s performance from April 2017 through April 2018, AMRG reported that it, among other things, included feedback from program managers on the advertising agency’s performance and obtained objective performance data from an independent entity, DOD’s Joint Advertising, Market Research & Studies. AMRG officials stated that, in future work with the new advertising agency, the Army plans to offer award fees for specific task orders rather than one fee for performance in a given year. According to AMRG officials, this change allows greater flexibility in deciding which programs should be incentivized with an award fee. As of September 2019, AMRG officials stated that they had not issued any task orders with an award fee under the new marketing and advertising contract.

The Army Has Taken Steps to Improve Its Approach for Measuring the Effectiveness of Its Marketing and Advertising Program

To implement AAA’s recommendations, the Army has taken steps to improve how it measures the effectiveness of its marketing and advertising program; these steps are consistent with commercial best practices for assessing the effectiveness of advertising identified in our prior work. In its 2018 report on return on investment, AAA found that AMRG had deficiencies in how it measured the effectiveness of its marketing and advertising efforts and made seven recommendations to AMRG, with which AMRG agreed. Of these recommendations, AAA considered four implemented but not closed as of September 2019, with AMRG still taking steps to address the other three recommendations, as shown in table 3. Since the Army’s steps were recently implemented or are ongoing, it is too early to determine if they will achieve their desired results.
Based on our analysis of the Army’s actions, the Army’s steps to implement AAA’s recommendations fall into the following five areas: (1) revising strategic goals, (2) updating and documenting its assessment process, (3) improving the reliability and capabilities of data systems, (4) integrating national and local marketing and advertising efforts, and (5) obtaining new tools to determine required marketing and advertising resources. The steps the Army has taken in these areas thus far are consistent with commercial best practices for assessing the effectiveness of advertising we identified in our prior work. As the Army takes additional steps to establish the newly formed AEMO, it will be important for the Army to continue to align its efforts with these commercial best practices for assessing the effectiveness of advertising to ensure advertising dollars are used efficiently to help meet stated recruiting goals.

Revising strategic goals. AAA found that AMRG did not have specific goals to measure the long-term effects of investments in marketing and advertising efforts to support the Army’s accessions mission and recommended that AMRG develop such goals. As of September 2019, AAA reported that this recommendation was implemented but not closed. AMRG has revised its strategic marketing goals from tracking changes in individuals’ attitudes toward the Army, such as support for the Army among the general population, to tracking the behaviors of these individuals, such as the number of visits to GoArmy.com. For fiscal year 2018, AMRG had seven strategic marketing goals that tracked the attitudes of the general population and prospects toward the Army. For fiscal year 2019, AMRG revised the goals to four that track attitudes, two that track behaviors, one that tracks effectiveness, and one that tracks efficiency.
Goals that track attitudes are aligned with the awareness phase of the consumer journey, whereas goals that track behaviors are more aligned with the engagement and activation phases. Figure 3 shows how the fiscal year 2018 and 2019 strategic marketing goals align with the three phases of the consumer journey. Looking ahead, AMRG officials told us that, consistent with feedback they received from marketing industry experts, the strategic goals in the fiscal year 2020 marketing plan will all be behavioral and will target the different stages of what AMRG refers to as a lead nurturing funnel. AMRG officials stated that they intend to use this information to quickly shift attention and funding to any stages of the funnel that are not meeting their goals, ensuring that those stages receive the attention needed to reach mission success. AMRG’s recent and ongoing steps to revise its marketing goals are consistent with the commercial best practice to develop an evaluation framework that identifies the target audience and includes measurable goals. To institutionalize AMRG’s updated processes, AAA recommended that AMRG update Army Regulation 601-208 to reflect the new goals and processes it would implement to improve its program effectiveness. As of September 2019, AAA reported that this recommendation had not been fully implemented. The Army is in the process of revising its marketing and advertising regulation to reflect the updated strategic marketing goals and process. AMRG had drafted a revision to the regulation; however, AMRG officials had put this process on hold while senior Army leaders were making the decision about AMRG’s placement within the Army. Now that AMRG has been redesignated as AEMO and reassigned within the Army, officials stated in August 2019 that the Army was revising the regulation to reflect the new organization’s relationship to other entities within the accessions enterprise.
AMRG expects the Army to publish the updated regulation in 2020.

Updating and documenting its assessment process. AAA found that only three of AMRG’s 23 national events during fiscal year 2016 provided the best value for their intended purpose and recommended that AMRG discontinue efforts that were not cost-effective in comparison to other options and assess the cost-effectiveness of current marketing and advertising efforts. As of September 2019, AAA reported that this recommendation was implemented but not closed. According to AMRG officials, AMRG discontinued all of the events that were deemed to not be cost-effective in AAA’s report. In its report, AAA also found that AMRG’s assessment process did not include USAREC’s and USACC’s marketing and advertising efforts and that AMRG did not formally document that process; AAA recommended that AMRG establish and formally document a process with roles and responsibilities to assess the effectiveness and efficiency of all Army marketing and advertising efforts. As of September 2019, AAA reported that this recommendation was implemented but not closed. We found that AMRG formally documented how it assesses the effectiveness of the marketing and advertising program, consistent with the commercial best practice of seeking to develop an understanding of how outcomes can be attributed to advertising. In January 2019, AMRG issued guidance that outlines its assessment process, including the types of information that are reviewed in each assessment.
The guidance we reviewed outlines three levels of assessments: Level I is a review by program managers of their individual programs; level II, to be conducted on a quarterly basis, is a review at the operational level in which the directors of research and marketing review the results of marketing and advertising efforts across multiple marketing channels; and level III, also to be conducted on a quarterly basis, is a review at the strategic level in which the AMRG Director reviews the Army’s progress in meeting its strategic marketing goals. Further, in January 2019, AMRG reviewed a summary of the number, cost, and performance of USAREC’s and USACC’s local marketing and advertising activities as part of the level II assessment process. In addition, to facilitate comparison of options, AMRG developed a tool for comparing different programs within a given marketing channel. The tool, which AMRG calls a decision support matrix, allows officials to comparatively rank different programs based on weighting different factors. For example, based on our review of the decision support matrix, we found that AMRG assigned a higher weight to a program’s effectiveness than its timing. The tool also incorporates qualitative feedback based on how program managers, USAREC, and USACC rank the programs, and quantitative analysis on the cost per lead, impression, or engagement, depending on the type of program.

Improving the reliability and capabilities of data systems. AAA identified discrepancies between information in the Enterprise Marketing Management (EMM) system and supporting documentation and recommended that AMRG establish and formally document a process to ensure that all Army marketing and advertising performance and cost data were regularly recorded in an official marketing system of record. As of September 2019, AAA reported that this recommendation had not been fully implemented.
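To illustrate, the kind of weighted comparison a decision support matrix performs can be sketched in a few lines of Python. The program names, factors, weights, and scores below are hypothetical assumptions chosen only to mirror the report’s description (for example, weighting effectiveness more heavily than timing); they are not AMRG’s actual values.

```python
# Hypothetical sketch of a weighted decision-support matrix for ranking
# programs within a marketing channel. All names, weights, and scores are
# illustrative assumptions, not AMRG's actual data.

# Factor weights; effectiveness is weighted more heavily than timing,
# as the report describes.
WEIGHTS = {"effectiveness": 0.5, "cost_per_lead": 0.3, "timing": 0.2}

# Each program is scored 1-10 on each factor (hypothetical scores).
programs = {
    "Career fair tour":   {"effectiveness": 8, "cost_per_lead": 6, "timing": 7},
    "Esports activation": {"effectiveness": 7, "cost_per_lead": 9, "timing": 4},
    "Direct mail":        {"effectiveness": 4, "cost_per_lead": 7, "timing": 9},
}

def weighted_score(scores):
    """Combine a program's factor scores into one weighted total."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

# Rank programs from highest to lowest weighted score.
ranked = sorted(programs, key=lambda name: weighted_score(programs[name]),
                reverse=True)
```

Sorting on the weighted total yields a comparative ranking; qualitative rankings from program managers, USAREC, and USACC could be folded in as additional weighted factors in the same way.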
The Army has taken steps to improve the reliability of the data in EMM since AAA’s report. In August 2019, the Army issued a task order on its primary marketing and advertising contract covering EMM system support to include overseeing and improving the quality of data in EMM. According to the performance work statement, the Army’s new advertising agency is responsible for, among other things, accurately documenting current data, systems, and business processes, as well as analyzing EMM reports and documentation for completeness and accuracy. Further, the advertising agency is responsible for identifying and documenting business problems and recommending areas for improvement and technology solutions. During the focus groups we held with AMRG personnel, participants told us that AMRG leadership was focused on demonstrating the effectiveness of the Army’s marketing and advertising through reliable and readily available data. For example, the Army implemented an electronic business reply card, which is a digital form to capture a potential recruit’s eligibility and contact information and a means of identifying the event where the recruit learned about the Army. USAREC and USACC officials told us that prior to the electronic business reply card, recruiters collected prospects’ information at events by using a paper card. Although that card reflected the event where the potential recruit heard about the Army, it would often take several days before the potential recruit’s information appeared in the recruiting system, according to these officials. As a result, recruiters would sometimes not send in the paper card and would instead enter the prospect’s information directly into the system. In these cases, the marketing and advertising event would not receive credit for generating the lead.
USAREC and USACC officials stated that the electronic business reply card’s quicker turnaround time for leads showing up in the system should improve data reliability by ensuring that recruiters and recruiting operations officers consistently enter an individual’s demographic data into the system, along with the marketing and advertising activity they interacted with. These steps to better identify the number of leads generated by marketing and advertising activities are consistent with the commercial best practice of conducting ongoing analysis of performance using industry standard measures appropriate for the purpose of the advertising activity. In addition, in August 2019, the Army issued a task order on its new marketing and advertising contract for the maintenance and optimization of its system that tracks analytics on the Army’s marketing and advertising activities, which AMRG refers to as the Intelligence Hub. The advertising agency is responsible for monitoring this system and producing reports that track the effectiveness of marketing and advertising activities based on key performance indicators. The advertising agency is also responsible for upgrading the system to track the multiple marketing and advertising resources that a potential recruit interacts with. AMRG officials told us that the upgrade of this system is intended to equip the Army with more complete data to demonstrate the effectiveness of the Army’s marketing and advertising activities—consistent with the commercial best practice of seeking to develop an understanding of how outcomes can be attributed to advertising.

Integrating national and local marketing and advertising efforts. AAA found that AMRG did not integrate and leverage both national and local marketing and advertising efforts to support the Army’s accessions mission and recommended that AMRG revise the Army’s marketing performance framework to include marketing and advertising efforts at both the national and local levels.
As of September 2019, AAA reported that this recommendation was implemented but not closed. We found that the Army has created programs and instituted procedures designed to increase coordination of national and local marketing and advertising efforts. For example, AMRG reported that it included other Army components, including USAREC and USACC, in developing the fiscal year 2019 marketing goals and planned to include those organizations in its fiscal year 2020 process. In addition, TRADOC established the Army Accessions Resource Fusion Board, which brings together organizations from across the accessions enterprise for quarterly meetings at which they make operational resource sharing plans for marketing and recruiting assets. For example, according to a March 2019 briefing for an Army Accessions Resource Fusion Board meeting, representatives from USAREC brigades discussed their planned marketing and advertising activities for the first quarter of fiscal year 2020, including any requests they had for support from other Army stakeholders for those planned activities. Further, according to its charter, the Army Accessions Resource Fusion Board is responsible for assessing the effectiveness of local marketing and advertising efforts in the previous quarter. In fiscal year 2018, the Army also created a pilot program designed to improve how the Army’s marketing and advertising program coordinates with its recruiting components and to produce marketing and advertising messages that resonate more effectively with target populations. The Army began implementing the program in Chicago in fiscal year 2019 and as of April 2019 was planning to expand the program to Boston and four other cities. As of July 2019, AMRG had observed positive results from the program in Chicago. 
For example, AMRG reported an increase of 11 percent in the number of leads and an increase of about 7 percent in the number of recruits who signed contracts with the Army when compared to the prior year in that region. Officials from TRADOC, USAREC, and USACC told us that coordination with AMRG on marketing and advertising efforts has improved since the time AAA conducted its audits. For example, TRADOC officials stated that AMRG senior leaders have supported the accessions enterprise by providing analytic support to USAREC and USACC. Further, USAREC officials stated that in fiscal year 2018 AMRG started to provide funding for local marketing and advertising activities near USAREC’s requested levels, and that this change had been carried forward into fiscal year 2019.

Obtaining new tools to determine required marketing and advertising resources. AAA found that AMRG did not use a resource requirements projection model that supported and linked to planned marketing efforts and recommended that AMRG develop such a model. As of September 2019, AAA reported that this recommendation had not been fully implemented. AMRG has contracted with the RAND Corporation and a consulting firm to develop tools to determine the resources AMRG needs to conduct its marketing and advertising activities. The RAND tools include three planned models, one of which is the recruiting resource model recommended by AAA. The recruiting resource model has been partially completed and is being updated with additional data, with full completion scheduled for September 2020. According to the Army’s fiscal year 2020 budget request, the Army used the RAND report that developed this model as a justification for increasing its advertising budget for fiscal year 2020. Further, in consultation with a consulting firm, AMRG developed a channel allocation simulator that allows AMRG officials to test different funding levels for its marketing and advertising channels to see potential outcomes.
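The kind of what-if analysis such a simulator supports can be sketched as a simple model. The channels, diminishing-returns curves, and coefficients below are illustrative assumptions, not the Army’s actual simulator logic.

```python
import math

# Hypothetical sketch of a channel allocation simulator: estimate the leads
# a funding level might generate per channel, using made-up
# diminishing-returns curves. Channel names and coefficients are
# illustrative assumptions.

CHANNELS = {
    # channel: (scale, saturation) for leads = scale * ln(1 + funding/saturation)
    "events":      (1200.0, 2.0e6),
    "digital_ads": (2500.0, 5.0e6),
    "direct_mail": (600.0, 1.0e6),
}

def estimated_leads(channel, funding):
    """Estimated leads for one channel at a given funding level (dollars)."""
    scale, saturation = CHANNELS[channel]
    return scale * math.log1p(funding / saturation)

def simulate(allocation):
    """Total estimated leads for a funding allocation across channels."""
    return sum(estimated_leads(ch, amount) for ch, amount in allocation.items())

# Compare two candidate budgets with the same $10 million total.
plan_a = {"events": 3.0e6, "digital_ads": 6.0e6, "direct_mail": 1.0e6}
plan_b = {"events": 1.0e6, "digital_ads": 8.0e6, "direct_mail": 1.0e6}
```

Evaluating `simulate(plan_a)` against `simulate(plan_b)` shows which mix of funding yields more estimated leads under the assumed curves, which is the planning question the report describes AMRG using the tool to answer.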
For example, based on our review of the simulator, AMRG can enter a specific amount of funding for events to estimate how many leads it can expect to obtain from that level of funding. AMRG officials stated that they can use this tool to help them plan for their required level of resources for the upcoming fiscal year. The development of this simulator is consistent with the commercial best practice of using sophisticated marketing mix modeling to determine an appropriate spending strategy.

The Army Has Taken Steps to Improve Its Workforce Practices and to Reorganize the Organizational Structure of Its Marketing and Advertising Program

ASA(ALT) and OPM Conducted Reviews of AMRG’s Workforce and Recommended Areas for Improvement

ASA(ALT) and OPM conducted reviews of AMRG’s workforce and made recommendations to improve the workforce practices within the marketing and advertising program. From January to May 2018, ASA(ALT) conducted a review of AMRG’s business processes and found high-risk issues that contributed to organizational inefficiencies within five areas: (1) internal communications, (2) business performance, (3) training, (4) program performance and accountability, and (5) personnel. For example, ASA(ALT) found that AMRG personnel were unclear about AMRG’s core mission, objectives, and program priorities. Further, ASA(ALT) found that AMRG’s personnel, skills, training, and physical locations were not aligned to support AMRG’s mission. In addition, OPM conducted an assessment from March to September 2018 to identify organizational inefficiencies and propose solutions intended to transform AMRG into a high-performing organization and improve its workforce morale. Similar to ASA(ALT), OPM identified issues with a lack of mission clarity and insufficient communication and collaboration throughout AMRG’s workforce and with its stakeholders.
In addition, OPM identified a number of organizational design issues within AMRG, including workforce acquisition, management, and optimization of its operational components and staff. ASA(ALT) and OPM made multiple recommendations to address these issues within AMRG. For example, ASA(ALT) recommended that AMRG establish and disseminate standard operating procedures and process charts; clarify roles and responsibilities of the various organizational components; and clearly communicate to staff the final annual marketing strategy. OPM recommended a multiphased approach to implementing its overall recommendations, identifying key actions to take in each phase. For example, within the first phase, OPM recommended that AMRG determine the new functional structure for AMRG because it would improve management and accountability, collaboration, and stakeholder satisfaction. In addition, within the second phase, OPM recommended that AMRG develop a human capital management plan and review and update its position descriptions regularly to ensure they align with changing goals, staffing needs, and the organizational structure of AMRG. AMRG took some steps to address ASA(ALT)’s and OPM’s recommendations to improve its workforce practices. Within its report, ASA(ALT) noted that AMRG had started to take actions to implement several recommendations, such as disseminating AMRG’s mission statement, priority objectives, strategic goals, and fiscal year 2019 annual marketing plan guidance to all AMRG personnel. Similarly, OPM noted that AMRG had established task groups to coordinate with stakeholders, participated in meetings with Congress and stakeholders, and was developing a new vision for AMRG. However, as of April 2019, senior AMRG officials stated that they had not taken steps to address all of the reports’ recommendations because the Army was considering broader organizational changes to the placement of AMRG within the Army. 
While the recommendations in the ASA(ALT) and OPM reports were generally specific to AMRG’s organization and workforce at that time, senior AMRG officials stated that the Army would take additional steps to incorporate ASA(ALT)’s and OPM’s recommendations, as appropriate, after senior Army leadership made decisions about those changes. In our review, AMRG personnel we met with continued to identify poor internal communications and morale as key challenges within AMRG, consistent with the findings from ASA(ALT) and OPM. During our focus groups with AMRG personnel, participants repeatedly stated that senior AMRG leadership did not communicate key information to staff. For example, participants told us that senior AMRG leadership did not communicate information about AMRG’s mission, strategic priorities, or pending organizational changes. As described below, subsequent to our focus groups, the Army began taking steps to fundamentally change the organizational structure, workforce, and leadership of its marketing and advertising program. In light of the timing of these substantial changes, we did not comprehensively assess the extent to which communication issues have been resolved in the reorganization of the marketing and advertising program. It will be important for the new leadership to focus on communication at the outset of this organizational change to establish positive morale within the workforce.

The Army Reorganized Its Marketing and Advertising Program to Improve Its Structure, and Early Steps Are Consistent with Some Key Reform Practices

The Army recently reorganized its marketing and advertising program to improve its organizational structure; the Army’s early steps to implement the reorganization are consistent with some key practices for agency reform efforts we identified in our prior work, as described below. In May 2019, the Secretary of the Army reassigned and redesignated AMRG and stated that the Army planned to relocate it.
The Secretary of the Army redesignated AMRG as AEMO and reassigned the office to the Deputy Chief of Staff of the Army, G-1. The effective date of this reassignment was August 1, 2019. AEMO’s mission is to coordinate the Army’s national marketing and advertising strategy; develop and maintain relationships with the marketing and advertising industry; and develop marketing expertise and talent within the Army to support the Army, Army National Guard, and Army Reserve accessions. The offices will be moved from Arlington, Virginia, and Fort Knox, Kentucky, to Chicago, Illinois. Consistent with the key practice to designate a leader to be responsible for the implementation of the proposed reforms, the Secretary of the Army designated the Assistant Secretary of the Army for Manpower and Reserve Affairs as responsible for establishing AEMO and overseeing the transition. The Assistant Secretary stated that he expected AEMO to be fully operational by early 2020. The Army identified several reasons for transitioning from AMRG to AEMO and reassigning the office to the Deputy Chief of Staff of the Army, G-1, consistent with the key practice to define and articulate a succinct and compelling reason for the reforms. According to the execution order establishing AEMO, the Army needed an organization strategically positioned to: support Army senior leadership in advertising, marketing, and analysis; coordinate with the Army’s primary advertising agency; be talent diverse; provide effective marketing analysis; and be able to provide consistency of message and brand across the Army accessions enterprise. The Assistant Secretary of the Army for Manpower and Reserve Affairs also stated that AEMO is being assigned to the Deputy Chief of Staff of the Army, G-1, in part, because of the continuity in leadership that having a military officer lead the organization will provide. 
Previously, AMRG was assigned to the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, whose leader is politically appointed; that position had been vacant for two years until January 2019. In addition, the Assistant Secretary stated that AEMO will be located in Chicago to increase its coordination with the new advertising agency, which is also headquartered in Chicago. The Assistant Secretary also told us that the Army hoped to recruit civilian staff and to leverage the marketing and advertising expertise at academic and other marketing and advertising institutions in the region. The Army has taken some initial steps to establish AEMO and its operations. The Army has outlined a three-phased plan with specific tasks and associated dates within each of these phases, which is consistent with the key practice to establish implementation goals and a timeline to build momentum and show progress for the reforms. Phase 1, which was to be completed by August 1, 2019, prioritized tasks to initially establish AEMO, such as publishing the Army directive establishing AEMO, issuing the execution order outlining roles and responsibilities for the transition from AMRG to AEMO, and identifying office space in Chicago. Phase 2, which is to be completed between August 1, 2019, and February 1, 2020, includes tasks to transition AEMO to being fully operational, such as establishing new position descriptions and equipping the permanent office space. Lastly, phase 3 identifies those tasks to be implemented after February 1, 2020, when AEMO is fully operational and conducting daily operations, such as updating roles and responsibilities in the Army’s regulation for its marketing and advertising program and developing policies to direct commission military personnel to the office. The plan also identifies offices and officials who are accountable for implementing specific tasks during the transition.
The Assistant Secretary of the Army for Manpower and Reserve Affairs told us that the Army established an operational planning team to execute the transition from AMRG to AEMO. The execution order also identifies key stakeholders, including officials from TRADOC, Office of the Chief, Army Reserve, and National Guard Bureau, who are to participate in weekly working group meetings led by the Assistant Secretary of the Army for Manpower and Reserve Affairs. Looking forward, by January 2020 the Army plans to develop metrics to assess the effectiveness of the new AEMO organization, including the purpose, expectations, and desired outcomes, which is consistent with the key practice that calls for clear outcome-oriented goals and performance measures. In addition, the Army has taken some initial steps to establish AEMO’s workforce. As of September 2019, the Army had authorized 53 positions—31 military and 22 civilian—for AEMO and identified a Brigadier General from the Army Reserve with marketing and advertising experience as its leader. Senior Army leadership stated that they expected to fill almost all of the positions with new military and civilian personnel, in part, because the civilian position classifications in AMRG do not generally align with those in AEMO. As of June 2019, the Assistant Secretary of the Army for Manpower and Reserve Affairs told us that the Army was identifying Active Duty, Reserve, and National Guard officers with marketing and advertising education or experience to fill the military positions. In addition, the Assistant Secretary stated that they were working with OPM to develop position descriptions for the AEMO civilian personnel and to identify the skills and expertise needed within AEMO to fulfill its mission, consistent with the key practice to determine if the agency will have the needed resources and capacity, including the skills and competencies, in place for the reorganization.
The Army has also established a working group led by the U.S. Army Office of Economic and Manpower Analysis at West Point to develop a new marketing career path that senior Army leadership stated is intended to create a pool of military personnel who could serve in AEMO and other Army accessions organizations in the future. As the Army carries out its steps to fully establish AEMO and reorganize the marketing and advertising program, it will continue to be important for the Army to consider and use the key practices for agency reform efforts to guide the transition. Doing so will help ensure the success of the new marketing and advertising organization. Agency Comments We provided a draft of this report to DOD for review and comment. The Army provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or fielde1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Army Actions to Improve Its Marketing and Advertising Program as Stated in the Army’s Report to Congress This appendix summarizes an Army report to Congress on actions taken to improve its marketing and advertising program. Section 599 of the John S. 
McCain National Defense Authorization Act for Fiscal Year 2019 required the Army to submit a report to Congress that addressed several elements, such as the mitigation and oversight measures implemented to assure improved program return and contract management, and the establishment of a review process to regularly evaluate the effectiveness and efficiency of marketing efforts. The Army submitted the report on February 5, 2019. Table 4 identifies the required elements of the report and the actions that the Army has reported taking to address these elements. Appendix II: Army Marketing and Research Group’s Reported Actions to Address Army Audit Agency’s Recommendations This appendix provides a summary of actions that the Army has reported taking to address the recommendations in the April 2018 U.S. Army Audit Agency (AAA) reports about the Army’s marketing and advertising program. One report focused on contract oversight, and the other report focused on what AAA termed “return on investment.” Each report contained seven recommendations, with which the Army Marketing and Research Group (AMRG) concurred. AMRG and AAA officials stated that they have communicated about AMRG’s actions to implement the recommendations and that AAA has provided feedback, as appropriate, on actions taken. AAA officials stated that AAA may conduct a follow-up audit in fiscal year 2020 to determine if the actions have led to improvements in the marketing and advertising program. Table 5 summarizes the recommendations from the AAA report on contract oversight, the actions AMRG has taken to implement them, and the status—as of September 2019—of their implementation as reported by AAA. Table 6 identifies the recommendations from the AAA report on return on investment, the actions AMRG has taken to implement them, and the status—as of September 2019—of their implementation as reported by AAA. 
Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Margaret Best (Assistant Director), Kendall Banks, Timothy J. DiNapoli, Jacob Fender, Alexandra Gonzalez, Amie Lesser, Kristen Kociolek, Steven Lozano, Jonathan Meyer, Eve Nealon, Julia Kennon, Carol Petersen, Richard Powelson, Jerome Sandau, Jared Sippel, and Andrew Stavisky made key contributions to this report.
Why GAO Did This Study The Army requested nearly $335 million for fiscal year 2020 to conduct marketing and advertising activities intended to increase awareness of Army service and ultimately generate leads for potential recruits. In April 2018, AAA made recommendations in two reports to improve the contract oversight and return on investment of the Army's marketing and advertising program. Further, in May 2018 and October 2018, respectively, the Army and OPM made recommendations to improve the workforce practices and organizational structure of the program. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to review the actions taken to implement AAA's recommendations and the effects of these actions on AMRG's leadership, workforce and business practices, and return on investment. This report assesses the extent to which the Army has taken steps to address recommendations (1) from AAA to improve the contract oversight and measurement of the effectiveness of the Army's marketing and advertising program and (2) from the Army and OPM to improve the workforce practices and organizational structure of the marketing and advertising program. GAO analyzed Army marketing and advertising data from fiscal year 2018; reviewed marketing and advertising plans and guidance; conducted focus groups with AMRG personnel; interviewed cognizant officials; and compared the Army's efforts to GAO-identified best practices. The Army provided technical comments, which GAO incorporated as appropriate. What GAO Found The Army has recently taken steps to improve the oversight of its primary marketing and advertising contract and measurement of the effectiveness of its marketing and advertising program in response to two U.S. Army Audit Agency (AAA) reports. 
In April 2018, AAA found that the Army Marketing and Research Group (AMRG)—the component responsible for conducting the Army's national-level marketing and advertising program—did not fully evaluate the performance of its contracted advertising agency or track the effectiveness of its marketing and advertising efforts. GAO found that AMRG has taken or is taking actions to address AAA's recommendations: Contract Oversight. AMRG has developed processes for overseeing the advertising agency's performance and services. For example, AMRG developed a form program managers use to validate that proposed advertising services are not already provided through other contracts. Program Effectiveness. AMRG has taken steps in several areas related to revising its strategic marketing goals to support Army recruiting, updating how it assesses marketing and advertising effectiveness, and improving the reliability of data systems. AMRG's steps are consistent with commercial best practices that GAO identified for assessing the effectiveness of advertising, such as identifying outcomes from advertising. The Army has also taken steps to improve the workforce practices and organizational structure of its marketing and advertising program in response to two workforce reviews. The two reviews—by an Army organization and the U.S. Office of Personnel Management (OPM)—found that AMRG, among other things, did not have regular communication throughout its workforce and with its stakeholders, and had a poor workforce climate. AMRG took initial steps to address the reviews' recommendations. The Army then established a new office effective August 2019—the Office of the Chief Army Enterprise Marketing—to replace AMRG and to assume all marketing and advertising activities. Some of the Army's early steps to establish the new office are consistent with key practices for agency reform efforts identified previously by GAO. 
For example, the Army outlined a three-phased plan with specific tasks and associated dates to fully establish the new office by early 2020 consistent with the key practice to establish implementation goals and a timeline.
Background Individuals engage in countless online transactions every day—from checking their bank accounts and making retail purchases to signing up for federal benefits and services. However, securing such transactions is a complex endeavor. A key part of this process is verifying that the person who is attempting to interact for the first time with an organization, such as a federal agency or a business, is the individual he or she claims to be. This process, known as identity proofing, is essential to prevent fraud, which could cause harm to both individuals and organizations. Identity proofing may occur in person or through a remote, online process. In the case of in-person identity proofing, a trained professional verifies an individual’s identity by making a direct physical comparison of the individual’s physical features and other evidence (such as a driver’s license or other credential) with official records to verify the individual’s identity. Verification of these credentials can be performed by checking electronic records in tandem with physical inspection. In-person identity proofing is considered a strong method of identity proofing. However, it may not always be feasible to require that all applicants appear in person. In such cases, remote identity proofing is performed. Remote identity proofing is the process of conducting identity proofing entirely through an online exchange of information. When remote identity proofing is used, there is no way to confirm an individual’s identity through their physical presence. Instead, the individual provides the information electronically, or performs other electronically verifiable actions that demonstrate his or her identity. Because many federal benefits and services are offered broadly to large numbers of geographically dispersed applicants, agencies often rely on remote identity proofing to verify the identities of applicants. 
Overview of the Remote Identity Proofing Process Remote identity proofing involves two major steps: (1) resolution and (2) validation and verification. During the resolution step, an organization determines which specific identity an applicant is claiming when they first attempt to initiate a transaction, such as enrolling for federal benefits or services, remotely. The most common form of remote interaction is through an organization’s website. The organization starts the identity resolution process by having the applicant provide identifying information, typically through a web-based application form. Examples of information that an organization may collect for identity resolution include name, address, date of birth, and Social Security number. The organization then electronically compares the applicant’s identifying information with electronic records that it already has in its databases or with records maintained by another entity, such as a CRA, to determine (or “resolve”) which identity is being claimed. For example, if an individual named John Smith were applying, the organization would obtain enough identifying information about him to determine which “John Smith” he is from among the thousands of John Smiths that it may have in its records or that may be documented in the records of the CRA that it is using for this process. Once the resolution process is complete, the process of validation and verification occurs. In this process, steps are taken to verify whether the applicant is really who they claim to be. For example, in the case of John Smith, it is not enough simply to determine which John Smith is being claimed, because the claimant may not really be John Smith at all. Organizations need to obtain electronic evidence from the remote applicant to verify their identity. Organizations can use a variety of techniques to accomplish this goal. Knowledge-based verification is a technique that commonly has been used for this purpose. 
With knowledge-based verification, organizations ask applicants detailed and personal questions, under the presumption that only the real person will know the answers to these questions. To do this, the organization poses a series of multiple choice questions through an online web form, and the applicant selects the appropriate responses and submits the answers through the web form. If the applicant has chosen the correct responses, through the remotely accessed web form, their identity is considered to be verified, and the validation and verification step is complete. Figure 1 depicts the typical process that organizations use for remote identity proofing (including the use of knowledge-based verification). The Role of CRAs in Knowledge-Based Verification As previously mentioned, to perform knowledge-based verification for remote identity-proofing, federal agencies and other organizations often use services provided by CRAs. The CRAs assemble and evaluate consumer credit and other information from a wide variety of sources. Equifax, Experian, and TransUnion—the three nationwide CRAs—use the personal information they obtain about individuals from organizations, such as financial institutions, utilities, cell phone service providers, public records, and government sources, to compile credit files containing detailed records about individuals. They then use the information in these files to offer a variety of services to federal agencies and other entities. These services can include identity verification, as well as verification of income and employment of a candidate for a job or an applicant for benefits or services. To support organizations that rely on knowledge-based verification, CRAs generate multiple choice questions that organizations can use to test applicants’ knowledge of information in their credit files. 
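The question-and-answer flow just described can be sketched as a small, hypothetical verification routine. Everything below is illustrative: the credit-file fields, question wording, distractor answers, and pass threshold are assumptions for demonstration, not any CRA's actual data or implementation.

```python
import random

# Hypothetical credit-file record; real CRA files are far richer.
CREDIT_FILE = {
    "prior_street": "114 Oak Ave",
    "auto_lender": "First National Bank",
    "card_open_year": "2015",
}

# Incorrect choices shown alongside the correct answer (all invented).
DISTRACTORS = {
    "prior_street": ["22 Elm St", "9 Pine Rd", "301 Main St"],
    "auto_lender": ["Coastal Credit Union", "Metro Finance", "None of the above"],
    "card_open_year": ["2011", "2013", "2018"],
}

QUESTION_TEXT = {
    "prior_street": "Which of these is a previous street address of yours?",
    "auto_lender": "Which lender financed your most recent auto loan?",
    "card_open_year": "In what year did you open your oldest credit card?",
}

def generate_quiz(rng=random.Random(0)):
    """Build multiple-choice questions from the credit file."""
    quiz = []
    for field, correct in CREDIT_FILE.items():
        choices = DISTRACTORS[field] + [correct]
        rng.shuffle(choices)
        quiz.append({"field": field,
                     "question": QUESTION_TEXT[field],
                     "choices": choices})
    return quiz

def verify_answers(quiz, answers, required=3):
    """Compare the applicant's submitted answers with the credit file;
    here, all questions must be answered correctly to verify identity."""
    correct = sum(1 for item, ans in zip(quiz, answers)
                  if ans == CREDIT_FILE[item["field"]])
    return correct >= required
```

The sketch also illustrates the weakness discussed later in this report: `verify_answers` cannot distinguish the genuine applicant from an attacker who has obtained the same personal information from a data breach or public records.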
The organizations using the CRA services do not generate the questions themselves, because they do not have access to the credit history information maintained by the CRAs. Rather, the CRAs’ remote identity proofing systems transmit the questions and multiple choice answers to the organization through an automated electronic connection with the organization’s website. The organization’s website then displays the questions and multiple choice answers to the applicant through the web application that the applicant is using to apply for access to benefits or services. Typically, the questions generated by CRA identity proofing systems ask about lenders, mortgage details, current and past home addresses, or credit card accounts. Once the applicant has selected answers to the questions and enters them in the online application, the organization’s automated system electronically relays the applicant’s responses to the CRA’s remote identity proofing system; this system then compares the responses with information in the applicant’s credit file. If this comparison determines that the applicant correctly responded to the questions, then the applicant’s identity is considered to be verified. The CRA’s identity proofing system electronically transmits the results of its comparison to the organization’s website to allow the applicant, whose identity is now considered verified, to proceed with applying for benefits or services. OMB and NIST Provide Guidance to Agencies on Information Security Management The Federal Information Security Modernization Act of 2014 (FISMA) is intended to provide a comprehensive framework for ensuring the effectiveness of security controls over information resources that support federal operations and assets, as well as the effective oversight of information security risks. 
FISMA assigns responsibility to the head of each agency to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information systems used or operated by an agency or on behalf of an agency. FISMA assigns responsibility to NIST for developing comprehensive information security standards and guidelines for federal agencies. These include standards for categorizing information and information systems according to ranges of risk levels and guidelines for establishing minimum security requirements for federal information systems. To fulfill its FISMA responsibilities, NIST has issued technical guidance on many different aspects of information security, including identity proofing. NIST issued its first guidance related to identity proofing in 2011. In 2017, NIST released an updated version of its guidance, which included guidance on identity proofing that outlines technical requirements for resolving, validating, and verifying an identity based on evidence obtained from a remote applicant. OMB requires agencies to implement NIST’s technical guidance on information security subjects within one year of issuance. In the case of NIST’s updated guidelines for remote identity proofing, agencies would have needed to implement the guidance by June 2018 to meet OMB’s time frames. FISMA assigns responsibility to OMB for overseeing agencies’ information security policies and practices. OMB, in turn, has established requirements for federal information security programs and has assigned agency responsibilities to fulfill the requirements of statutes such as FISMA. OMB policies and guidance require agencies to employ a risk-based approach and decision making to ensure that security and privacy capabilities are sufficient to protect agency assets, operations, and individuals. 
OMB has not issued guidance to agencies specifically on identity proofing. However, OMB developed a draft policy document in April 2018 that is intended to provide guidance to agencies on strengthening the security of information and information systems to ensure safe and secure access to federal benefits and services. While it has not yet been issued, the draft policy indicates that OMB intends to provide policy-level guidance for agencies to identify, credential, monitor, and manage user access to information and information systems and adopt sound processes for authentication and access control. Selected Agencies Use a Variety of Remote Identity Proofing Techniques, Including Knowledge- Based Verification The six agencies that we reviewed rely on a variety of remote identity proofing techniques, including knowledge-based verification, to ensure that the individuals who enroll for federal benefits and services are who they claim to be. These agencies typically use knowledge-based verification services offered by CRAs, which generate questions for the individuals applying for benefits or services and check the applicants’ answers to verify their identity. However, to the extent that they use knowledge-based verification, these agencies face risks because an attacker could obtain and use an individual’s personal information to answer knowledge-based verification questions and successfully impersonate that individual. Knowledge-Based Verification Poses Risks, but Alternative Techniques Have Been Developed That Are More Secure Although commonly used by federal agencies for remote identity proofing, knowledge-based verification techniques pose security risks because an attacker could obtain and use an individual’s personal information to answer knowledge-based verification questions and successfully impersonate that individual. 
As such, NIST’s 2017 guidance on remote identity proofing effectively prohibits the use of knowledge-based verification for sensitive applications. The guidance states that the ease with which an attacker can discover the answers to many knowledge-based questions and the relatively small number of possible responses cause the method to have an unacceptably high risk of being successfully compromised by an attacker. In its guidance, NIST states that it no longer recommends using knowledge-based verification because it tends to be error-prone and can be frustrating for users, given the limitations of human memory. According to NIST officials, private-sector providers of remote identity proofing solutions and officials at the agencies we reviewed, alternative methods for verifying an individual’s identity are available that are not knowledge-based and can provide stronger security assurance than knowledge-based verification. Specific examples of such techniques include: Remote assessment of physical credentials. Recently developed technology allows an agency to remotely examine a physical credential, such as a driver’s license or a passport, to verify an individual’s identity. For example, an agency may have the individual use their mobile device, such as a cell phone, to capture and submit an image of their driver’s license to an agency or commercial provider of identity proofing services. The agency or commercial provider can then compare the image to documentation on file to confirm the authenticity of the credential. Technological advances in how images are captured and processed by mobile devices, such as cell phones, can provide improved assurance that the photos transmitted by these devices are genuine and that the credentials are authentic. Verification of mobile device possession. Many individuals use their cell phones on a near-continuous basis and keep their phones with them. 
These actions create a record of the owner’s connection with these mobile devices that is difficult for an imposter to falsify. Accordingly, an organization can query records maintained by cell phone carriers to verify the identity of an individual who is in possession of a specific mobile device and phone number. By doing this, the organization can determine how long the individual has had that particular device, compare unique identifiers, and determine if the location matches the individual’s billing information. The organization can be confident that the individual legitimately possesses the device if the device has been in use for some time and its current location corresponds to one where the device has been known to be used by its owner. Since an individual’s location information is obtained directly from the device and compared with cell phone carrier records, data entry errors by the individual, such as mistyping a phone number, are minimized and the risk of impersonation is reduced. Verification through mobile device confirmation codes. An additional method that organizations use to help verify an individual’s identity is to verify that an individual possesses a telephone number that they have supplied as part of the remote identity proofing process. Organizations perform verification of an individual’s possession of a phone number by sending a code to that phone number through the short message service (SMS) or another protocol, and asking the individual to enter the code into the online identity proofing application. This process can provide additional assurance about the individual’s identity because the verification code is transmitted through a separate electronic channel (specifically, the telephone system) from the online application where the remote identity proofing process was initiated. 
However, unlike the process for verifying the possession of a mobile device, the use of these codes may not prevent an imposter from using a stolen phone or stolen phone number. An imposter may be able to successfully complete the identity verification process if the applicant’s possession of the physical device has not been independently verified. In its remote identity proofing guidance, NIST requires federal agencies to use confirmation codes as a supplement to other identity proofing measures. Verification through postal mail confirmation codes. Another method that organizations use to help verify an individual’s identity is to send a confirmation code, such as a personal identification number (PIN), through the mail system to the individual’s address of record. The individual then enters the PIN in the organization’s online application to confirm that they received the code in the mail. Like the use of mobile device confirmation codes, the use of postal mail codes can provide additional assurance about the individual’s identity because the code is sent through a separate medium from the online application where the remote identity proofing process was initiated. Even with these alternatives to knowledge-based verification, however, there are limitations to the security assurances that can be provided. One way to overcome these security limitations is for a trained professional to conduct identity proofing in person. This is generally considered to be a strong approach because it allows for direct physical comparison of an individual’s documentation, including photographic evidence, to the individual attempting to enroll. Verification of the credentials being submitted can be performed by checking electronic records in tandem with physical inspection. Figure 2 provides examples of alternative identity verification and validation methods that federal agencies have reported using. 
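The confirmation-code step, whether delivered by SMS or postal mail, can be sketched in a few lines: a short random code is generated, recorded with an issue time, and checked when the applicant types it back into the online application. The six-digit code length, ten-minute expiry, single-use rule, and in-memory store below are illustrative assumptions; a production system would persist the record and send the code out of band.

```python
import secrets
import time

CODE_TTL_SECONDS = 10 * 60   # e.g., minutes for SMS; a postal PIN would allow days
_pending = {}                # applicant_id -> (code, issued_at); illustrative store

def issue_code(applicant_id, now=None):
    """Generate a six-digit code and record when it was issued.
    A real system would transmit it by SMS or postal mail here."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[applicant_id] = (code, now if now is not None else time.time())
    return code

def confirm_code(applicant_id, submitted, now=None):
    """Check a submitted code against the pending one, enforcing
    expiry, single use, and a timing-safe comparison."""
    entry = _pending.get(applicant_id)
    if entry is None:
        return False
    code, issued_at = entry
    now = now if now is not None else time.time()
    if now - issued_at > CODE_TTL_SECONDS:
        del _pending[applicant_id]      # expired codes are discarded
        return False
    ok = secrets.compare_digest(code, submitted)
    if ok:
        del _pending[applicant_id]      # a code may be used only once
    return ok
```

Note that this sketch confirms only receipt of the code at the supplied phone number or address; as the discussion above explains, it does not by itself establish that the recipient legitimately controls that number, which is why NIST treats confirmation codes as a supplement to other identity proofing measures.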
Each of the alternatives to knowledge-based verification has other limitations, including implementation challenges. For example, in-person identity proofing is expensive to implement because it requires organizations to staff and maintain offices or other physical access points in multiple locations, and it can be inconvenient for applicants because it requires travel to one of these locations. Mobile device verification may not always be viable because not all applicants possess a mobile device that can be used to verify their identity. In addition, fraudsters can manipulate or “spoof” phone numbers that redirect phone calls and SMS confirmation codes to an attacker. Sending confirmation codes by postal mail can result in a delay in an individual being able to gain access to the services or benefits he or she is seeking. Several of the Selected Agencies Have Taken Steps to Better Ensure the Effectiveness of Their Remote Identity Proofing Processes, but Only Two Have Eliminated the Use of Knowledge-Based Verification As previously discussed, in 2017, NIST released an updated version of its technical guidance on remote identity proofing. NIST’s 2017 guidance effectively prohibits the use of knowledge-based verification for sensitive applications because of the security risks associated with this technique. For applications where identity verification is important, the guidance prohibits agencies from providing access to online applications based solely on correct responses to knowledge-based questions. Rather, the guidance provides detailed specifications regarding the required features of the identity evidence (such as driver’s licenses and birth certificates) that an individual is to provide and how agencies are to verify that evidence. 
Agencies are restricted to using knowledge-based verification only for the very limited role of linking a single piece of identity evidence to an individual and only for applications where the identity verification process is not of critical importance. As a result, agencies are effectively prohibited from using traditional knowledge-based questions—the type of questions typically used in identity verification services provided by CRAs—as part of their processes. Thus, in order for agencies to ensure the effectiveness of their remote identity proofing processes, they are required to find ways to eliminate the use of knowledge-based verification. Three of the six agencies we reviewed—GSA, IRS, and VA—have taken steps to enhance the effectiveness of their remote identity proofing processes. GSA and IRS recently eliminated knowledge-based verification from their Login.gov and Get Transcript services, respectively. VA has implemented alternative methods, but only as a supplement to the continued use of knowledge-based verification. Among the other three agencies, two of them–SSA and USPS–are investigating alternative methods and have stated that they intend to reduce or eliminate their use of knowledge-based verification sometime in the future; however, these agencies do not yet have specific plans for doing so. One other agency, CMS, has no plans to reduce or eliminate knowledge-based verification for remote identity proofing. General Services Administration Eliminated Knowledge-Based Verification from its Login.gov Service and Is Implementing Additional Verification Techniques GSA has implemented alternative methods to knowledge-based verification for Login.gov. While GSA used knowledge-based verification on its Login.gov service in the past, the agency has recently implemented alternative verification techniques that do not rely on knowledge-based verification. 
Specifically, GSA conducts independent verification of an applicant’s possession of a mobile device, an alternative technique we previously discussed. GSA contracts with a third-party vendor to compare status information about the phone number provided by an individual with telephone company records to confirm the individual’s identity. Further, GSA officials responsible for Login.gov stated that they plan to include additional alternative verification methods to Login.gov in the near future. Specifically, by the end of May 2019, the agency plans to implement software capable of analyzing and validating photos of documentation, such as driver’s licenses, provided by applicants to further enhance the verification of their identities. In 2018, the agency tested this technology through a pilot program. GSA officials responsible for Login.gov stated that they are pursuing several other initiatives to further enhance the verification techniques they use for Login.gov. For example, they are researching new software methods for confirming the authenticity of face images and other biometric information that could be transmitted by applicants to confirm their identity. According to the officials, additional work is needed to ensure that a fraudulent image, such as a photo of a mask, is not being provided in lieu of a live image—a threat known as a “presentation attack.” The GSA officials also said they would like to work with other federal agencies to leverage data that have already been verified, such as USPS-validated mailing addresses, passport and visa information maintained by the Department of State, and IRS tax data. However, the officials cited legal and regulatory restrictions to sharing agency data as a challenge to being able to make use of resources such as these. 
GSA’s recent elimination of knowledge-based verification from its Login.gov identity proofing process is consistent with NIST’s 2017 guidance on remote identity proofing and reduces the risk of fraud associated with using Login.gov. The additional enhancements and coordination that the agency is working on, if successful, will likely further improve the effectiveness of its remote identity proofing processes. Internal Revenue Service Eliminated Knowledge- Based Verification and Is Examining Additional Verification Techniques IRS has implemented alternative methods to knowledge-based verification for Get Transcript. While IRS used knowledge-based verification on its Get Transcript service in the past, the agency has recently implemented alternative verification techniques that do not rely on knowledge-based verification. Specifically, IRS conducts independent verification of an applicant’s possession of a mobile device and uses mobile device confirmation codes, alternative techniques we previously discussed. IRS contracts with a CRA to compare status information about the phone number provided by an individual with telephone company records to confirm the individual’s identity. Further, IRS officials responsible for Get Transcript’s identity proofing and authentication services stated that they plan to continue to add alternative verification methods to Get Transcript in the future. They stated that, in June 2017, in response to the release of NIST’s updated digital identity proofing requirements, the agency started a task force to examine the updated requirements and make recommendations on possible changes to IRS’s processes to meet the updated guidance. According to the officials, the task force developed a digital identity risk assessment process that the agency started using to assess external facing online transactions in October 2018. 
IRS’s recent elimination of knowledge-based verification from its Get Transcript identity proofing process and the additional enhancements that the agency is working on, if successful, will likely further improve the effectiveness of its remote identity proofing processes.

Department of Veterans Affairs Has Implemented Some Alternative Methods, but Has No Plans to Reduce or Eliminate Its Remaining Use of Knowledge-Based Verification

VA has taken steps to better ensure the effectiveness of its remote identity proofing processes, but it continues to rely on knowledge-based verification for certain categories of individuals. As previously mentioned, VA relies on two different providers, a commercial identity verification service (called ID.me) and DOD’s DS Logon, to conduct identity proofing for its benefits systems. These providers use a mix of knowledge-based verification and alternative techniques. DOD’s DS Logon verifies applicants using knowledge-based verification, while the commercial provider uses both knowledge-based verification and stronger alternative techniques. For example, the commercial provider uses cellular phone data to verify an applicant’s identity based on the device subscriber’s relationship to a claimed identity and the subscriber’s tenure with the carrier. VA’s commercial provider can also remotely authenticate identity documents. In this regard, applicants can scan the front and back of driver’s licenses, state identification cards, and passports, and upload the images to the commercial provider, which then analyzes the images to ensure that the documents meet standards and contain valid information. Further, the provider verifies applicants by having them take photos of themselves and then using facial recognition technology to match the applicants’ images with their identity documents.
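The document-and-selfie flow just described can be sketched as a simple pipeline. Every step function below is a placeholder assumption for capabilities—document validation, portrait extraction, face matching—that the commercial provider implements with specialized technology; none of the names reflect an actual product interface:

```python
# Illustrative pipeline mirroring a document-plus-selfie identity check.
# The three step callables are placeholders: real systems use OCR,
# security-feature checks, and trained face-matching models.
def proof_with_documents(doc_front, doc_back, selfie,
                         validate_document, extract_portrait, faces_match):
    """Return True only if the document validates and the selfie matches it."""
    if not validate_document(doc_front, doc_back):   # format + security features
        return False
    portrait = extract_portrait(doc_front)           # photo printed on the ID
    if portrait is None:
        return False
    return faces_match(portrait, selfie)             # biometric comparison
```

Structuring the check this way makes the failure mode explicit: a forged document, a missing portrait, or a non-matching selfie each independently rejects the applicant.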
VA officials in the agency’s information technology and benefits program offices believe that the alternative forms of identity proofing used by its commercial provider as a supplement to knowledge-based verification provide an acceptable level of assurance. Nevertheless, the officials acknowledged that it is important to eventually eliminate knowledge-based verification from the agency’s identity proofing processes. However, the agency does not have specific plans with time frames and milestones to eliminate the use of knowledge-based verification. VA officials stated that the agency has not yet established plans for doing so because of its reliance on DOD’s DS Logon service, which still uses knowledge-based verification. Until it develops a specific plan with time frames and milestones to eliminate its reliance on knowledge-based verification, VA and the individuals it serves will continue to face a degree of identity fraud risk that could be reduced.

Social Security Administration Intends to Eliminate Knowledge-Based Verification, but Does Not Yet Have Specific Plans for Doing So

SSA continues to rely on knowledge-based verification for its My Social Security service, but SSA officials stated that the agency intends to eliminate knowledge-based verification in the future. According to the SSA Chief Information Security Officer, in fiscal year 2019, the agency intends to pilot alternative verification methods, such as using the commercial ID.me service. In addition, the official said SSA plans to research other alternatives that could be used to replace knowledge-based verification, including modernizing its legacy systems so that they can use Login.gov or another shared identity management platform. The agency has set a goal of eliminating the use of knowledge-based verification in fiscal year 2020.
As an interim measure to reduce the risks associated with knowledge-based verification, SSA officials stated that they limit the period of time and the number of attempts that an individual has to answer the knowledge-based verification questions. These limitations are designed to prevent a potential fraudster from researching the answers to the questions. SSA also sends a confirmation code via email or SMS, which individuals must enter online before being given access to their account. SSA does not yet have specific plans and milestones to achieve its goal of implementing enhanced remote identity proofing processes by fiscal year 2020. SSA officials stated that they cannot develop specific plans until they are able to identify an alternative method or methods that can be used successfully by all members of the public with whom the agency interacts. Until SSA develops specific solutions for eliminating knowledge-based verification, the agency and the individuals that rely on its services will remain at an increased risk of identity fraud.

United States Postal Service Intends to Eliminate Its Use of Knowledge-Based Verification, but Does Not Yet Have Complete Plans and Time Frames for Doing So

USPS has not yet fully implemented alternative methods to better ensure the effectiveness of its remote identity proofing processes. According to officials responsible for the agency’s identity proofing program, USPS mitigates the risk of using knowledge-based verification by sending a written confirmation to the physical address associated with each identity proofing transaction and providing instructions for what to do if the transaction is unauthorized or fraudulent. In addition to this mitigation measure, the officials reported that they regularly evaluate new capabilities to further increase confidence in their identity proofing processes and are planning several additional measures to supplement the use of knowledge-based verification.
Specifically, in September 2018, USPS began allowing customers to request a confirmation code via the mail to enroll in Informed Delivery. In addition, the agency is planning to implement verification of mobile device possession and SMS enrollment code verification in 2019 and other techniques at a subsequent time. According to USPS officials, these alternative techniques are expected to reduce the agency’s use of knowledge-based verification. The officials said that USPS has not completely eliminated the use of knowledge-based verification because available alternatives to the agency’s current processes do not satisfactorily address critical factors that they consider when deciding whether to implement alternative processes. These factors include cost, projected ability to reduce fraud and protect consumers, projected extent of the population that could be covered, and the burden on customers to complete the process. The officials stated that the agency intends to implement alternative methods in the future for its Informed Delivery service but does not yet have specific plans with time frames and milestones. The officials noted that part of the reason for the slow implementation of alternative methods is that NIST technical guidance does not provide direction on how alternative methods are to be implemented and that additional guidance from NIST would be helpful to the agency for developing and implementing a plan to eliminate knowledge-based verification for the Informed Delivery service. While the supplemental processes implemented by USPS to date may help to reduce the risks associated with using knowledge-based verification, they do not eliminate such risks. Until it completes a plan with time frames and milestones to eliminate its reliance on knowledge-based verification for Informed Delivery, USPS and its customers will remain at increased risk of identity fraud.
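The confirmation-code techniques described for SSA and USPS above share a common shape: a randomly generated one-time code, a bounded validity window, and a cap on attempts. A minimal sketch follows; the code length, time limits, and attempt cap are illustrative assumptions, not either agency’s implementation:

```python
import secrets
import time

# Illustrative one-time enrollment code with an expiry window and an
# attempt cap, mirroring the mailed/SMS confirmation codes and the
# time and attempt limits described above. Thresholds are assumptions.
class OneTimeCode:
    def __init__(self, ttl_seconds, max_attempts=3, now=time.time):
        self.code = f"{secrets.randbelow(10**6):06d}"  # six-digit code
        self.now = now
        self.expires_at = now() + ttl_seconds
        self.attempts_left = max_attempts

    def verify(self, submitted):
        """Accept only an unexpired code, within the attempt budget."""
        if self.attempts_left <= 0 or self.now() > self.expires_at:
            return False
        self.attempts_left -= 1
        # constant-time comparison avoids leaking matching prefixes
        return secrets.compare_digest(self.code, submitted)
```

A code mailed to a physical address might use a time-to-live of days, while an SMS code might use minutes; the structure is the same, and both the expiry and the attempt cap limit how long a fraudster has to guess or research the code.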
Centers for Medicare and Medicaid Services Has No Plans to Reduce or Eliminate Its Use of Knowledge-Based Verification

CMS has not implemented alternative methods to better ensure the effectiveness of the remote identity proofing processes used for its Healthcare.gov service. CMS officials in the Office of Information Technology and the Office of Consumer Information and Insurance Oversight stated that the agency uses a two-step email verification process to reduce the risks associated with knowledge-based verification. Specifically, individuals applying for an account on Healthcare.gov provide basic information (e.g., name, email address, password) and then are asked to acknowledge an email confirmation they receive from CMS. The email confirmation is intended to prove that the individual applying for a Healthcare.gov account is in possession of the email address that the individual provided to CMS. However, this process confirms only the email address that was used to create the account; it does not confirm the identity of the individual who is applying for the account. CMS stated that it uses this process because other mitigating measures are not cost-effective. However, NIST’s guidance does not permit agencies to use knowledge-based verification on the basis of cost-effectiveness. Further, the agency does not have specific plans with time frames or milestones to eliminate its use of knowledge-based verification for Healthcare.gov. CMS officials acknowledged that they do not have a plan to reduce or eliminate the use of knowledge-based verification because they have not yet identified any effective alternatives to knowledge-based verification for Healthcare.gov. According to these officials, based on a user study they conducted, individuals who use the agency’s services prefer knowledge-based verification over any available alternatives.
In addition, the officials stated that certain alternatives, such as mobile device verification, may not always be suitable for the population they serve. As one example, not all applicants have a mobile device that could be used to remotely verify the individual’s identity. The CMS officials noted that NIST technical guidance does not provide direction on how alternative methods are to be implemented, given that they may not always be suitable for the population served by Healthcare.gov. However, until CMS takes steps to develop a plan with time frames and milestones to eliminate the use of knowledge-based verification, CMS and Healthcare.gov applicants will remain at an increased risk of identity fraud.

NIST and OMB Have Not Provided Sufficient Guidance to Ensure Agencies Move to More Secure Forms of Remote Identity Proofing

While NIST has issued guidance to agencies related to identity proofing and OMB is drafting identity management guidance, these efforts are not sufficient to ensure that agencies adopt secure methods for remote identity proofing. As previously discussed, NIST’s guidance effectively prohibits the use of knowledge-based verification during the validation and verification phases of the remote identity proofing process, but does not provide direction to agencies on how to successfully implement alternative methods for remote identity proofing for large and diverse segments of the population. Further, OMB has not issued guidance requiring agencies to report on their implementation of remote identity proofing processes, which is essential for monitoring agencies’ progress.

NIST Guidance Does Not Provide Sufficient Direction to Agencies on How to Implement Alternative Methods for Remote Identity Proofing

Best practices in IT management state that organizations should provide clear direction in order to achieve objectives.
Specifically, the Control Objectives for Information and Related Technologies (COBIT), a framework of best practices for IT governance, states that organizations should provide clear direction for IT projects, including relevant and usable guidance, and ensure that those implementing the technology have a clear understanding of what needs to be delivered and how. However, NIST has not issued any supplemental implementation guidance to its 2017 technical guidance to ensure that agencies have a clear understanding of what needs to be done to implement alternative methods of remote identity proofing, as called for in the technical guidance. For example, NIST’s technical guidance provides abstract descriptions of identity evidence that individuals must provide, such as a credential containing a photograph or other biometric identifier as well as anti-counterfeiting security features. The guidance states that such credentials can be provided in person or remotely but does not detail the processes needed for providing credentials remotely. In addition, the guidance does not discuss the advantages and limitations of currently available technologies that agencies could successfully use to remotely verify credentials provided by individuals, nor does it make recommendations to agencies on which technologies should be adopted. As previously discussed, several potential limitations could make choosing an alternative method difficult. Technologies such as secure, remote verification of a physical credential may not be commercially available. Also, some alternative technologies require that individuals use cell phones and maintain a verifiable record of having them in their possession. The NIST guidance does not discuss how agencies should accommodate segments of the public who do not possess advanced technological devices, such as cell phones, that may be needed for successful remote verification.
Because the guidance does not include specific advice or direction on implementing alternative technologies, agencies may be unable to determine what alternative methods are viable for the populations they serve. As previously discussed, several of the agencies we reviewed send confirmation codes to applicants via cell phone, email, or postal mail, measures that they believe compensate for the risks associated with using knowledge-based verification. However, NIST officials do not consider such methods for remote verification to be effective in compensating for the risks associated with knowledge-based verification. Instead, the NIST technical guidance requires agencies to send confirmation codes by mail when they use any remote identity proofing method, including more advanced, alternative verification methods, such as verification of mobile device possession. Officials from CMS, SSA, and USPS stated that they have not eliminated their use of knowledge-based verification in part because the existing NIST technical guidance does not provide direction on how alternative methods are to be implemented, given the various limitations of those alternative methods that agencies have identified. The officials stated that federal agencies could benefit from additional guidance on implementing the alternative verification techniques called for in the NIST technical guidance. In response to these agencies’ comments about being unable to fully implement the remote identity proofing guidance, NIST officials stated that agencies were expected to use their own judgment to determine how to meet the remote identity proofing requirements. The officials added that it was NIST’s position that the updated guidance was comprehensive enough for agencies to follow. Thus, at the time of our review, NIST did not have plans to assist agencies by issuing implementation guidance to supplement its existing technical guidance.
NIST officials stated that they are available to provide assistance on an individual basis to agencies that seek their advice. Without additional guidance from NIST on how agencies are to implement the alternative identity proofing methods specified in NIST’s existing technical guidance, agencies may not be using the most effective and secure identity proofing methods, thus exposing their systems to risk of fraud.

OMB Guidance Does Not Include Reporting Requirements to Facilitate Monitoring of Agencies’ Implementation of Secure Remote Identity Proofing

FISMA requires the Director of OMB to oversee agency information security policies and practices. However, OMB has not provided agencies with guidance establishing reporting requirements for OMB to use in monitoring agencies’ progress in implementing secure remote identity proofing processes. For example, OMB has not proposed including reporting requirements for remote identity proofing in its draft policy on identity, credential, and access management, nor has it included reporting requirements in its FISMA reporting guidance to agencies for fiscal year 2019. According to OMB staff, OMB plans to issue guidance to agencies on the implementation of identity, credential, and access management. OMB issued a draft of this guidance for public comment in April 2018. However, the draft guidance does not include a requirement for agencies to report on progress in implementing secure remote identity proofing processes. Because it does not require agency reporting on progress in implementing secure remote identity proofing processes, OMB does not have visibility into the extent to which agencies rely on insecure methods, particularly knowledge-based verification. Without establishing effective oversight measures, OMB cannot adequately monitor agency progress in implementing the secure identity proofing methods called for in NIST’s 2017 technical guidance.
As a result, agencies may be at risk of implementing weak methods of remote identity proofing for individuals who seek access to services and benefits from the federal government, which may put both the federal government and individuals at risk for fraud.

Conclusions

The six agencies that we reviewed rely on a variety of remote identity proofing techniques to help ensure that the individuals who enroll for federal benefits and services are who they claim to be. Several of the selected agencies use knowledge-based verification processes that rely on CRAs to pose questions to individuals and check their answers as a way of verifying their identities before granting them enrollment in a federal benefit or service. However, given recent breaches of sensitive personal information, these agencies face risks because fraudsters may be able to obtain and use an individual’s personal information to answer knowledge-based verification questions and successfully impersonate that individual to fraudulently obtain federal benefits and services. Two agencies we reviewed, GSA and IRS, recently implemented remote identity proofing processes for Login.gov and Get Transcript that allow individuals to enroll online without relying on knowledge-based verification. However, four agencies (CMS, SSA, USPS, and VA) were still using knowledge-based verification to conduct remote identity proofing. Moreover, none of the four agencies has developed specific plans to eliminate knowledge-based methods from its processes. Without such plans, these federal agencies and the individuals that rely on such processes will remain at risk for identity fraud. NIST has issued technical guidance regarding remote identity proofing, but it may not be sufficient to help ensure that federal agencies adopt more secure methods.
NIST’s guidance does not provide direction on how agencies can adopt more secure alternatives to knowledge-based verification while also addressing issues of technical feasibility and usability for all members of the public. In addition, OMB has not issued guidance setting agency reporting requirements that OMB could use to track implementation of more secure processes across the federal government. Without additional guidance, federal agencies are likely to continue to rely on risky knowledge-based verification that could be used to fraudulently gain access to federal benefit programs and services.

Recommendations for Executive Action

We are making a total of six recommendations to CMS, NIST, OMB, SSA, USPS, and VA. Specifically:

The Administrator of the Centers for Medicare and Medicaid Services should develop a plan with time frames and milestones to discontinue knowledge-based verification, such as by using Login.gov or other alternative verification techniques. (Recommendation 1)

The Director of the National Institute of Standards and Technology should supplement the agency’s 2017 technical guidance with additional guidance to assist federal agencies in determining and implementing alternatives to knowledge-based verification that are most suitable for their applications. (Recommendation 2)

The Director of the Office of Management and Budget should issue guidance requiring federal agencies to report on their progress in adopting secure identity proofing processes. (Recommendation 3)

The Commissioner of Social Security should develop a plan with specific milestones to discontinue knowledge-based verification, such as by using Login.gov or other alternative verification techniques. (Recommendation 4)

The Postmaster General of the United States should complete a plan with time frames and milestones to discontinue knowledge-based verification, such as by using Login.gov or other alternative verification techniques.
(Recommendation 5)

The Secretary of the Department of Veterans Affairs should develop a plan with time frames and milestones to discontinue knowledge-based verification, such as by using Login.gov or other alternative verification techniques. (Recommendation 6)

Agency Comments and Our Evaluation

We requested comments on a draft of this report from the eight agencies included in our review. In response, we received written comments from six agencies—Commerce (on behalf of NIST), HHS (on behalf of CMS), IRS, SSA, USPS, and VA. Their comments are reprinted in appendices II through VII, respectively. Of the six agencies to which we made recommendations, four (Commerce, SSA, USPS, and VA) agreed with our recommendations, and one agency (HHS) did not concur with our recommendation. One agency (OMB) did not state whether it agreed or disagreed with our recommendation. In addition, multiple agencies (GSA, IRS, OMB, USPS, and VA) provided technical comments on the draft report, which we have incorporated, as appropriate. The following four agencies agreed with the recommendations that we directed to them: Commerce agreed with our recommendation. The department stated that it will develop additional guidance to assist federal agencies with alternatives to knowledge-based verification and expects to do so within one year from issuance of this report. Comments from Commerce are reprinted in appendix II. SSA agreed with our recommendation. The agency stated that it will continue to seek improvements in its existing remote identity proofing process. SSA also stated that, in addition to a roadmap it developed in fiscal year 2019 to update its knowledge-based verification process to a more secure multi-factor authentication technology, it will take steps to ensure compliance with NIST standards for remote identity proofing. SSA’s comments are reprinted in appendix V. USPS agreed with our recommendation.
The agency stated that it will be developing a roadmap to implement additional identity proofing tools and techniques through 2020. Comments from USPS are reprinted in appendix VI. VA agreed with our recommendation. The department stated that it will develop a specific plan with time frames and milestones to eliminate knowledge-based verification from the aspects of the remote identity proofing process that it controls. Further, in its response, VA requested that GAO direct a recommendation to the Department of Defense (DOD) to discontinue DS Logon and consider using Login.gov instead. However, we are not issuing any recommendations to DOD because our scope of work did not include auditing DOD’s remote identity proofing processes. Nevertheless, we have adjusted our recommendations to CMS, SSA, USPS, and VA to clarify that Login.gov is one option for identity proofing that they should consider when developing their plans to discontinue the use of knowledge-based verification. VA’s comments are reprinted in appendix VII. One agency did not concur with our recommendation. Specifically, HHS raised several issues related to our findings. The agency stated that it uses a risk-based approach to designing systems controls and that a unilateral prohibition on the use of knowledge-based verification without alternatives is not a feasible solution. We agree with this comment and strongly support a risk-based approach to designing security controls, as required by FISMA. However, we believe that alternatives to knowledge-based verification exist that should be assessed and incorporated as appropriate. Similarly, HHS noted that for other applications across the department, it has considered factors such as consumer user experience, cost, and operational feasibility in addition to NIST guidelines.
We agree that many factors need to be considered in assessing what method or methods of identity proofing are most appropriate for any given application, but we believe it is important for agencies to develop plans for addressing those factors that also eliminate the use of risky techniques, such as knowledge-based verification, that could have a negative impact on consumers and agencies. In response to our specific recommendation to CMS, HHS stated that it does not believe that suitable alternative methods exist that would work for CMS’ population of users, such as those in rural communities, due to distance, or individuals without cell phones. However, we continue to believe that CMS should develop a plan to discontinue the use of knowledge-based verification. We recognize that there are members of the population that may not be reached with certain identity proofing techniques; however, a variety of alternative methods to knowledge-based verification are available that CMS can consider to address the population it serves. Comments from HHS are reprinted in appendix III. In addition, OMB did not state whether it agreed or disagreed with our recommendation. Further, in an email response, OMB staff from the office of the Federal Chief Information Officer provided a technical comment, which we incorporated. However, OMB did not otherwise comment on the report findings or our recommendation made to the agency. IRS also provided written comments on the draft report. In its comments, the agency described the status of its efforts to strengthen identity verification processes, including the fact that it has eliminated the use of knowledge-based verification. Comments from IRS are reprinted in appendix IV. Finally, GSA provided only technical comments on the draft report, as previously mentioned. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees; the Administrators of the Centers for Medicare and Medicaid Services and the General Services Administration; the Commissioners of Internal Revenue and Social Security; the Director of the Office of Management and Budget; the Postmaster General of the United States; and the Secretaries of the Departments of Commerce and Veterans Affairs. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Nick Marinos at (202) 512-9342 or marinosn@gao.gov, or Michael Clements at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) describe selected federal agency practices for remote identity proofing of individuals seeking access to major web-based applications using services provided by consumer reporting agencies and the risks associated with those practices, (2) assess selected federal agencies’ actions to ensure the effectiveness of agencies’ remote identity proofing processes, and (3) assess the sufficiency of federal identity proofing guidance developed by OMB and NIST in assuring the security of federal systems. To address the first objective, we made an initial, non-probability selection of federal agencies that (1) maintained major public-facing web applications to provide access to federal benefits or services and (2) relied on identity proofing solutions provided by the three nationwide consumer reporting agencies (CRAs)—Equifax, Experian, and TransUnion—to verify the identities of individuals applying for such benefits or services.
We considered a “major” application to be one that could involve interaction with millions of individuals across the entire country. To select six agencies from this group, we reviewed prior GAO reports to identify potential agencies for review. We then interviewed officials at these agencies and at CRAs to confirm that these agencies use CRAs as part of their identity proofing processes and to obtain information about additional federal agencies that also employ CRAs for identity proofing for major applications. We included GSA in these interviews because its mission is to support federal agencies and it was likely to be aware of additional federal agencies that fit our criteria. From the information we gained from our interviews and research, we selected these six agencies: the Centers for Medicare and Medicaid Services (CMS), General Services Administration (GSA), Internal Revenue Service (IRS), Social Security Administration (SSA), United States Postal Service (USPS), and the Department of Veterans Affairs (VA). At each of these agencies, we reviewed documentation that described the current remote identity proofing processes the agencies are using for their major public-facing web applications. In addition, we interviewed agency officials responsible for identity proofing to obtain details of the techniques used to verify remote users of these applications. To the extent that these entities used CRAs to conduct knowledge-based verification as part of their remote identity-proofing processes, we discussed the risks associated with using knowledge-based methods as well as the potential advantages and limitations of using alternative methods that are not knowledge-based. We also obtained information from officials at NIST about the risks of knowledge-based methods and the availability of alternative methods. 
To address the second objective, we assessed remote identity proofing processes used by the selected agencies to determine the extent that they rely on knowledge-based verification to enroll online applicants for federal benefits and services. We also identified alternative methods used by these agencies, either in place of or in addition to knowledge-based verification, and assessed the extent to which agencies had implemented these methods to mitigate the risk of using knowledge-based methods. We compared the remote identity proofing processes at these agencies with the requirements as specified in NIST Special Publication 800-63, Digital Identity Guidelines, to determine whether the processes met the requirements of the NIST guidance. We also interviewed officials responsible for these identity proofing programs to obtain information about agencies’ plans, if any, to eliminate the use of knowledge-based verification from their remote identity proofing processes in the future and obtained relevant documentation of such plans. To address the third objective, we reviewed NIST Special Publication 800-63, Digital Identity Guidelines, to identify federal requirements for remote identity proofing. We compared the guidance to the Control Objectives for Information and Related Technologies (COBIT), a framework of best practices for IT governance, to determine whether the NIST guidance contained clear direction, including relevant and usable guidance, to ensure that those implementing the technology have a clear understanding of what needs to be delivered and how. To assess the sufficiency of this guidance, we consulted with subject matter experts at NIST, ID.me, a private-sector provider of remote verification technologies, and relevant officials at the selected federal entities. 
Based on information we had obtained about available alternative methods, we determined the extent to which gaps existed in the NIST guidance with regard to implementation of alternative technologies. We also obtained the views of federal agency officials on the extent to which NIST guidance provided sufficient direction to assist them in implementing appropriate remote identity proofing methods. Further, we reviewed OMB’s draft Identity, Credential, and Access Management policy and compared it to the requirements in FISMA and identified shortfalls. We also interviewed OMB staff to discuss the sufficiency of the office’s current guidance and to determine whether the office planned to issue additional guidance establishing reporting requirements for federal entities or conduct other forms of oversight of federal entities’ implementation of the NIST identity proofing guidance. We conducted this performance audit from November 2017 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Comments from the Department of Commerce

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: Comments from the Internal Revenue Service

Appendix V: Comments from the Social Security Administration

Appendix VI: Comments from the United States Postal Service

Appendix VII: Comments from the Department of Veterans Affairs

Appendix VIII: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the individuals named above, John de Ferrari and John Forrester (assistant directors); Tina Torabi (analyst-in-charge); Bethany Benitez, Christina Bixby, Chris Businsky, Kavita Daitnarayan, Nancy Glover, Andrea Harvey, Thomas Johnson, David Plocher, Rachel Siegel, and Winnie Tsen made key contributions to this report.
Why GAO Did This Study

Many federal agencies rely on CRAs, such as Equifax, to help conduct remote identity proofing. The 2017 breach of data at Equifax raised concerns about federal agencies' remote identity proofing processes. GAO was asked to review federal agencies' remote identity proofing practices in light of the recent Equifax breach and the potential for fraud. The objectives of this review were to (1) describe federal practices for remote identity proofing and the risks associated with those practices, (2) assess federal agencies' actions to ensure the effectiveness of agencies' remote identity proofing processes, and (3) assess the sufficiency of federal identity proofing guidance. To do so, GAO identified remote identity proofing practices used by six agencies (CMS, GSA, IRS, SSA, USPS, and VA) with major, public-facing web applications providing public access to benefits or services. GAO compared the agencies' practices to NIST's remote identity proofing guidance to assess their effectiveness, and compared NIST's and OMB's guidance to requirements in federal law and best practices in IT management to assess the sufficiency of the guidance.

What GAO Found

Remote identity proofing is the process federal agencies and other entities use to verify that the individuals who apply online for benefits and services are who they claim to be. To perform remote identity proofing, agencies that GAO reviewed rely on consumer reporting agencies (CRAs) to conduct a procedure known as knowledge-based verification. This type of verification involves asking applicants seeking federal benefits or services personal questions derived from information found in their credit files, with the assumption that only the true owner of the identity would know the answers. If the applicant responds correctly, their identity is considered to be verified.
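The knowledge-based verification procedure described above can be sketched in simplified form. The credit-file fields, question wording, and pass threshold below are hypothetical illustrations, not any CRA's or agency's actual implementation; the sketch also shows the weakness GAO identifies, namely that anyone holding breached credit-file data can answer correctly.

```python
# Illustrative sketch of knowledge-based verification (KBV).
# Field names, questions, and the pass threshold are hypothetical;
# per NIST SP 800-63, this method is effectively prohibited for
# sensitive federal applications.

def generate_questions(credit_file: dict) -> list:
    """Derive challenge questions from a (hypothetical) credit file."""
    return [
        ("What was your monthly mortgage payment?", credit_file["mortgage_payment"]),
        ("Which lender holds your auto loan?", credit_file["auto_lender"]),
        ("What street did you live on in 2015?", credit_file["prior_street"]),
    ]

def verify_applicant(credit_file: dict, answers: list, pass_threshold: int = 3) -> bool:
    """The applicant is 'verified' only if enough answers match the file."""
    questions = generate_questions(credit_file)
    correct = sum(1 for (_, expected), given in zip(questions, answers)
                  if given == expected)
    return correct >= pass_threshold

credit_file = {"mortgage_payment": "$1,250",
               "auto_lender": "Example Auto Finance",
               "prior_street": "Elm Street"}

# The flaw: an attacker with breached data passes exactly like the owner.
print(verify_applicant(credit_file, ["$1,250", "Example Auto Finance", "Elm Street"]))
```

Because the check depends only on knowing the file's contents, a breach of the file (as at Equifax in 2017) defeats it entirely, which is why the alternative methods discussed below do not rely on such knowledge.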
For example, the Social Security Administration (SSA) uses this technique to verify the identities of individuals seeking access to the “My Social Security” service, which allows them to check the status of benefit applications, request a replacement Social Security or Medicare card, and request other services. However, data stolen in recent breaches, such as the 2017 Equifax breach, could be used fraudulently to respond to knowledge-based verification questions. The risk that an attacker could obtain and use an individual's personal information to answer knowledge-based verification questions and impersonate that individual led the National Institute of Standards and Technology (NIST) to issue guidance in 2017 that effectively prohibits agencies from using knowledge-based verification for sensitive applications. Alternative methods are available that provide stronger security, as shown in Figure 1. However, these methods may have limitations in cost, convenience, and technological maturity, and they may not be viable for all segments of the public. Two of the six agencies that GAO reviewed have eliminated knowledge-based verification. Specifically, the General Services Administration (GSA) and the Internal Revenue Service (IRS) recently developed and began using alternative methods for remote identity proofing for their Login.gov and Get Transcript services that do not rely on knowledge-based verification. One agency—the Department of Veterans Affairs (VA)—has implemented alternative methods for part of its identity proofing process but still relies on knowledge-based verification for some individuals. SSA and the United States Postal Service (USPS) intend to reduce or eliminate their use of knowledge-based verification sometime in the future but do not yet have specific plans for doing so. The Centers for Medicare and Medicaid Services (CMS) has no plans to reduce or eliminate knowledge-based verification for remote identity proofing. 
Several officials cited reasons for not adopting alternative methods, including high costs and implementation challenges for certain segments of the public. For example, mobile device verification may not always be viable because not all applicants possess mobile devices that can be used to verify their identities. Nevertheless, until these agencies take steps to eliminate their use of knowledge-based verification, the individuals they serve will remain at increased risk of identity fraud. NIST has issued guidance to agencies related to identity proofing and OMB has drafted identity management guidance, but their guidance is not sufficient to ensure agencies are adopting such methods. Sound practices in information technology (IT) management state that organizations should provide clear direction on how to implement IT objectives. However, NIST's guidance does not provide direction to agencies on how to successfully implement alternative identity-proofing methods with currently available technologies for all segments of the public. For example, the guidance does not discuss the advantages and limitations of currently available technologies or make recommendations to agencies on which technologies should be adopted. Further, most of the agencies that GAO reviewed reported that they were not able to implement the guidance because of limitations in available technologies for implementing alternative identity proofing methods. NIST officials stated that they believe their guidance is comprehensive, and at the time of our review they did not plan to issue supplemental implementation guidance to assist agencies. The Federal Information Security Modernization Act of 2014 (FISMA) requires that OMB oversee federal agencies' information security practices. Although OMB has the authority under this statute to issue guidance, OMB has not issued guidance requiring agencies to report on their progress in implementing NIST's identity proofing guidance.
OMB staff plan to issue guidance on identity management at federal agencies, but their proposed guidance does not require agencies to report on their progress in implementing NIST guidance. Until NIST provides additional guidance to help agencies move away from knowledge-based verification methods and OMB requires agencies to report on their progress, federal agencies will likely continue to struggle to strengthen their identity proofing processes.

What GAO Recommends

GAO is making recommendations to six agencies to strengthen online identity verification processes: GAO recommends that CMS, SSA, USPS, and VA develop plans to strengthen their remote identity proofing processes by discontinuing knowledge-based verification. GAO recommends that NIST supplement its technical guidance with implementation guidance to assist agencies in adopting more secure remote identity proofing processes. GAO recommends that OMB issue guidance requiring federal agencies to report on their progress in adopting secure identity proofing practices. Four agencies—Commerce (on behalf of NIST), SSA, USPS, and VA—agreed with GAO's recommendations. These agencies outlined the additional steps they plan to take to improve the security of their remote identity proofing processes. One agency, HHS (on behalf of CMS), disagreed with GAO's recommendation because it did not believe that the available alternatives to knowledge-based verification were feasible for the individuals it serves. However, a variety of alternative methods exist, and GAO continues to believe CMS should develop a plan for discontinuing the use of knowledge-based verification. OMB provided a technical comment, which GAO incorporated, but OMB did not provide any comments on GAO's recommendation.
Background

Civilian Marksmanship Program

In 1903, the War Department under President Theodore Roosevelt established the National Board for the Promotion of Rifle Practice, today known as CMP, with the general purpose of promoting the development of marksmanship skills and preparing individuals in the event that they were called upon to serve in the military. For the next several decades, the Army managed and operated CMP. In 1990, we reported that CMP was of limited value to military preparedness because, among other things, CMP’s objectives and goals were not linked to Army mobilization and training plans and program-trained personnel were not tracked. The NDAA for Fiscal Year 1996 moved CMP out of the Army and established CMP as a federally chartered, nonprofit corporation. The act also required the Secretary of the Army to transfer all firearms, ammunition, and funds from sales previously under the control of the Army program to CMP. The governing statutes for CMP and Army support of CMP activities are generally found in chapter 407 of Title 36, U.S. Code. Among other things, these provisions provide for the organization, governance structure, and functions of CMP. These functions include instructing U.S. citizens in marksmanship, promoting practice and safety in the use of firearms, and conducting competitions. For purposes of training and competition, CMP may issue or loan certain rifles, ammunition, repair parts, and other supplies necessary for activities related to CMP to affiliated organizations that provide firearms training to youth, the Boy Scouts of America, 4-H Clubs, Future Farmers of America, and other youth-oriented organizations. CMP is required to give priority to activities that benefit firearms safety, training, and competition for youth and that reach as many youth participants as possible. As one of its functions, CMP conducts rifle and handgun marksmanship competitions such as the annual National Matches.
The National Matches are open to members of the armed forces, the National Guard, the Reserve Officers’ Training Corps, and rifle clubs, among other entities, as well as to civilians. Additionally, CMP may sell certain surplus rifles and M1911 handguns to affiliated organizations that provide training in the use of firearms, such as gun clubs. Finally, CMP is authorized to sell to U.S. citizens who are members of affiliated gun clubs, at fair market value, surplus .22 caliber rimfire rifles, .30 caliber rifles, and .45 caliber M1911/M1911A1 handguns, as well as ammunition, repair parts, and other supplies necessary for target practice.

Agreements between the Army and CMP and the Role of the Army

The Army and CMP have entered into various agreements governing their relationship. An MOU from 2016 currently delineates Army and CMP responsibilities for, among other things, the transfer of surplus firearms and associated parts and ammunition. Appendixes to the MOU identify approximately 170 surplus rifles and handguns that may be transferred to CMP. These surplus firearms include the M1 Garand .30 caliber rifle and other rifles, such as the 1903 Springfield and 1917 Enfield. See figure 1 for a photograph of a surplus M1 .30 caliber rifle packed for shipment. The Army provides a variety of support to CMP, including identifying and reserving certain surplus firearms, ammunition, and parts. At CMP’s request, the Army can transfer these firearms, ammunition, and parts to CMP under procedures established in the MOU. TACOM is the executive agent for small arms. Per the MOU between the Army and CMP, TACOM provides various forms of support to CMP, including facilitating transfers of surplus firearms from U.S. sources and recovery of firearms from foreign countries before transfer to CMP. For surplus firearms transfers within the United States, CMP reimburses the Army for the cost of preparation and transportation, including for the Army’s standard depot operations costs.
The costs of the recovery of firearms, ammunition, and parts from foreign countries are treated as incremental direct costs of Army logistical support, and are also to be reimbursed by CMP. The MOU contains further provisions related to Army and CMP responsibilities and procedures, such as provisions regarding Army support for competitions and the Small Arms Firing School, and CMP’s role in the Army’s Ceremonial Rifle Program. The MOU also contains procedures and responsibilities related to funding. Finally, the MOU specifies certain management internal controls to be undertaken by CMP, including those related to the sale of firearms and the accountability of transferred materiel. CMP’s implementation and management of these controls is to be assessed and documented in an audit report to the Army required by the agreement. To implement the transfer of surplus M1911 handguns, parts, and accessories, the Army and CMP entered into a Memorandum of Agreement in January 2018, and the Army began transferring the surplus M1911 handguns to CMP the same month. The Memorandum of Agreement establishes procedures and requirements for the Army and CMP in addition to those in the 2016 MOU. Among other things, it requires CMP to provide the Army with transaction data for all surplus handguns received and sold on a quarterly basis, including the number transferred to CMP, the number sold, a listing of the serial numbers for handguns sold, and any information CMP has regarding crimes committed with a purchased M1911 handgun. The Memorandum of Agreement further required CMP to take certain actions with respect to security and accountability procedures for surplus M1911 handgun processing and storage. See figure 2 for a photograph of surplus M1911 handguns.

Federal Requirements for Selling Firearms

Chapter 407 of Title 36, U.S. Code, authorizes CMP to sell firearms to individuals who (1) are U.S. citizens, (2) are legally of age, and (3) are members of CMP-affiliated gun clubs.
CMP’s sales of surplus firearms are generally subject to applicable federal, state, and local law. For example, the minimum age to purchase a rifle from a federal firearms licensee (FFL) is 18, while the minimum age to purchase an M1911 handgun from an FFL is 21. Additionally, CMP must establish procedures to obtain a criminal records check with federal and state law enforcement agencies. Certain federal requirements and restrictions related to the sale of firearms are contained in section 922 of Title 18, U.S. Code. Among other things, section 922 prohibits selling or otherwise disposing of firearms to certain prohibited persons. Generally, only FFLs may engage in the business of dealing in firearms. Additionally, FFLs generally may not sell firearms directly to out-of-state customers other than another FFL. However, these restrictions do not apply to CMP for the sale of surplus .22 caliber rimfire and .30 caliber rifles. Specifically, CMP may sell these rifles without operating as an FFL and ship these rifles directly to customers around the country, unless prohibited by that customer’s state or local law. With respect to the sale of the surplus M1911 handguns, CMP must obtain a license and operate as an FFL.

The Army and CMP Have Established Procedures for the Transfer and Sale of Surplus Firearms

The Army and CMP Have Procedures to Address Requirements for the Transfer of Surplus Firearms

The MOU between the Army and CMP delineates a number of responsibilities for both organizations regarding the transfer of surplus firearms. Furthermore, we found that both organizations have established procedures to carry out these responsibilities. Appendixes to the MOU list approximately 170 firearms that the Army has identified as surplus to its needs.
If any of the surplus firearms described in the MOU are identified by the Army in a domestic location, the Army reserves those firearms for transfer to CMP pending a formal written request from CMP for the transfer of the surplus firearms in question. For example, in fiscal year 2017 the Army identified and reserved for transfer to CMP more than 1,000 surplus rifles that various Department of Defense museums found to be surplus to their needs. Under the MOU, once TACOM informs CMP it has reserved surplus firearms that may be transferred, CMP can submit a transfer request for the surplus firearms in writing to TACOM. CMP’s written request must acknowledge that the requested materiel is on the list of firearms approved for transfer; certify that CMP will provide all security, oversight, and accountability—as required by law—of the materiel; and describe how CMP will use the requested materiel. TACOM facilitates the transfer of surplus firearms to CMP as required by the MOU. In some instances, the Army directly ships surplus firearms within the United States to CMP. In other instances, TACOM relies on the Defense Logistics Agency (DLA) to ship the surplus firearms to CMP. If DLA transfers the surplus firearms, the firearms are either shipped directly to CMP or to the DLA facilities located in Anniston, Alabama, where they are released to CMP. Under the MOU, CMP reimburses the Army for certain costs associated with transportation, supply depot operations, and administrative support. Both CMP and DLA officials told us that surplus firearms located at DLA’s facilities in Anniston, Alabama, did not result in shipping costs to the Army because CMP arranges the transfer from the DLA facilities directly to CMP’s facilities, also located in Anniston, Alabama. For example, according to TACOM and CMP officials, the Army did not incur any transportation costs for transferring 8,000 surplus M1911 handguns to CMP in January 2018.
According to an Army official, this was because CMP transported 6,736 M1911 handguns from the DLA facility, an additional 1,242 M1911 handguns from the Center of Military History Museum Support Center, and 22 M1911 handguns from TACOM facilities—all located in Anniston, Alabama—back to its own facility in Anniston for storage. In addition to firearms from domestic locations, the Secretary of the Army may also recover certain surplus firearms furnished to foreign countries on a grant basis under the Foreign Assistance Act and transfer them to CMP. If the Secretary of the Army decides to transfer surplus firearms from a foreign country, TACOM and the Office of the Administrative Assistant to the Secretary of the Army work with State Department Office of Defense Cooperation representatives located in the respective foreign country to recover and facilitate the transfer and shipment of surplus firearms from these recipients to CMP. For example, according to Army officials, CMP received approximately 100,000 surplus M1 rifles in fiscal year 2018; more than 13,000 surplus M1 rifles were recovered and transferred from Turkey in addition to nearly 87,000 surplus M1 rifles from the Philippines. After CMP receives the surplus firearms, the MOU requires CMP to perform all accounting procedures required by the Army for inventory control, including compiling the surplus firearms’ serial numbers. According to CMP officials, CMP uses a commercial point of sale system to track inventory and performs an audit of firearms stored at its facilities annually. For example, according to CMP officials, to count and verify the inventory of incoming shipments of surplus firearms, CMP staff open and inspect each box of surplus firearms upon receipt of the firearms in their facilities in Anniston, Alabama. CMP staff then inventory each firearm by matching the unique serial number found on the receiver of each firearm to the manifest included with the shipment.
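The receiving-inventory step just described — matching each receiver serial number against the shipment manifest — amounts to a set reconciliation. The sketch below is illustrative only; the serial numbers and field names are invented and do not come from CMP's actual point of sale system.

```python
# Illustrative reconciliation of a firearms shipment against its manifest,
# as described in the report: each receiver serial number is checked against
# the manifest, and discrepancies in either direction are flagged.
# Serial numbers here are invented for illustration.

def reconcile_shipment(manifest_serials, received_serials):
    manifest = set(manifest_serials)
    received = set(received_serials)
    return {
        "matched": sorted(manifest & received),
        "missing": sorted(manifest - received),    # on manifest, not in the boxes
        "unexpected": sorted(received - manifest), # in the boxes, not on manifest
    }

manifest = ["M1-1000234", "M1-1000235", "M1-1000236"]
received = ["M1-1000234", "M1-1000236", "M1-1000999"]

result = reconcile_shipment(manifest, received)
print(result["missing"])     # ['M1-1000235']
print(result["unexpected"])  # ['M1-1000999']
```

Only after a serial number is matched would it be entered into inventory, which mirrors the accountability controls the MOU requires of CMP.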
CMP then enters these firearms into its inventory using each firearm’s unique serial number. After shipping firearms to CMP, the MOU requires TACOM to update the Army’s Unique Item Tracking database for tracking the firearms, which it does by serial number. By statute, title to a transferred firearm does not vest with CMP until immediately before CMP delivers the firearm to an eligible purchaser.

CMP Uses Sales Procedures and Federal Background Checks to Address Requirements Related to the Sale of Surplus Firearms

CMP primarily uses sales procedures and the Federal Bureau of Investigation’s National Instant Criminal Background Check System (NICS) to address requirements related to the sale of surplus firearms. See figure 3 below for a description of several of the processes used by CMP to address the requirements related to the sale of surplus firearms. While various federal statutes and provisions from the agreements between the Army and CMP apply to selling both the surplus rifles and the surplus handguns, there are some differences. For example:

Because section 40733 of Title 36, U.S. Code, exempts CMP from certain federal firearms requirements and restrictions, CMP can ship surplus rifles directly to a customer’s home, unless that would conflict with state or local laws applicable where the firearm is being shipped. In contrast, CMP must operate as an FFL for selling surplus M1911 handguns and will ship purchased handguns to an FFL, such as a certified gun shop, in the customer’s state. The local FFL repeats the background check before turning over the firearm to the customer.

The MOU limits CMP’s sale of surplus rifles to eight per customer per calendar year, while the Memorandum of Agreement limits CMP’s sale of M1911 handguns to one per customer while it is in effect.

The minimum age to purchase any rifle from CMP, including surplus rifles, is 18 while the minimum age for purchasing handguns, including surplus handguns, is 21.
CMP is required to ship the surplus handguns with a security device such as a trigger lock, which it is not required to do for the sale of surplus rifles. CMP uses sales procedures to address federal requirements and agreements with the Army. According to CMP officials, the sales procedures, specifically the application to purchase the surplus firearms that customers are required to complete and have notarized, address some of the federal requirements and agreements between the Army and CMP for the sale of surplus firearms. According to CMP, customers are required to mail the original completed application package, including copies of substantiating documentation and notarization, to CMP in order to apply to purchase a surplus firearm. The application includes a form requiring potential customers to certify that they do not fall within any of the categories of individuals prohibited from being sold or receiving a firearm. The form must be signed and notarized, and specifically lists the prohibited categories, as well as certain CMP-unique categories. As part of the form, potential customers must also certify that by receipt or possession of the firearm they will not be in violation of any state law or published ordinance applicable where they reside. Additionally, applicants must provide proof of the following:

Citizenship and age: Applicants must include a copy of a U.S. birth certificate; passport; proof of naturalization; a military identification card for certain ranks (active duty, reserve component, National Guard, or retired); or any official government document that shows that an individual was born in the United States or that otherwise identifies U.S. citizenship. According to the application procedures, a copy of a driver’s license is proof of age, but not of citizenship.

Membership in CMP-affiliated organization: Applicants are required to provide a copy of their current membership card or another proof of membership in a CMP-affiliated organization.
According to CMP, this requirement can also be satisfied by providing proof of membership in one of the federally chartered veterans’ organizations such as the Veterans of Foreign Wars or American Legion; proof of either current or retired military service; or proof of current or retired status in a law enforcement department, agency, or association.

Marksmanship or other firearms-related activity: Applicants are required to show proof of participation in a marksmanship-related activity or otherwise show familiarity with the safe handling of firearms and range procedures. According to CMP, this can be accomplished by providing documentation of current or past military or law enforcement service, participation in a shooting competition, completion of a marksmanship clinic that included live-fire training, a concealed carry license, or an FFL license, among other things.

Proof of license, permit, or firearms owner identification card: If the state or locality where the applicant resides requires a license, permit, firearms owner identification card, or other documentation, applicants are also required to include a photocopy of such a document with the application for purchase of the surplus firearm.

Federal Firearms Licensee: Applicants purchasing surplus rifles who reside in states or localities where shipments must be made to an FFL and applicants purchasing surplus M1911 handguns must provide a copy of the license for the FFL that will be receiving the shipped firearm.

According to CMP officials, CMP staff verify the completeness of the application, including all required documentation, while entering applicants’ information into CMP’s commercial point of sale and inventory system. According to CMP officials, this involves staff entering the customer’s data into the point of sale and inventory system and verifying the customer’s name, address, proof of age, proof of citizenship, and membership in a CMP-affiliated organization, among other data.
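The completeness and eligibility rules described above — required documentation, the 18/21 minimum ages, and the per-customer sales limits in the agreements — can be sketched as a simple validation routine. The field names, document categories, and error messages below are invented for illustration and are not CMP's actual system.

```python
# Illustrative validation of a purchase application, combining rules
# described in this report: required documentation, minimum age 18 for
# rifles and 21 for handguns, a limit of eight rifles per customer per
# calendar year, and one M1911 handgun per customer.
# Field names and messages are hypothetical.

MIN_AGE = {"rifle": 18, "handgun": 21}
ANNUAL_RIFLE_LIMIT = 8
HANDGUN_LIMIT = 1

def check_application(app: dict, prior_sales: dict) -> list:
    """Return a list of problems; an empty list means the order may proceed."""
    problems = []
    for doc in ("proof_of_citizenship", "proof_of_age",
                "club_membership", "marksmanship_proof", "notarized_form"):
        if not app.get(doc):
            problems.append(f"missing {doc}")
    kind = app["firearm_type"]
    if app["age"] < MIN_AGE[kind]:
        problems.append(f"under minimum age ({MIN_AGE[kind]}) for a {kind}")
    if kind == "rifle" and prior_sales.get("rifles_this_year", 0) >= ANNUAL_RIFLE_LIMIT:
        problems.append("annual rifle limit reached")
    if kind == "handgun" and prior_sales.get("handguns", 0) >= HANDGUN_LIMIT:
        problems.append("M1911 handgun limit reached")
    return problems

app = {"firearm_type": "handgun", "age": 20,
       "proof_of_citizenship": True, "proof_of_age": True,
       "club_membership": True, "marksmanship_proof": True,
       "notarized_form": False}
print(check_application(app, {"handguns": 0}))
# ['missing notarized_form', 'under minimum age (21) for a handgun']
```

As with the transactions GAO observed, an application that fails any check (for example, a missing notarization) would be held rather than processed.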
CMP uses a separate version of the same commercial point of sale and inventory system to enter and verify customers’ information for the purchase of surplus M1911 handguns. According to CMP officials, this is due in part to the requirement to operate as an FFL in order to sell the surplus M1911 handguns. According to CMP, in addition to meeting additional record-keeping requirements required of FFLs, using two different systems helps CMP ensure it addresses certain sales requirements included in the agreements it has with the Army. For example, according to CMP officials, this helps them address the Memorandum of Understanding and Memorandum of Agreement provisions regarding the maximum number of sales of each type of firearm per customer. During our site visits to CMP’s southern headquarters in Anniston, Alabama, in August 2018 and November 2018, we observed CMP officials processing applications from the public to purchase surplus firearms to better understand how CMP addressed certain federal requirements and agreements between the Army and CMP. Specifically, we observed 11 transactions from the receipt of an order through processing and packaging for shipment. Six of these transactions involved rifles and five involved handguns. The 11 transactions we observed were consistent with the sales procedures we identified above. For example, in all cases, we saw CMP staff verify and update customer information from the application packet for existing customers of surplus rifles and input customer information from the application packet for new customers of both the surplus rifles as well as the surplus handguns. We also observed that CMP staff could not move forward with the sale without entering and verifying the information supplied in the application. In one instance, we observed a CMP employee entering an application for the purchase of a surplus rifle that had not been notarized and signed.
CMP employees stopped the process for this application and informed us the applicant would be contacted directly and requested to provide the required notarization in order for CMP to proceed with the sale. In another instance, CMP staff demonstrated what would occur if information pertaining to the documentation required to demonstrate membership in a CMP-affiliated gun club was not entered. We saw that CMP employees could not continue to the next screen without entering these data. CMP addresses various other federal requirements for the sale of firearms via background checks. Once the application procedures we described above are completed, CMP staff then enter the prospective customer’s information into the system used by the Federal Bureau of Investigation to perform background checks. CMP intends this National Instant Criminal Background Check System (NICS) to address certain federal requirements for the sale of firearms that we identified above. For example, the NICS background check analyzes various databases to determine whether a prospective customer falls into any of the categories of persons prohibited from being sold or receiving a firearm. The prohibitions involve sale to or receipt by a prospective customer who is under indictment for or has been convicted in any court of a crime punishable by imprisonment for a term exceeding 1 year; is a fugitive from justice; or has been convicted in any court of a misdemeanor crime of domestic violence, among other things. The NICS background check also searches databases to identify a prospective customer who has been discharged from the U.S. Armed Forces under dishonorable conditions, was a U.S. citizen but has since renounced his or her citizenship, or is an unlawful user of or addicted to any controlled substance. Additionally, CMP uses the NICS background check to confirm the applicant is of the minimum age necessary to purchase either a rifle or a handgun. 
Specifically, CMP uses the NICS background checks to ensure the applicant is at least 18 in order to purchase a surplus rifle or at least 21 in order to purchase a surplus handgun. If an age is entered into NICS that is younger than the minimum age required for purchasing a firearm, the system will not continue performing the background check and will notify CMP staff that the buyer is not old enough to purchase the firearm(s). During our site visits to CMP’s southern headquarters in Anniston, Alabama, in August 2018 and November 2018, we observed CMP employees performing the NICS background checks for 10 of 11 transactions, and we observed that the employees were unable to proceed with the background check without certain required information. Specifically, CMP employees demonstrated the result of a change to the birthdate while processing the sale of a surplus M1911 handgun so that the customer would be under 21 years of age. This resulted in the NICS system automatically not allowing the background check to proceed.

TACOM Oversees the Charges and Reimbursements Funded by CMP for the Transfer of Surplus Firearms

As previously discussed, the MOU requires (1) CMP to reimburse the Army for certain costs associated with the transfer of firearms to CMP, and (2) TACOM to account for the funds reimbursed by CMP. Specifically, CMP is responsible for assuming or reimbursing TACOM for certain costs associated with transportation, standard depot operations, and administrative support. According to Army officials, these administrative support costs include TACOM’s annual cost of one full-time equivalent position to help administer the identification and shipment of surplus firearms to CMP. The MOU also requires TACOM to provide CMP with semi-annual reports identifying the reimbursable costs the Army incurred for any firearms transfers.
According to TACOM and CMP officials, TACOM has met this requirement since at least fiscal year 2012 by providing briefings at CMP’s biannual Board of Directors meetings. CMP reimburses TACOM by depositing funds into a TACOM-managed reimbursement account. TACOM uses the funds in this account to pay the costs associated with transferring firearms to CMP. According to the MOU, TACOM is also responsible for maintaining accountability of funds provided by CMP in support of certain transportation, supply depot operations, and administrative support. The MOU further provides that administrative funding will be evaluated at the end of each fiscal year. For fiscal years 2008 through 2015, TACOM did not have complete information on the reimbursable costs incurred by the Army and on the amounts CMP reimbursed the Army for those costs. This is because, according to TACOM officials, information on transactions involving the reimbursement account prior to fiscal year 2016 was maintained under a different accounting system and could no longer be accessed. According to TACOM officials, TACOM began using a new financial system in fiscal year 2016 to track the information used to maintain accountability including, among other things, the Army’s reimbursable costs and CMP’s payments for those costs. For fiscal years 2016 and 2017, TACOM officials provided us with examples of the documentation from the current system demonstrating that, in addition to tracking CMP’s reimbursement payments, TACOM tracks reimbursable costs for transportation, standard depot operations, and administrative support using five specific categories of information: travel, labor, commercial transportation, intra-Army purchases, and contract service. For example, these documents showed that CMP reimbursed the Army for a total of $5 million in fiscal year 2017.
Sale of Surplus Firearms Has Been the Primary Source of CMP’s Revenue; Although Associated Profits Could Not Be Determined, Estimated Future Revenue Could Fund Operations for Several Years CMP’s primary source of revenue from fiscal years 2008 through 2017 was the sale of surplus firearms. During this time frame, according to CMP’s internal financial documents, CMP generated $196.8 million in revenue from the sales of surplus Army rifles. However, the profit that CMP realized from these sales could not be determined. CMP’s internal financial documents show that CMP incurred $84.7 million in costs for those sales, but CMP’s methodology for calculating costs associated with the transfer and sale of surplus rifles did not account for depreciation and administrative expenses. CMP officials anticipate generating additional revenue from the future sale of surplus M1911 handguns and surplus rifles that CMP currently has available to sell. We estimate these sales could generate as much as $104.9 million, or enough to fund CMP’s operations for several more years. CMP’s Primary Source of Revenue Has Been from the Sale of Surplus Army Rifles Based on our analysis of CMP’s IRS filings and the corporation’s internal financial documents, we identified four primary sources of revenue, of which the sale of surplus Army rifles accounted for the largest share. Sale of surplus Army rifles. According to CMP’s internal financial documents, CMP generated $196.8 million in revenue from the sale of surplus Army rifles during fiscal years 2008 through 2017. The vast majority of these firearms were M1 rifles (see app. II for additional details on the specific types of rifles CMP sold during that time frame). Although the number of surplus rifles CMP sold varied from year to year, as shown in figure 4, except for fiscal years 2012 and 2013, the number of rifles sold has trended downward from fiscal years 2008 through 2017. Sale of ammunition and memorabilia. 
CMP purchases bulk quantities of commercially available ammunition at a discounted rate due to the size of the order, and then sells this ammunition to its affiliated groups. CMP also sells memorabilia such as T-shirts and hats. According to CMP’s fiscal years 2008 through 2017 internal financial documents, the sale of ammunition and memorabilia was CMP’s second largest source of revenue and generated $76.4 million. Figure 5 shows boxes of commercially purchased ammunition stored in CMP’s warehouse in Anniston, Alabama. Investment account income. CMP officials told us that CMP established an investment account to ensure it had the financial resources to continue to meet its mission should the transfer of surplus firearms from the Army cease. According to CMP’s fiscal years 2008 through 2017 IRS filings, CMP reported earning $49.8 million in interest and dividend income from the corporation’s investment account. As seen in figure 6, CMP’s investment account grew by approximately $88 million, from $100.3 million at the end of fiscal year 2008 to $188.6 million by the end of fiscal year 2017. Although CMP’s investment account grew over this 10-year period, in some years CMP made net deposits into the account, and in other years CMP had net withdrawals from the account. For example, CMP’s net deposits from fiscal years 2008 through 2013 were $73.5 million. However, CMP had net withdrawals of $21.5 million from fiscal years 2014 through 2017. CMP officials stated that they used withdrawals from the investment account in those years to expand marksmanship-related programs and to finance construction of the Talladega Marksmanship Park, completed in 2015, shown in figure 7. Marksmanship-related programs. CMP charges fees for individuals to participate in its marksmanship-related programs such as training programs, matches, youth camps, and competitions.
According to CMP’s fiscal years 2008 through 2017 IRS filings, CMP generated approximately $9.2 million in revenue from these fees. CMP’s IRS filings for those years also indicate that CMP’s expenses associated with its marksmanship programs exceeded revenue by approximately $85.8 million. According to CMP officials, CMP heavily subsidizes participation fees for both matches and youth camps to help make the corporation’s programs as accessible as possible, and revenue generated from the sale of surplus firearms covered any program deficits. Figure 8 shows competitors during the National Matches event we observed in July 2018. Other revenue. According to CMP’s fiscal years 2008 through 2017 internal financial documents, CMP also generated some additional revenue from a variety of other sources. For example, CMP’s Talladega Marksmanship Park has generated over $1.5 million in revenue from range rental and match fees, among other things. Profits from the Sale of Surplus Firearms Could Not Be Determined CMP reported an overall profit of $125.9 million on its IRS filings for fiscal years 2008 through 2017, but this amount includes all categories of revenue and expense for business operations, not only those categories specific to surplus rifle sales. We were therefore unable to use CMP’s IRS filings to determine CMP’s profits from the sale of surplus rifles. The amount of profit specific to surplus rifle sales also could not be determined from CMP’s internal financial documents. CMP’s internal financial documents showed $84.7 million in expenses to sell surplus rifles in fiscal years 2008 through 2017, including costs associated with labor, shipping, and other expenses to prepare the surplus firearms for sale. This is less than the $196.8 million CMP’s internal financial documents show CMP generated in revenue from the sale of surplus Army rifles during fiscal years 2008 through 2017.
However, in its internal financial documents, the methodology CMP used to calculate the expenses to sell surplus Army rifles did not include all of CMP’s expenses for these sales. Specifically, the methodology CMP used did not account for depreciation and administrative expenses. CMP did not begin selling surplus M1911 handguns until November 2018, and therefore had just begun generating revenue from these sales at the time of our report. CMP’s internal financial documents reported some costs associated with the surplus M1911 handguns. For example, in fiscal year 2018, in response to an Army requirement, CMP spent approximately $0.7 million upgrading a facility used to house CMP’s M1911 handgun operations. CMP also reported expenses specific to the M1911 handguns of just over $8,000 in fiscal year 2017. CMP Could Generate Millions of Dollars in Future Revenue from the Projected Sale of M1911 Handguns and Surplus Rifles According to CMP officials, CMP anticipates selling most, if not all, of the M1911 handguns because there has been a higher demand for the surplus M1911 handguns than the quantity available to CMP for sale. For example, CMP officials reported that they received more than 19,000 orders for the 8,000 surplus M1911 handguns transferred from the Army in January 2018. The price at which CMP will sell each surplus handgun depends on the quality, or grade, of the handgun as determined by CMP. Specifically, CMP officials told us CMP will sell service grade surplus M1911 handguns for $1,050, field grade handguns for $950, and rack grade handguns for $850 each. CMP officials reported that as of December 2018, CMP had sold 632 service grade surplus M1911 handguns for $1,050 each, which generated $663,600 in revenue. Further, CMP officials told us they had determined that 145 of the surplus M1911 handguns were in unsellable condition. As a result, as of December 2018, 7,223 surplus M1911 handguns remained from the original 8,000 CMP received from the Army.
If CMP sold all of the remaining handguns, we estimate that CMP could generate from $6.14 million to $7.58 million in additional revenue, depending on the grade of each surplus M1911 handgun sold. As of December 2018, CMP officials told us they expected to complete the processing and sale of the surplus M1911 handguns in the spring of 2019. We estimate that by the time these sales are completed, CMP could generate total revenue of $6.8 million to $8.2 million from the sale of surplus M1911 handguns. CMP may also be able to continue to generate revenue from surplus rifles that are currently available for sale. Based on CMP’s reported sales of 304,233 surplus rifles from fiscal years 2008 through 2017 and revenue generated from these sales of $196.8 million, we determined the average sale price of these surplus rifles to be approximately $650 per rifle. According to CMP, as of August 16, 2018, it had approximately 148,714 sellable surplus rifles. Based on our calculation of the average sales price of $650 per surplus rifle, we estimate CMP could generate approximately $96.7 million in revenue from selling surplus rifles currently available for sale. Combined with the potential revenue from the sale of M1911 handguns, we estimated CMP could generate $103.5 million to $104.9 million from the future sale of surplus firearms. Given CMP’s fiscal year 2017 expenses of $15.8 million, and assuming a similar level of future annual expenses, we estimate CMP could fund a similar level of operations for several more years from the sale of all of the surplus firearms it currently has available for sale. Further, as discussed earlier, CMP has other sources of revenue. As of September 30, 2017, CMP reported having cash of $3.6 million and an investment account that was valued at $188.6 million, for a total of $192.2 million. This could also allow CMP to continue operations for several additional years if it did not receive any additional transfers of surplus firearms.
CMP and the Five Selected Corporations Have Similarities in Aspects of Their Business Operations, but Differ in Their Relationship with Members In addition to CMP, we examined five other federally chartered corporations–the U.S. Naval Sea Cadet Corps, the Civil Air Patrol, Big Brothers Big Sisters of America, Future Farmers of America, and the Boy Scouts of America–that have a similar focus on the development, education, or training of youth. Four of the six corporations, including CMP, have received federal funding or resources, and each of the six corporations is governed by some form of a board of directors. However, CMP’s relationship with members, which CMP officials refer to as “affiliated groups” (e.g., gun clubs throughout the United States), differs from the other five corporations we selected for comparison. Organizational mission. All five of the other federally chartered corporations we examined have a focus on the development, education, or training of youth. The Naval Sea Cadet Corps identifies itself as a national youth leadership development organization that promotes interest and skill in naval disciplines while instilling strong moral character and life skills through leadership and technical programs modeled after the Navy’s professional development system. The Civil Air Patrol’s mission statement includes the development of youth and promotion of air, space, and cyber power. Further, the Civil Air Patrol identified that it promotes aviation and related fields through aerospace/science technology engineering and math education and by helping shape future leaders through its cadet program. Big Brothers Big Sisters of America’s overall mission includes providing children facing adversity with strong and enduring, professionally supported relationships that change their lives for the better, including helping children to achieve educational success. 
Future Farmers of America’s mission statement involves making a positive difference in the lives of students by developing their potential for premier leadership, personal growth and career success through agricultural education. The Boy Scouts of America identified that its goal is to train youth in responsible citizenship, character development, and self-reliance through participation in a wide range of outdoor activities, and educational programs, among other things. Federal funding or resources. CMP, the Naval Sea Cadet Corps, the Civil Air Patrol, and Big Brothers Big Sisters of America received some form of federal funding or resources during fiscal years 2015 through 2017. CMP is the only one of these four corporations that relies on the transfer and sale of federally donated surplus firearms for the majority of its revenue. According to officials from the corporations, the Naval Sea Cadet Corps and the Civil Air Patrol rely on federal appropriations and federal grants from the Navy and the Air Force, respectively. For example, according to officials from the Naval Sea Cadet Corps, the corporation received approximately $5.1 million in federal grants from the Navy from fiscal years 2015 through 2017. Civil Air Patrol officials stated that federal funds were the largest source of revenue. According to officials from Big Brothers Big Sisters of America, the corporation received approximately $3.8 million in federal grants from the Department of Labor and $8.2 million in federal grants from the Department of Justice’s Office of Juvenile Justice and Delinquency Prevention from fiscal years 2015 through 2017. Officials from Big Brothers Big Sisters of America told us that these grants were the corporation’s largest source of funding. Officials at both Future Farmers of America and the Boy Scouts of America told us they raise funds through membership dues and merchandise sales, among other things, but do not receive any federal funding or resources.
Organizational structure. The leadership structure of CMP and the five selected federally chartered corporations was similar. That is, officials from CMP and the five selected corporations told us that each corporation has a board of directors or board of governors that may or may not have term limits. For example, CMP’s Board of Directors includes 11 members who serve renewable 2-year terms, and the Chairman of the Board also serves as the Chief Executive Officer. According to Boy Scouts of America officials, the corporation’s National Council’s Board of Directors is elected through a nominating process and has no fixed term limits. The Board of Directors in turn elects representatives to the Executive Committee and there is also an Advisory Council. The Advisory Council, according to Boy Scouts of America officials, reports to the Board of Directors and comprises both former members of the board and members who may become future directors on the board. According to officials from Future Farmers of America, that corporation has a Board of Directors of which four members are designated by the Department of Education including a designated Chairperson, and these four members serve open-ended terms. According to these officials, the remaining members of the board not designated by the Secretary of Education serve 3-year terms. The Civil Air Patrol has an 11-member Board of Governors: four are appointed by the Secretary of the Air Force; four are from its volunteer force; and three are from outside the corporation. Organizational relationships. CMP’s relationship with what it refers to as affiliated groups (e.g., gun clubs) throughout the United States, differs when compared with the five federally chartered corporations we selected for review.
CMP is located in two facilities: one in Anniston, Alabama, that, according to CMP officials, primarily handles sales and operations and one in Port Clinton, Ohio, that, according to CMP officials, manages mission-related programs, such as the National Matches. CMP also sells surplus firearms to members of groups affiliated with CMP from throughout the United States. But, while CMP officials identified 5,002 affiliated clubs throughout the United States and referred to them as being “affiliated” with CMP, none of these entities are actually part of CMP. According to CMP officials, the clubs pay a small annual fee to become affiliated with CMP, which allows them to participate in CMP-sanctioned marksmanship matches and makes their members eligible to buy surplus firearms from CMP, among other things. In contrast, according to officials from the other five selected corporations, those corporations have members or affiliates throughout the United States, meaning that these members and affiliated groups are part of the organization as a whole. For example, officials from the Boy Scouts of America told us that they divide the country into regions, then local councils, local districts, counties or communities, and then to local sponsors of individual units or troops; all members are part of the Boy Scouts of America. Similarly, Naval Sea Cadet Corps officials told us the organization comprises regional and local units; there is open communication between headquarters and the local units, and a standardized training program is implemented at the local level. Agency Comments, Third-Party Views, and Our Evaluation We provided copies of a draft of this report to the Secretary of the Army, the Civilian Marksmanship Program, and other interested parties for comment. The Secretary of the Army and Civilian Marksmanship Program provided technical comments, which we incorporated into this report as appropriate.
We are sending copies of this report to appropriate congressional committees and the Secretary of the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or at maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Scope and Methodology The National Defense Authorization Act for Fiscal Year 2018 required the Army to transfer surplus M1911 handguns to the Civilian Marksmanship Program (CMP) during fiscal years 2018 and 2019, including no fewer than 8,000 in fiscal year 2018 and no more than 10,000 in any fiscal year. The act also included a provision for us to review certain matters related to CMP. This report (1) examines the Army’s and CMP’s procedures to address requirements governing the transfer and sale of firearms; (2) examines CMP’s primary sources of revenue, costs and profits, and estimated future revenue associated with the sale of surplus firearms; and (3) compares certain aspects of CMP’s business operations with those of five selected youth-focused, federally chartered nonprofit corporations. The scope of our review focused primarily on fiscal years 2008 through 2017. We compiled 10 years of data on the sale of surplus rifles, from fiscal years 2008 through 2017, to understand the numbers of surplus rifles transferred as well as the revenue, costs, and profits associated with the sale of surplus rifles. To identify the requirements governing the transfer and sale of surplus firearms, we reviewed applicable federal statutes including relevant provisions from chapter 407 of Title 36, and section 922 of Title 18, U.S. Code, as well as agreements between the Army and CMP such as the 2016 Memorandum of Understanding and the 2018 Memorandum of Agreement.
We also reviewed transfer and sales procedures from fiscal years 2018 and 2019 to provide a current status regarding CMP’s sale of surplus M1911 handguns, which CMP began selling in November 2018. To identify procedures put in place by the Army to address the requirements governing the transfer of firearms, we reviewed documentation of reimbursements CMP made to the Tank-automotive and Armaments Command (TACOM), the organization within the Army responsible for facilitating the transfer of surplus firearms to CMP as well as for managing the related reimbursement account. Our review of the procedures associated with the reimbursement account included obtaining cash collection vouchers submitted to the Army by CMP and TACOM briefings presented at CMP’s biannual Board of Directors meetings. Further, we interviewed TACOM and Defense Logistics Agency (DLA) officials to gain an understanding of how reimbursable costs are identified and requested from CMP. We also compared multiple source documents related to transfers. To understand how TACOM identifies and reports costs associated with the transfer of surplus firearms, we reviewed documentation related to reimbursement for labor, transportation, and standard depot operation costs associated with the transfer of firearms from the Army to CMP. To identify procedures put in place by CMP to address the requirements governing the transfer and sale of firearms, we conducted site visits to CMP’s northern and southern headquarters in Port Clinton, Ohio, and Anniston, Alabama, and observed the inventory and sales processes for rifles and handguns. During our site visits to CMP’s southern headquarters in Anniston, Alabama, in August 2018 and November 2018, we observed 11 examples of firearm transactions and compared the procedures with various federal requirements and the agreements between the Army and CMP.
We also reviewed documentation of sale order forms and of CMP’s sales operating system processing an order to identify how CMP enters and confirms certain information related to sales. In addition, we interviewed CMP officials to obtain further clarification on the organization’s sales processes. To determine CMP’s primary sources of revenue, as well as the costs and profits associated with the sale of surplus rifles, we reviewed financial information provided by CMP. Our review included an analysis of CMP’s IRS filings and internal financial documents for fiscal years 2008 through 2017. We used CMP’s IRS filings to provide information on revenue generated from overall sales, investments, and programs, as well as on the growth of CMP’s investment account. We relied on the internal financial documents for a more granular account of the revenue CMP generated specifically from the sale of surplus rifles as well as commercially purchased ammunition and memorabilia. CMP officials provided us with a methodology for determining which data within the organization’s internal financial documents are revenue and expenses specific to the sale of surplus rifles. We assessed the reliability of the data by interviewing CMP officials to gain an understanding of how CMP’s IRS filings and internal financial documents are produced and found it sufficiently reliable for our purposes. To assess the reliability of the surplus firearms transfer data provided by TACOM we spoke with TACOM officials for clarification and further explanation of the data provided, including firearm nomenclature and identification codes. The additional information TACOM provided allowed us to identify 17 different types of .22 or .30 caliber surplus rifles that could be grouped together based on make, model, and/or caliber.
TACOM officials confirmed our groupings for the types of firearms transferred from fiscal years 2008 through 2017, and we used the results of our analysis to summarize the number and types of surplus rifles transferred to CMP during this time frame. We found the data to be sufficiently reliable for our purposes. To determine potential future revenue associated with the sale of surplus M1911 handguns we obtained current sales price information from CMP, and used this information to project a range of potential future revenue based on the number of surplus rifles and handguns CMP currently has on hand. Specifically, to determine the range of potential revenue for the sale of surplus handguns, we asked CMP to provide information on the sales prices for each of the three grades of preordered M1911 handguns. CMP reported that it had sold 632 surplus M1911 handguns as of December 13, 2018, and that it had identified another 145 surplus handguns as unsellable. To determine the number of surplus handguns remaining to be sold, we subtracted both the 632 surplus handguns CMP reported as sold and the 145 surplus handguns CMP had determined to be unsellable from the total of 8,000 surplus M1911 handguns the Army originally transferred to CMP. To calculate the range of potential revenue from the remaining 7,223 surplus M1911 handguns, we then multiplied the 7,223 remaining surplus handguns by the lowest and the highest sales prices, $850 and $1,050 respectively. This gave us a range of $6.14 million to $7.58 million in revenue from the future sales. We then added the known $663,600 in revenue from the sale of the 632 service grade handguns CMP identified to our low and high end calculations to determine a range of future revenue of $6.8 million to $8.2 million from the sale of surplus M1911 handguns.
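The handgun estimate described above is straightforward arithmetic; the following sketch reproduces it using only figures stated in this report:

```python
# Reproduces the report's arithmetic for potential M1911 handgun revenue.
# All inputs are figures stated in the report; grade prices were
# $850 (rack), $950 (field), and $1,050 (service).
transferred = 8_000            # handguns originally transferred by the Army
sold = 632                     # service grade handguns sold at $1,050 each
unsellable = 145               # handguns CMP determined to be unsellable
remaining = transferred - sold - unsellable   # 7,223 handguns left to sell

low = remaining * 850          # all remaining sold at the lowest grade price
high = remaining * 1_050       # all remaining sold at the highest grade price
sold_revenue = sold * 1_050    # $663,600 already generated

total_low = sold_revenue + low     # about $6.8 million
total_high = sold_revenue + high   # about $8.2 million
print(remaining, total_low, total_high)
```

Note that the range assumes every remaining handgun sells, which matches the report's stated assumption that demand exceeds the available quantity.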
To determine potential future revenue associated with the sale of surplus rifles, we reviewed inventory and sales data provided by CMP and used this information to estimate potential future revenue based on the average price of the surplus rifles CMP has sold from fiscal years 2008 through 2017. Based on CMP’s reported sales of 304,233 surplus rifles from fiscal years 2008 through 2017 and revenue generated from these sales of $196.8 million, we determined the average sale price of these surplus rifles to be approximately $650 per rifle. According to CMP, as of August 16, 2018, it had 228,791 rifles on hand, of which CMP identified 148,714 as being in sellable condition. We then multiplied the number of rifles available for sale as of August 2018 by $650, assuming the average sales price would remain the same going forward, to obtain the potential future revenue from the sale of surplus rifles. We then added the range of potential surplus M1911 handgun sales to determine a potential range of CMP’s future sales of surplus firearms. Given CMP’s fiscal year 2017 expenses of $15.8 million, and assuming those expenses remained the same, CMP could fund a similar level of operations for several years from the sale of all of the surplus firearms it currently has available to sell. In order to compare CMP’s business operations with those of other federally chartered nonprofit corporations, we focused on CMP’s youth-focused mission and identified eight other youth-focused, federally chartered nonprofit corporations. Specifically, we reviewed 93 federally chartered nonprofit corporations to identify corporations that focused on the education, training, or development of youth. We developed a set of relevant questions and interviewed officials from five of the eight federally chartered nonprofit corporations we identified—the Naval Sea Cadet Corps, the Civil Air Patrol, Big Brothers Big Sisters of America, Future Farmers of America, and the Boy Scouts of America.
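The rifle estimate and the combined range follow the same pattern; this sketch reproduces that arithmetic, again using only figures stated in this report (the $6.8 million to $8.2 million handgun range comes from the handgun calculation described above):

```python
# Reproduces the report's rifle revenue estimate and the combined range.
rifle_revenue = 196_800_000          # fiscal years 2008-2017 rifle sales revenue
rifles_sold = 304_233                # rifles sold over the same period
avg_price = rifle_revenue / rifles_sold   # about $647, rounded to $650

sellable_rifles = 148_714            # rifles in sellable condition, Aug. 2018
rifle_estimate = sellable_rifles * 650    # about $96.7 million

handgun_low, handgun_high = 6_800_000, 8_200_000   # from the handgun estimate
combined_low = rifle_estimate + handgun_low    # about $103.5 million
combined_high = rifle_estimate + handgun_high  # about $104.9 million
print(round(avg_price), rifle_estimate, combined_low, combined_high)
```

The estimate depends on the assumption, stated above, that the historical average sales price of about $650 per rifle holds going forward.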
Of the remaining three corporations, two did not respond to our requests for meetings and the third declined to meet. We posed the same questions to all the corporations’ officials we met with and compared certain aspects of CMP’s business operations with the federally chartered nonprofit corporations regarding governance, organizational structure and relationships, and funding sources. We conducted this performance audit from May 2018 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Surplus Rifles Transferred from the Army to the Civilian Marksmanship Program from Fiscal Years 2008 through 2017 The Army transferred 279,032 surplus rifles to the Civilian Marksmanship Program (CMP) from fiscal years 2008 through 2017. The surplus rifles transfer data characterized rifles with different descriptions for nomenclatures (e.g., M1903, Mossberg M144, and M1917 Enfield) and firearm identification codes. Our analysis determined that the different nomenclatures could be combined into 17 distinct groups because many of the rifles were variants of the same type of .30 caliber rifle or carbine, or .22 caliber rimfire rifle. The 17 types of rifles we identified were grouped together based on make, model, and/or caliber. Through our analysis, we determined that the majority of rifles transferred to CMP by the Army from fiscal years 2008 through 2017 have been surplus M1 rifles. Our analyses determined that 203,644 of the 279,032 surplus rifles transferred to CMP from fiscal years 2008 through 2017 were serviceable M1 rifles. 
The second largest type of surplus rifles transferred during this period were drill rifles—rifles not capable of firing live or blank rounds of ammunition—although CMP received nearly four times the number of M1s as it did drill rifles. See table 1 for a description of the surplus rifles transferred to CMP by the Army from fiscal years 2008 through 2017. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Marilyn Wasleski, Assistant Director; Scott Behen, Analyst-in-Charge; Mae Jones; Richard Kusman; Amie Lesser; Rebecca Mendelsohn; Mike Shaughnessy; Mike Silver; Carter Stevens; and Roger Stoltz made key contributions to this report.
Why GAO Did This Study Since 1996, the Army has transferred more than 700,000 surplus rifles and handguns to CMP. The National Defense Authorization Act (NDAA) for Fiscal Year 1996 authorized CMP to sell certain types of surplus Army firearms to U.S. citizens, including M1 .30 caliber rifles. CMP reimburses the Army for the costs to prepare and transport surplus firearms to CMP. The NDAA for Fiscal Year 2018 required the Army during fiscal years 2018 and 2019 to transfer to CMP surplus M1911 .45 caliber handguns, including not fewer than 8,000 in fiscal year 2018 and not more than 10,000 in any fiscal year, and included a provision for GAO to conduct a review of certain matters related to CMP. Among other things, GAO examined (1) the Army and CMP's procedures to address requirements governing the transfer and sale of firearms and (2) CMP's primary sources of revenue, costs and profits, and estimated future revenue associated with the sale of surplus firearms. GAO reviewed applicable federal statutes and agreements between the Army and CMP; analyzed firearms transfer data, and CMP's Internal Revenue Service filings and internal financial documents; and visited both CMP's northern headquarters in Port Clinton, Ohio and its southern headquarters in Anniston, Alabama. What GAO Found The Civilian Marksmanship Program (CMP) is a federally chartered, nonprofit corporation that, among other things, instructs U.S. citizens in marksmanship; promotes practice and safety in the use of firearms; and sells surplus Army firearms (see figure), ammunition, repair parts, and other supplies. CMP is required to give priority to activities that benefit firearms safety, training, and competition for youth and that reach as many youth participants as possible. CMP also charges fees for individuals to participate in some of its programs. The Army and CMP have established procedures to address federal requirements for the transfer and sale of surplus firearms. 
Both organizations established procedures to carry out the transfer of surplus Army firearms as identified in a 2016 Memorandum of Understanding (MOU) and a 2018 Memorandum of Agreement, both between the Army and CMP. To address requirements for selling surplus firearms, CMP uses a combination of procedures, including an application requiring prospective customers to provide proof of citizenship and age, among other things, and a check against the National Instant Criminal Background Check System. Per the MOU, the Army's Tank-automotive and Armaments Command oversees the Army's costs and reimbursements from CMP for certain costs associated with storing, transporting, and administering the transfer of surplus firearms. The primary source of CMP's revenues from fiscal years 2008 through 2017 was the sale of surplus rifles, which, according to CMP's internal financial documents, generated $196.8 million in revenue. CMP also sold commercial ammunition and memorabilia, which, according to the same documents, generated $76.4 million in revenue. Further, according to its Internal Revenue Service filings for this time frame, CMP reported earning $49.8 million in interest and dividends from its investment account. CMP began selling surplus M1911 handguns in November 2018 and had just begun generating revenue from these sales at the time of GAO's review. The profit that CMP realized from the sales of surplus rifles could not be determined because CMP's methodology to calculate expenses did not account for all of CMP's costs associated with the sale of these rifles. GAO estimates that future sales of CMP's surplus handguns and rifles currently available for sale could generate as much as $104.9 million, or enough to fund CMP's operations for several years. Further, as of September 30, 2017, CMP reported having cash of $3.6 million and an investment account valued at $188.6 million. This could also allow CMP to continue operations for several years.
Background AI/AN Veterans The number of AI/AN veterans eligible for both VA and IHS services is unknown. The U.S. Census Bureau estimates that in 2017 approximately 141,000 AI/AN individuals identified themselves as veterans. This estimate includes only individuals who identified as AI/AN alone and not in combination with another racial group. IHS and VA do not have an administrative mechanism for determining the number of AI/AN veterans who are users of both systems. Instead, each agency separately relies on individuals to identify either as veterans or as AI/AN, resulting in different counts. Specifically, according to IHS, in fiscal year 2017, 48,169 active IHS users self-identified as veterans. According to VA, in fiscal year 2017, 80,507 VA-enrolled veterans self-identified as AI/AN. VA and IHS Structure and Benefits VA is charged with providing health care services to the nation's eligible veterans, and served 6.8 million veterans in fiscal year 2017 with a total health care budget of about $69 billion. VA's health care system includes 18 regional networks—Veterans Integrated Service Networks—to which each of VA's facilities is assigned. VA has 170 medical centers, which offer a variety of inpatient and outpatient services, ranging from routine examinations to complex surgical procedures. VA's health care system also includes community-based outpatient clinics and other facilities that generally limit services to primary care and some specialty care. When needed services are not available at VA facilities or within required driving distances or time frames, VA may purchase care from non-VA providers through its community care programs, such as the Veterans Choice Program. Eligibility for VA health care is based on several factors, including the veteran's period of active service, discharge status, the presence of service-connected disabilities or exposures, income, and other factors.
VA uses factors such as these to categorize eligible veterans into eight enrollment priority groups—established to manage the provision of care. Some veterans qualify for free health care services based on service-connected disabilities, income, or other special eligibilities, while others may be responsible for co-payments. IHS was established to provide health services to members of AI/AN tribes, and its facilities are primarily in rural areas on or near reservations. IHS's fiscal year 2017 budget was approximately $5 billion, and the agency served about 1.6 million individuals. The agency is organized into 12 federally designated geographic areas. IHS provides services directly through a federally operated network of 25 hospitals, 53 health centers, and 30 health stations in 37 U.S. states. In addition, about 54 percent of IHS's funds are provided to THPs to operate about 580 of their own facilities, such as hospitals, health centers, clinics, and health stations. IHS also provides funding to 41 nonprofit organizations through the Urban Indian Health program to provide health care services to AI/AN individuals living in urban areas. IHS and THP facilities are often limited to providing primary and emergency care services. When needed health care services are not available at IHS or THP facilities, in certain circumstances the facilities may pay external providers to deliver these services through IHS's Purchased/Referred Care (PRC) program. Before the PRC program can provide payment, patients must exhaust all health care resources available to them from private insurance, state health programs, and other federal programs, including VA. Furthermore, eligibility for PRC payment is not automatic, and IHS has reported that PRC funds are not sufficient to pay for all necessary care and, therefore, generally pay for only the highest-priority costs, such as emergency care and transportation to that care.
To be eligible for IHS health care services, an individual must generally be a member or descendant of one of the current 573 federally recognized Indian tribes, as evidenced by such factors as tribal membership, enrollment, residence on tax-exempt land, ownership of restricted property, active participation in tribal affairs, or other relevant factors. In instances where an AI/AN veteran is eligible for a particular health care service from both VA and IHS, VA is the primary payer.

The VA and IHS MOU and Reimbursement Agreements

The 2010 MOU between VA and IHS set mutual goals and objectives to facilitate coordination and resource-sharing between the two agencies. Specifically, the five MOU goals are as follows:

1. Increase access to and improve quality of health care and services to the mutual benefit of both agencies. Effectively leverage the strengths of the VA and IHS at the national and local levels to afford the delivery of optimal clinical care.
2. Promote patient-centered collaboration and facilitate communication among VA, IHS, AI/AN veterans, tribal facilities, and Urban Indian clinics.
3. In consultation with tribes at the regional and local levels, establish effective partnerships and sharing agreements among VA headquarters and facilities, IHS headquarters and facilities, tribal facilities, and Urban Indian Health Programs in support of AI/AN veterans.
4. Ensure that appropriate resources are identified and available to support programs for AI/AN veterans.
5. Improve health promotion and disease prevention services to AI/AN veterans to address community-based wellness.

In accordance with these five goals, the MOU contains specific areas in which VA and IHS agreed to collaborate and coordinate, including:

Reimbursement: development of payment and reimbursement policies and mechanisms to support care delivered to dually eligible AI/AN veterans.
Sharing staff: sharing of specialty services, joint credentialing and privileging of health care staff, and arranging for temporary assignment of IHS Public Health Service commissioned officers to VA.
Staff training: providing systematic training for VA, IHS, THP, and Urban Indian Health Program staff on VA and IHS eligibility requirements to assist them with appropriate referrals for services.
Information Technology Interoperability: interoperability of systems to facilitate sharing of information on common patients, and establishment of standard mechanisms for VA, IHS, and THP providers to access records for patients receiving care in multiple systems.

VA and IHS each designated certain staff to oversee and implement the MOU, but VA is generally responsible for administering the MOU. For example, VA's Office of Community Care provides oversight of the reimbursement agreements—which are a key part of the MOU. Within that office, VA established the IHS/THP Reimbursement Agreements Program to carry out portions of the MOU related to the development of payment and reimbursement policies. Under these policies, in instances where an AI/AN veteran is eligible for a particular health care service from a VA facility, that veteran can instead receive the eligible service at an IHS or THP facility without prior VA approval and, under a reimbursement agreement, VA will reimburse the facility for the service. Some key aspects of the reimbursement agreement program are as follows:

All IHS facilities are covered under one national reimbursement agreement between VA and IHS.
THPs each negotiate their own separate reimbursement agreements with VA. While VA uses a reimbursement agreement template based on the agreement with IHS, the terms of each THP agreement may deviate from those in IHS's national agreement.
Urban Indian Health Programs are generally not eligible for reimbursement agreements.
VA provides reimbursement for outpatient and inpatient direct care services provided at IHS and THP facilities. VA also reimburses IHS and THP facilities for costs of outpatient prescriptions for AI/AN veterans, as well as filling prescriptions for AI/AN veterans served at IHS and THP facilities through VA’s Consolidated Mail Outpatient Pharmacy program. VA does not provide reimbursement for those services from external providers paid for by IHS or THP PRC programs. VA reports that the process of establishing reimbursement agreements with THPs has multiple phases. The process begins with initial communication between the THP and VA, followed by an orientation briefing. The THP then begins to draft the agreement (based on VA’s template) and prepare required VA paperwork (e.g., an implementation plan and proof of certification or accreditation). Once drafted, the THP submits the draft agreement and paperwork for review by VA’s IHS/THP Reimbursement Agreements Program, followed by review by a VA contracting officer and legal team. The agreement is complete once it is signed by VA and the THP. VA and IHS Continue to Jointly Oversee the MOU, but Gaps Exist in Measuring Performance A joint leadership team of VA and IHS officials continues to oversee the implementation of the 2010 MOU through meetings, regular reporting, and the establishment of goals and measures to assess performance— but these measures lack targets for assessing progress toward the goals. VA and IHS officials also told us they are drafting a revised MOU to be broader and more flexible than the existing MOU and are updating the performance measures. However, officials have not indicated that any revised measures will include targets. 
VA and IHS Have Continued to Carry Out MOU Oversight Activities and Implementation, and Are in the Process of Revising the MOU Since our last report in 2014, a joint national leadership team composed of VA and IHS officials has continued to use quarterly meetings, routine reporting, and MOU goals and measures to oversee MOU implementation and help facilitate collaboration. VA and IHS officials told us that the leadership team consists of officials in VA's Office of Rural Health and Office of Tribal Government Relations, and the IHS Deputy Director for Intergovernmental Affairs. Specifically, the leadership team has met to discuss the progress and status of the MOU, develop implementation policy and procedures, create performance measures and timelines, and evaluate progress on those measures. The leadership team also compiles annual reports on progress in MOU implementation that include information about activities and challenges in meeting MOU goals using established measures, and information on the reimbursement agreements and outpatient pharmacy program. In addition, VA and IHS issue monthly data reports on the reimbursement agreements, including the total amount disbursed, the number of veterans receiving services reimbursed by VA, and the number of claims processed for IHS and THP facilities. The leadership team receives input from workgroups tasked with the responsibility for implementing and developing strategies to address the goals of the MOU. The workgroups primarily consist of VA and IHS staff who meet periodically to discuss goals and report quarterly to the leadership team. Tribal officials have participated in some MOU workgroups, though they are not a part of the MOU leadership team. Since our last report in 2014, the number of workgroups has decreased from 12 to three. (See table 1.)
VA and IHS officials said that there were a number of reasons why the number of workgroups had decreased over time, such as consolidation into broader groups because the missions of some groups were similar. VA officials noted that the 12 original workgroups reflected the structure of the MOU, but over time they realized that there was not a need for workgroups in some of these areas. With the establishment of the MOU, VA and IHS have been able to share resources and collaborate on activities to improve access to care for AI/AN veterans. VA and IHS reported that the MOU has helped both agencies develop an outpatient pharmacy program for AI/AN veterans, hold joint training and recruitment events, and establish the reimbursement agreement program, among other accomplishments. The VA, IHS, and THP facility officials we spoke with noted activities related to the reimbursement agreements, and a few noted improvements in areas such as training and telehealth as a result of the MOU. However, most of the facility officials generally reported they had not observed improvements in national-level VA and IHS collaboration and coordination in other areas identified by the MOU. Additionally, these facility officials told us that their facilities have not implemented any new policies, procedures, or any specific facility performance goals or targets that were linked to the MOU. VA and IHS headquarters officials acknowledged that all areas of the MOU have not been implemented at all facilities, and noted that while improvements have been made in many areas, organizational challenges remain, such as in the area of information technology. One IHS headquarters official added that even though VA and IHS have not fully implemented all parts of the MOU, they have addressed each area of the MOU in some manner. For example, one of the goals of the MOU is to improve coordination of care by developing and testing innovative approaches and disseminating best practices.
IHS headquarters officials indicated that the agency has addressed this goal in part by creating an Improving Patient Care program that was informed by using VA curriculum and utilizing lessons learned from VA’s Patient Aligned Care Teams. VA and IHS leadership said they are currently in the process of revising the MOU to be broader and more flexible to better meet the care needs of AI/AN veterans. Regularly monitoring and updating written agreements on collaboration, such as the MOU, is consistent with our key collaboration practices. IHS officials said that in contrast to the current MOU, in the new MOU, they are not looking to delineate every area of coordination and instead are grouping topics into broader areas of coordination. In the fiscal year 2017 MOU annual report, VA and IHS noted they were removing outdated language from the MOU and planned to create a more comprehensive, flexible MOU that would serve both agencies well into the future. VA and IHS officials indicated that these revisions will address some areas in the current MOU that they have not yet been able to implement. In June 2018, VA officials said that the leadership team had decided upon a revised set of MOU goals and associated objectives. In February 2019, VA and IHS reported that the target completion date for the new MOU was spring 2020. VA and IHS MOU Performance Measures Are Not Sufficient VA and IHS have improved their efforts to measure progress towards meeting the five MOU goals since 2014. In response to a recommendation made in our April 2013 report, VA and IHS revised their MOU performance measures in 2015—better aligning the measures with the MOU goals. In addition, as a result of our work in 2013, the agencies revised an existing data collection reporting template used to gather information for each measure—such as the measurable objective, rationale and intent of the measures, action plan, milestones, and barriers—to help determine whether MOU goals were being met. 
While we found that the three existing MOU workgroups had since stopped using this template, a VA official confirmed that they believe relevant information is still captured through its monthly and quarterly reports. Nonetheless, while VA and IHS improved their performance measurement efforts since our 2013 report, we found that the revised MOU performance measures still do not have quantitative and measurable targets to assess agency progress toward the goals. We have previously reported that performance measures should have numerical targets or other measurable values, which help assess whether overall goals and objectives were achieved by easily comparing projected performance and actual results. Besides having measurable targets, other key attributes of successful performance measures include linkage to an agency's goals and mission, clarity, objectivity, and balance. None of the 15 revised measures have targets against which performance can be measured to assess progress and evaluate effectiveness. (The results of our assessment are shown in table 2.) For example, while the number of shared VA-IHS trainings and webinars is a performance measure, there is no target for the number of shared trainings VA and IHS hope to complete each year. VA officials we spoke with stated VA has not considered adding targets to these measures, noting that the nature of the measures and MOU work against establishing targets. For example, officials said that the measures related to the reimbursement agreements are dictated by the needs of the population seeking health care and the providers at the IHS and THP facilities. VA officials we spoke with said instead of targets, they assess success or failure by whether they see incremental growth in the measures. Officials added that they examine these measures quarterly to determine if they have increased, decreased, or remained stable.
If the measures are stable or decrease, officials said they consider if these trends can be reversed. However, the absence of targets limits the ability of VA and IHS to use these measures to assess performance. Without defined measurable targets or goals, VA and IHS lack a clear basis for objectively and strategically evaluating how and where improvements should be made. For example, while it is helpful to count the number of tribal outreach activities conducted, setting an annual target for such activities would allow the agencies to better assess whether they are meeting their goals in this area. In addition, some of these measures also lacked other attributes important for assessing performance. Specifically, five of the measures listed the completion of an annual metric review, which is a task to execute rather than a desired performance outcome to be measured. VA and IHS also are not using two measures. Specifically, they have not collected any data to track results on the number of VA and IHS employees who attend training and on the quality of health care provided. Relatedly, for the measure on health care quality, VA and IHS have not developed a clear definition against which to measure performance, as specific quality measures have not been determined and data are not being collected. VA and IHS have documented challenges related to confusion and difficulty in tracking some measures; for example, at a meeting in March 2017, the MOU leadership team discussed that measures were not well tailored to the workgroup structure at that time. IHS officials also acknowledged that the measures currently in place are counting activities, but not necessarily always measuring performance—such as whether trainings held were effective. VA officials said that revising the MOU will give them an opportunity to revisit the performance measures used, and that they are looking to apply lessons learned to do a better job in the future at defining the measures. 
Similarly, IHS officials noted that the agencies are engaged in conversation about the performance measures to make them more useful. However, as previously noted, VA officials said that they have not considered establishing targets for the measures. Use of Reimbursement Agreements Has Increased Since 2014 and IHS and THP Facilities Viewed the Agreements as Beneficial THP facilities' use of reimbursement agreements with VA increased from 2014 through 2018. The selected IHS and THP facilities we spoke with viewed the reimbursement agreements as beneficial, but also identified some concerns. The Number of Reimbursement Agreements Entered, and the Amount of Claims Reimbursed and Veterans Served through Them, Have Increased Since 2014 The use of VA's reimbursement agreements with THPs increased from 2014 through 2018, as measured by the number of agreements, claims reimbursed, and veterans served. In addition, there was an increase in payments made for prescriptions filled through VA's Consolidated Mail Outpatient Pharmacy program for AI/AN veterans receiving services at IHS and THP facilities. As all IHS facilities are covered under a single national agreement that was instituted prior to 2014, there was less change in the use of reimbursement agreements by these facilities. Reimbursement agreements entered. The number of reimbursement agreements with THPs more than doubled from 2014 to 2018, increasing by about 113 percent. We previously reported that, as of May 16, 2014, VA had 53 reimbursement agreements with THPs. VA data showed that as of December 2018 it had 113 reimbursement agreements with THPs, representing about 34 percent of the 337 total IHS-funded THPs. (See fig. 1.) VA also reported that there were 42 additional pending reimbursement agreements with THPs that were in varying phases of submission, processing, and review.
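As a quick arithmetic check, the percentage figures above follow directly from the counts GAO cites (53 THP agreements as of May 2014, 113 as of December 2018, out of 337 total IHS-funded THPs); this is a minimal sketch, not part of GAO's methodology:

```python
# Verify the percent figures cited in the text from the underlying counts.
def pct_change(old: float, new: float) -> float:
    """Percent increase from old to new."""
    return (new - old) / old * 100

growth = pct_change(53, 113)    # growth in THP reimbursement agreements
share = 113 / 337 * 100         # share of all IHS-funded THPs with agreements

print(f"growth: {growth:.0f}%")  # about 113%
print(f"share: {share:.0f}%")    # about 34%
```

Both results match the rounded values reported in the text.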
In addition, as in 2014, IHS facilities are covered under a single national agreement, and the number of IHS facilities covered by it has remained similar. In 2014, we reported that VA officials had conducted outreach through tribal letters and events to educate THPs about the option of establishing reimbursement agreements, and officials told us this outreach has continued. As we reported previously, there are several reasons a THP might decide not to have an agreement with VA, such as deciding it was not worth the time and resources needed to establish an agreement. Officials from a national tribal organization we spoke with said that smaller tribes without many veterans or resources may not be interested. IHS officials also noted that if a THP's veteran population has alternate payment resources (e.g., Medicaid or private insurance), it may not be worth the steps to implement a reimbursement agreement if the THP will not be billing VA for veterans' services. Amount of claims reimbursed. In fiscal year 2014, VA paid IHS and THP facilities $11.5 million for services provided to AI/AN veterans, which grew to $20.1 million in fiscal year 2018. This increase mainly represents the growth in reimbursement to THP facilities—which grew 181 percent, from $4.3 million in fiscal year 2014 to $12.1 million in fiscal year 2018. During this same time period, reimbursements to IHS facilities remained relatively stable, reflecting the stable number of IHS facilities receiving reimbursements. (See fig. 2.) Veterans served. Between fiscal year 2014 and fiscal year 2018, according to VA data, the number of unique AI/AN veterans receiving services reimbursed by VA each year increased from about 3,800 in 2014 to a high of nearly 5,300. (See fig. 3.) While IHS facilities accounted for a larger percentage of veterans with reimbursed services compared to THPs, the number of veterans receiving services reimbursed by VA at THPs increased significantly.
For fiscal year 2014, 2,965 AI/AN veterans received services reimbursed by VA at IHS facilities, which decreased slightly to 2,829 in fiscal year 2018. In comparison, 885 veterans received services reimbursed by VA at THP facilities in fiscal year 2014, which nearly tripled to 2,531 veterans in fiscal year 2018. Prescriptions filled. Similar to increases in the numbers of AI/AN veterans served under the reimbursement agreements, AI/AN veterans’ utilization of VA’s Consolidated Mail Outpatient Pharmacy program has also increased. Prescriptions filled through this program more than doubled—from more than 440,000 prescriptions in fiscal year 2014 to nearly 886,000 prescriptions in fiscal year 2018. (See fig. 4.) VA and IHS annual reports indicate that the pharmacy program has been one of the most successful collaborations between VA and IHS for AI/AN veterans, providing more than 2 million prescriptions for VA-IHS patients since the pharmacy program collaboration began in 2010. While this program was originally limited to AI/AN veterans served at IHS facilities, in December 2016, VA and IHS entered into an Interagency Agreement that extended the program to THPs. IHS and THP Facilities Viewed the Reimbursement Agreements as Beneficial, but Identified Some Concerns Officials from the majority of IHS and THP facilities we contacted said they were generally pleased with the reimbursement agreements. Among those, officials from one THP noted that the revenue received from their reimbursement agreement freed up other resources that allowed them to hire an additional part-time worker to conduct VA outreach activities. Additionally, a representative of a national tribal organization noted that IHS and THP facilities’ funding is limited and this revenue helps them extend services to eligible AI/AN veterans. 
However, officials from a number of IHS and THP facilities also had concerns about the agreements, including the lack of reimbursement for PRC program services provided by IHS and THP facilities, the length of time it took to enter into the agreements, and the time frames of the agreements: Lack of reimbursement for PRC program services. Officials at most IHS and THP facilities we contacted said they believed VA should reimburse facilities for services from external providers paid through the PRC program. Officials at some facilities said they have had to deny PRC services due to a lack of program funds. According to some facility and IHS area office officials, this issue is particularly relevant in states where Medicaid was not expanded under the Patient Protection and Affordable Care Act (PPACA). In states where Medicaid eligibility was expanded, more AI/AN individuals may therefore be eligible for Medicaid—potentially freeing up PRC funds. For example, an IHS official noted that prior to Medicaid expansion in his state they would have to limit PRC funds to be used only in life or death scenarios after May or June of each year, but that currently his facility was not limiting any PRC services. Given the limitations in PRC program funds, officials from a national tribal organization and some THPs noted they have raised the possibility of including the PRC program in the reimbursement agreements with VA, although the program was ultimately not included. VA officials noted that there is no statutory requirement for them to include the PRC program in the reimbursement agreements and also identified several other reasons for not including it. For example, they said that VA does not want to pay for services externally that it already offers internally and that it would prefer to coordinate the patient’s care within VA’s existing programs, such as VA’s own programs for purchasing care from external providers—like the Veterans Choice Program. 
The length of time to enter into an agreement. Officials from a few THP facilities and one national tribal organization we spoke with noted concerns about the amount of time it took to enter into reimbursement agreements. Our analysis of VA reimbursement agreement data shows that the median amount of time that it took to enter an agreement with THPs was over 1 year (about 403 days). We found that the number of days from the first contact by a THP to the actual signing of the agreement ranged from 96 days (over 3 months) to 1,878 days (more than 5 years). According to VA records and interviews, there were reasons for delays in completing reimbursement agreements, including lengthy negotiations, incomplete submission of information from the THPs, lapses in communication between VA and the THP, and a THP’s lack of medical certification or accreditation. VA officials explained that the amount of time increases if the THP does not want to use the VA-approved reimbursement agreement template or wants to change the terms of the agreement. For example, an official from one THP facility said that it took 2.5 years to finalize its reimbursement agreement due, in part, to internal challenges with their legal counsel and external challenges with negotiating the terms of the agreement during a time when the VA was developing a national reimbursement agreement template. VA officials also explained that entering the agreement with IHS was simpler than entering agreements with THPs because it was a national agreement between two federal agencies and, for example, did not require having a contracting officer review the agreement—an extra step needed for agreements with non-federal agencies. The length of time reimbursement agreements are in effect. Officials from a few THP facilities expressed a desire for longer reimbursement agreements that would permit greater planning ability. The agreement between VA and IHS was initially set for 3 years. 
It was then extended twice, once for 2 years and once for 1.5 years. The time frames for THP agreements have generally been extended consistent with extensions to the national agreement. Officials from one THP we spoke with said that having short-term reimbursement agreements causes problems with internal organizational planning and that it would be beneficial to have a longer-term, non-expiring agreement that can be canceled, so that THPs do not continue to expend resources to complete new agreements or amendments every 2 years. In June 2018, VA and IHS signed an amendment to extend the terms of the national reimbursement agreement through June 30, 2022. VA officials said they are currently in the process of working with THPs to similarly extend their agreements. Facilities Cited Varying Levels of Coordination, and Key Challenges Included Making Referrals from IHS and THP Facilities to VA In speaking to officials at selected VA, IHS, and THP facilities about key issues related to coordinating care for AI/AN veterans, we found that the extent of coordination they reported varied widely. For example, officials at three IHS and THP facilities said they had little to no care coordination with their local VA partners, noting that they rarely refer veterans to VA since they offer more services than the closest VA facilities. Other facilities described more extensive and formalized care coordination, including shared funding of certain VA and THP employees, or VA employees on site at THP facilities to manage veterans' care and referrals to and from VA. In Alaska, for example, where services offered by VA are very limited, VA instead has formal sharing and reimbursement agreements established with 26 THPs, which provide the majority of services to AI/AN veterans, as well as some non-Native veterans. Two of the THP facilities we spoke with in Alaska have VA employees working on site to help coordinate veterans' care.
VA and IHS headquarters officials indicated that the MOU was intended to allow for variation in the level of coordination at the local, facility level, not to create demands or obligations on facilities. One VA official noted that as the new MOU is developed, both VA and IHS want to continue to allow VA, IHS, and THP facilities to engage in whatever level of coordination makes sense. Despite variation in the extent of coordination, officials identified several common challenges regarding coordination between local VA, IHS, and THP facilities: Referring patients to VA facilities. Officials from 9 of the 15 VA, IHS, and THP facilities we contacted reported conflicting information about the process for referring AI/AN veterans from IHS and THP facilities to VA facilities for specialty care. For example, 4 of the IHS and THP facilities we spoke with said that AI/AN veterans generally could not be referred directly to VA specialty care by IHS or THP providers without first being seen and referred by a provider at VA. These facility officials indicated that this practice was a barrier to care and noted that it could result in the patient receiving, and the federal government paying for, duplicative tests. However, officials at another IHS facility indicated that IHS and THP facilities should be able to refer patients directly to VA specialty care. Additionally, during an interview at a VA facility, local and regional officials had differing understandings of whether IHS and THP facilities could refer patients directly to VA specialty care. VA and IHS headquarters officials both reported that, in general, IHS or THP facilities cannot refer a patient to VA specialty care without that patient first being seen in VA primary care. However, VA officials reported that there is no national policy or written guidance on how to refer patients from an IHS or THP facility to a VA facility.
VA officials said that the coordination process is left to the local VA facility and the respective IHS or THP facilities and that the process can vary from one facility to another—explaining why facility officials reported differing information. Our past work on interagency collaborative mechanisms identifies having written guidance and agreements that document how agencies will collaborate as a leading collaboration practice. Without a written policy or guidance about how referrals of AI/AN veterans from IHS and THP facilities to VA facilities may be managed, VA and IHS cannot ensure that VA, IHS, and THP facilities have a consistent understanding of the options available for these referrals. Information technology interoperability and access. Officials at 10 of the 15 VA, IHS, and THP facilities we contacted cited challenges related to accessing each other’s health information technology systems. Most stated that a lack of interoperability of their electronic health records caused challenges, while a few IHS and THP facilities also mentioned that the lack of access to VA systems makes it difficult to verify a veteran’s eligibility or determine the services for which VA will reimburse. For example, one THP noted that if an AI/AN veteran was sent to VA for a service, the THP provider would not receive the veteran’s follow-up records as quickly as it would if the facilities had access to each other’s systems. Improving systems’ interoperability was a focus area identified in the MOU, and an IHS official indicated that while the agencies had done some initial work on the topic, no systematic solutions were identified. We have previously identified VA’s lack of systems interoperability—particularly with the Department of Defense—as a contributor to the agency’s challenges related to health care.
VA and IHS officials identified some potential workarounds to this lack of interoperability, although they noted that some of the described workarounds could be time consuming and may not be feasible for all facilities: An IHS headquarters official said that IHS and VA each have the ability to request the sharing of information from an individual electronic health record held by the other agency through secure emails—although the official noted that this is not as fast or efficient as being able to log in to each other’s systems. VA officials also reported that VA belongs to the eHealth Exchange—a national health information exchange—and said that IHS or THPs could join the exchange, which would allow them to access information about shared veteran patients. However, IHS reported that although the agency explored connecting to the eHealth Exchange several years ago, testing and onboarding costs to participate were prohibitive. IHS noted that several individual facilities across the IHS system have elected to invest in connections with regional health information exchanges. Similarly, two THPs we spoke with reported being a part of other, more locally based health information exchanges, but noted that VA was not part of these exchanges. A VA official noted that there is an enrollment guide that details how enrollment and eligibility verification will be managed between IHS, THP, and VA facilities. This guide describes how IHS or THP facilities can request veterans’ enrollment and eligibility information from the VA Health Eligibility Center using a templated spreadsheet that sends requests via email through a secure data transfer service. VA’s Health Eligibility Center verifies the list and returns the completed enrollment and eligibility Excel spreadsheet to the IHS or THP facility securely.
IHS and THP facilities can also contact the VA Health Eligibility Center directly by telephone to verify enrollment and eligibility for fewer than five veterans per call, or their local VA medical center by telephone to verify one AI/AN veteran’s enrollment and eligibility per call. IHS or THP facilities could also enter into an arrangement with a local VA facility to have VA employees or co-funded employees on site at IHS or THP facilities, or to have VA-credentialed employees who can access VA systems to share information. However, these options may not be systemic solutions that work at all facilities. An IHS headquarters official noted, for example, that not all IHS or THP facilities have the type of relationship with their local VA facility that would lead to the establishment of such arrangements. In terms of the potential for improving interoperability in the future, VA is in the process of implementing a new electronic health record system, and we have previously reported that VA has identified increased interoperability as a key expected outcome of its decision to switch systems. Officials from two VA and THP facilities were hopeful that this new system will help improve interoperability, since some THPs use an electronic health record system from the same company with which VA has contracted. Additionally, an IHS headquarters official said that IHS is also reevaluating its information technology platform and that one requirement of any new IHS system will be to enhance interoperability with VA, pending the funding to do so. IHS also reported that the agency will consider health information exchange participation as part of the agency’s information technology modernization efforts. Staff turnover. Officials from 9 of 15 facilities identified staff turnover at VA, IHS, and THP facilities as an impediment to having better or consistent coordination. VA, IHS, and THP facility officials described situations in which the coordination between facilities was dependent on specific staff or facility leadership.
According to officials, when there was turnover among these staff, or when positions went unfilled or were eliminated, coordination decreased or came to a halt. For example, officials at one VA facility said that they have found that if a sitting tribal government expresses interest in VA collaboration, they have to act quickly and work with the tribe before there is turnover and new tribal leadership comes in with different priorities. Additionally, officials from one IHS facility described a situation in which they had previously coordinated with their local VA facility through that facility’s AI/AN liaison. However, the coordination lapsed when the liaison left VA and the position went unfilled. Similarly, a THP official stated that coordination with VA was previously led by a nurse case manager on site who was a joint VA and THP employee. The official said that since that person’s retirement, she did not know whom to contact at VA to coordinate veterans’ care. Officials at one IHS facility noted that due to turnover and attrition they would like to see more education for front-line staff at both IHS and VA, so they can more efficiently obtain care for patients at VA. VA headquarters officials acknowledged that staff turnover and retraining is a challenge that they will need to continually address as the MOU is carried out. In our prior work related to IHS and VA, we have found that both agencies face challenges related to staff turnover and training. VA Copayments. Officials at 3 of the 11 IHS and THP facilities we contacted, as well as IHS headquarters officials and representatives of two national tribal organizations, said that the copayments that VA charges veterans represented a barrier to AI/AN veterans receiving care. While AI/AN veterans do not have any cost-sharing for care provided at IHS or THP facilities, they are subject to the same copayments as other veterans when they receive care from VA facilities.
VA data show, for example, that of the 80,507 VA-enrolled self-identified AI/AN veterans in fiscal year 2017, about 30 percent were charged copayments, with an average of about $281.56 billed per veteran. Officials from one THP noted that this kind of financial liability may discourage AI/AN veterans from getting care at VA, or lead them to return to the THP after they realize they will have to pay for care at VA. While some of our interviewees suggested that VA should waive copayments for AI/AN veterans, a VA official said the agency does not have the legal authority to do this. The official said that statute specifies the categories of veterans that must be charged copayments and that VA is not authorized to waive the copayments for AI/AN veterans on the basis of their AI/AN status without a statutory exemption. While certain AI/AN veterans may qualify for waived copayments based on their inclusion in other statutory categories, AI/AN veterans are not specifically listed as a category for which copayments can otherwise be waived. VA officials also cautioned that because AI/AN veterans may qualify for waived copayments through these other categories, the possibility of copayments should not discourage IHS or THP facilities from referring AI/AN veterans to VA. Conclusions Since 2014, VA and IHS have continued to work together to oversee and implement their MOU aimed at improving the health care provided to dually eligible AI/AN veterans. While the agencies have made progress in certain areas of the MOU, especially those related to reimbursement, other parts have seen less attention. VA and IHS are now updating the MOU, and plan to revisit the related performance measures. This gives the agencies an opportunity to evaluate how well their existing oversight mechanisms have been working, and to improve these mechanisms accordingly in the future. Regardless of these updates, the agencies need to have effective performance measures.
While the agencies took steps to improve MOU performance measures in response to one of our prior reports, these steps were not sufficient, and the measures they set lack important attributes, including measurable targets. VA and IHS have indicated that they plan to reevaluate performance measures as they update the MOU, but have not indicated that these new measures will identify targets. Absent targets, VA and IHS are limited in their ability to measure progress toward MOU goals and ultimately make strategic decisions about how and where improvements should be made. At the local level, care for AI/AN veterans relies on coordination among individual VA, IHS, and THP facilities. However, variations in relationships among these many facilities, along with staff turnover, create challenges, heightening the importance of clear and consistent guidance from the national level. Yet no written guidance exists related to referring AI/AN veterans to VA facilities for specialty care. Without such guidance, VA and IHS cannot ensure that facilities have a consistent understanding of the available referral options for AI/AN veterans. Enhancing their guidance in this area will help VA and IHS ensure that AI/AN veterans have access to needed care. Recommendations for Executive Action We are making a total of three recommendations, including two to VA and one to IHS. Specifically: As VA and IHS revise the MOU and related performance measures, the Secretary of Veterans Affairs should ensure these measures are consistent with the key attributes of successful performance measures, including having measurable targets. (Recommendation 1) The Secretary of Veterans Affairs should, in consultation with IHS and tribes, establish and distribute a written policy or guidance on how referrals from IHS and THP facilities to VA facilities for specialty care can be managed.
(Recommendation 2) As VA and IHS revise the MOU and related performance measures, the Director of IHS should ensure these measures are consistent with the key attributes of successful performance measures, including having measurable targets. (Recommendation 3) Agency Comments We provided a draft of this report to VA and the Department of Health and Human Services for review and comment. We have reprinted the comments from VA in appendix I and the comments from the Department of Health and Human Services in appendix II. Both departments concurred with our recommendations. The Department of Health and Human Services also provided technical comments, which we incorporated as appropriate. In response to our recommendations to ensure revised performance measures include key attributes of successful performance measures, VA and the Department of Health and Human Services provided information about the process for finalizing the new MOU, including conducting tribal consultation. They noted that VA and IHS will work together to ensure that performance measures under the new MOU include appropriate measurable targets. Regarding our recommendation to VA about establishing and distributing a written policy or guidance on how referrals from IHS and THP facilities to VA facilities for specialty care can be managed, VA noted the Office of Community Care is working on a process to enhance care coordination among all VA and non-VA providers—including IHS and THP providers. VA noted that for IHS and THPs, this will include establishing forms and procedures to refer patients to VA for specialty care, and that VA will provide training to applicable staff once the process and procedures are finalized. VA also noted that it is in the process of establishing an advisory group that will include tribal, IHS, and VA representation, and will make recommendations related to care coordination guidance and policies. The target completion date for establishing this group is spring 2020. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of VA and the Department of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix III. Appendix I: Comments from the Department of Veterans Affairs Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kathleen M. King (Director), William Hadley (Assistant Director), Christina Ritchie (Analyst-in-Charge), Jennie Apter, Shaunessye D. Curry, Jacquelyn Hamilton, and Vikki Porter made key contributions to this report.
Why GAO Did This Study A 2010 MOU set mutual goals for VA and IHS collaboration and coordination related to serving AI/AN veterans. Under this MOU, VA has established reimbursement agreements with IHS and tribal health programs to pay for care provided to AI/AN veterans. In 2013 and 2014, GAO issued two reports on VA and IHS implementation and oversight of the MOU. GAO was asked to provide updated information related to the agencies' MOU oversight. This report examines (1) VA and IHS oversight of MOU implementation since 2014, (2) the use of reimbursement agreements to pay for AI/AN veterans' care since 2014, and (3) key issues identified by selected VA, IHS, and tribal health program facilities related to coordinating AI/AN veterans' care. To conduct this work, GAO reviewed VA and IHS documents, reports, and reimbursement data from 2014 through 2018. GAO interviewed VA and IHS officials at the headquarters level, and officials at 15 VA, IHS, and tribal facilities in four states—Alaska, New Mexico, North Carolina, and Oklahoma—selected based on factors including the number of reported AI/AN veterans served, and geographic diversity. GAO also interviewed organizations representing tribes and tribal health programs. What GAO Found The Department of Veterans Affairs (VA) and the Department of Health and Human Services' (HHS) Indian Health Service (IHS) established a memorandum of understanding (MOU) to improve the health status of American Indian and Alaska Native (AI/AN) veterans through coordination and resource sharing among VA, IHS, and tribes. Since GAO's last report on the topic in 2014, VA and IHS have continued to jointly oversee the implementation of their MOU—for example, through joint workgroups and quarterly meetings and reports—but they lack sufficient measures for assessing progress towards MOU goals. Specifically, while the agencies established 15 performance measures, they did not establish targets against which performance could be measured. 
For example, while the number of shared VA-IHS trainings and webinars is a performance measure, there is no target for the number of shared trainings VA and IHS plan to complete each year. GAO's work on best practices for measuring program performance has found that measures should have quantifiable targets to help assess whether goals and objectives were achieved by comparing projected performance with actual results. VA and IHS officials said they are currently in the process of revising the MOU and updating the performance measures used. However, officials have not indicated that any revised measures will include targets. Total reimbursements by VA for care provided to AI/AN veterans increased by about 75 percent from fiscal year 2014 to fiscal year 2018. This increase mainly reflects the growth in reimbursement from VA to tribal health program facilities—facilities that receive funding from IHS but are operated by tribes or tribal organizations. Similarly, the number of VA's reimbursement agreements with tribal health programs and the number of AI/AN veterans served under the reimbursement agreements also increased during this period. The VA, IHS, and tribal facility officials GAO spoke with described several key challenges related to coordinating care for AI/AN veterans. For example, facilities reported conflicting information about the process for referring AI/AN veterans from IHS or tribal facilities to VA, and VA headquarters officials confirmed that there is no national policy or guide on this topic. One of the leading collaboration practices identified by GAO is to have written guidance and agreements to document how agencies will collaborate. Without a written policy or guidance about how referrals from IHS and tribal facilities to VA facilities should be managed, the agencies cannot ensure that VA, IHS, and tribal facilities have a consistent understanding of the options available for referrals of AI/AN veterans to VA specialty care.
This could result in an AI/AN veteran receiving, and the federal government paying for, duplicative tests if the veteran is reassessed by VA primary care before being referred to specialty care. What GAO Recommends GAO is making three recommendations—one each to VA and IHS to establish measurable targets for performance measures and one to VA to establish written guidance for referring AI/AN veterans to VA facilities for specialty care. VA and HHS concurred with these recommendations.
Background Coastal communities face hazards from coastal storms and flooding that can cause loss of life, property damage, and damage to the environment. More specifically, coastal communities face threats from erosion and damages from waves, wind, and storm surges. For example, during Superstorm Sandy in 2012, shoreline water levels rose across the East Coast, causing billions of dollars in property damage to homes and businesses. These threats can be exacerbated by several factors, including sea level rise and commercial and residential development, according to Corps documents on coastal risk reduction and resilience. For example, rising sea levels increase the risks from regular tidal flooding and coastal storms, and new construction along coastlines can increase the number of people and buildings at risk from the storms. The Corps constructs projects, which may include the use of hard structures, to help reduce the risks from coastal storm hazards and mitigate erosion, wave damage, and flooding. The Corps has decades of experience developing projects that use hard structures, such as revetments, seawalls, and storm surge barriers, to reduce the risks from coastal storm hazards, according to a 2014 report by the National Academy of Sciences (see fig. 1). Natural infrastructure can also be designed and developed for coastal storm and flood risk reduction purposes. Natural infrastructure can involve several types of natural features that have the potential to reduce risks to coastal areas from storms (see fig. 2). Diverse natural features occur in different areas of the United States. For example, some areas along the Florida Gulf Coast are host to mangroves—coastal wetlands found in tropical and subtropical regions—that can reduce the impacts of high energy waves from storm surges. The extent to which natural infrastructure can reduce risks to coastal areas from storms and flooding depends on the types of natural features being used.
For example, underwater vegetation, such as seagrass, has less capacity to reduce wave energy than a coral reef, which is a hard underwater structure, according to scientific studies. According to a 2014 National Academy of Sciences report, in addition to reducing the risks of storms and flooding for coastal communities, projects using natural infrastructure may provide other benefits, depending on the type of natural feature associated with the project. Among other things, natural infrastructure has the potential to enhance commercial and recreational fisheries and create recreational opportunities. For example, natural infrastructure may support fish habitats, which could enhance a commercial or recreational fishery. In addition, wetlands may improve habitats for birds, which could enhance bird watching activities. Similarly, replenishing beaches may provide more beach area for individuals to use for recreational activities, and provide nesting habitat for birds and sea turtles. Corps Organization The Corps’ Civil Works program—responsible for water resources projects—is organized in three tiers: a national headquarters in Washington, D.C.; eight regional divisions; and 38 districts (see fig. 3). Corps headquarters primarily develops the policies and guidance that the agency’s divisions and districts carry out as part of their oversight responsibilities for the water resources projects under the Corps’ purview. Corps districts are responsible for planning, engineering, constructing, and managing water resources projects in their districts, including projects that consider or use natural infrastructure. The Corps has several programs and initiatives related to using natural infrastructure for water resources projects. For example, the Engineer Research and Development Center, the research organization within the Corps, manages a portfolio of research related to water resources projects that includes research focused on flood risk management and coastal systems. 
The Corps also has an initiative called Engineering With Nature®, which the Corps’ scientists and engineers developed to facilitate using sustainable practices in Corps projects. The Corps’ Water Resources Project Planning Process The Corps develops water resources projects, including coastal storm and flood risk management projects, in conjunction with nonfederal sponsors, such as state and local governments. According to Corps guidance, the planning process for these projects begins with a nonfederal sponsor identifying a problem and approaching the Corps to help develop a solution. Upon statutory authorization for a study and appropriations to fund it, the Corps and the nonfederal sponsor enter into an agreement to conduct a feasibility study for a potential project. Nonfederal sponsors are to participate in the planning process, as well as remain involved through project design, construction, and post-project operations and maintenance. For example, for projects where the Corps constructs hard infrastructure, such as a seawall, the nonfederal sponsor is to assume responsibility for monitoring and maintenance costs associated with the seawall after its construction. In contrast, for a project that involves replenishing a beach, the Corps and the nonfederal sponsor usually share the cost of replenishment for a specific period of time, typically 50 years. The U.S. Water Resources Council’s 1983 Economic and Environmental Principles and Guidelines for Water and Related Land Resources Implementation Studies (Principles and Guidelines) outline the standards and procedures that the Corps is to follow for planning water resources projects, including those with coastal storm and flood risk management objectives. The Principles and Guidelines establish that the federal objective of water resources projects is to contribute to national economic development while protecting the nation’s environment. 
The Corps implements the planning process outlined in the Principles and Guidelines by conducting feasibility studies for proposed water resources projects. The Corps’ Planning Guidance Notebook (Planning Guidance) provides detailed guidance on how to implement the general process outlined in the Principles and Guidelines for planning water resources projects. The Corps’ feasibility study process includes four major phases and five milestones, as shown in figure 4. The Corps initiates a feasibility study by forming a project team, comprising Corps engineers, economists, planners, and other specialists, to conduct the study. The Corps project team begins with a scoping phase that specifies the problem, such as the potential for coastal storm and flood damage, and identifies opportunities for a project to address the problem. The project team then inventories conditions in the project area, including physical, economic, and social conditions, and forecasts how these conditions may change over the life of a potential project. As it continues the scoping phase, the project team identifies various measures that could address the problem, such as replenishing an existing beach or constructing a seawall. The project team then develops potential individual measures or combinations of measures (e.g., beach replenishment and seawall construction) into an initial list of alternatives. Since 2016, the Corps has been required by statute to consider natural infrastructure in certain circumstances. With its initial list of alternatives, the Corps project team is to then evaluate each alternative by (1) comparing it to the scenario of proceeding with no project; (2) applying criteria established in the Principles and Guidelines; (3) identifying beneficial and adverse effects of each alternative; and (4) considering other relevant factors, such as compliance with environmental requirements.
To identify beneficial and adverse effects of each alternative, the Corps uses four general categories established in the Principles and Guidelines, as shown in table 2. The Corps’ Planning Guidance states that project teams should evaluate alternatives using the four categories of analysis, but the evaluation from two categories—National Economic Development and Environmental Quality—must be included in each feasibility study. According to the Planning Guidance, evaluating projects’ potential costs and benefits through these categories of analysis provides a basis for determining which alternatives should be eliminated from consideration, modified, or selected for further analysis. This evaluation can eliminate alternatives that do not meet planning objectives and may narrow the initial list of alternatives to a final list for more detailed analyses and comparison. Corps officials stated that the process of evaluating alternatives can be iterative and is project specific. The Corps project team then is to conduct detailed analyses of its final list of alternatives to compare them to each other and select a recommended alternative. The project team includes the recommended alternative in a draft report with its analysis. The draft report is made available for review and comment by nonfederal sponsors, federal and state agencies, and other stakeholders. The project team incorporates comments into the report, as appropriate, and determines whether the agency will endorse the recommended alternative. The project team finalizes its feasibility study after internal review. The Corps then prepares a report summarizing the proposed plan—known as the Chief’s report—and submits it to Congress for consideration and potential authorization. 
The Corps Typically Identified Project Costs and Damage Reduction Benefits for Selected Projects That Used Natural Infrastructure Based on our review of Corps guidance and eight selected projects that used natural infrastructure, we found that the Corps typically identified project costs and damage reduction benefits in selecting the alternative, although for some projects it also considered additional benefits, such as recreational benefits. Once a Corps project team develops a final list of alternatives in conducting a feasibility study for a particular project, the project team is to conduct an economic analysis for each alternative. This analysis allows the team to compare costs and benefits directly across the alternatives, including alternatives using natural infrastructure, hard infrastructure, or a combination of the two. Specifically, the project team is to develop estimates for each project alternative’s net economic benefits—benefits minus costs—to identify and select the project alternative with the maximum net benefits. The Corps’ Planning Guidance states that the Corps shall select coastal storm and flood risk management projects determined to have the maximum net benefits. Our review of Corps guidance and eight selected projects identified the following costs and benefits that the Corps generally incorporated into its economic analyses: Project costs. According to the Corps’ Planning Guidance, project costs include three categories: implementation costs, other direct costs, and associated costs. Implementation costs, for example, include planning and design, construction, construction contingency, operations, maintenance, repair, and other costs necessary to implement a project. The eight selected projects that we reviewed included analyses of project costs, which mostly focused on implementation and interest costs. 
For example, the costs for the Corps’ Jacksonville District Lido Key project included initial construction costs (i.e., beach construction and hard infrastructure designed to reduce shore currents), future beach replenishment costs (i.e., operations related to placing material on beaches to replenish eroding shores), and monitoring costs (e.g., measurement of beach fill, sediment type, and habitat quality). Damage reduction benefits. Reducing damages to existing structures, including homes and commercial buildings, is the primary benefit the Corps considers when identifying benefits for coastal storm risk management project alternatives, according to the Corps’ Planning Guidance. The guidance outlines general steps for estimating damage reduction benefits, which are to be calculated and included in each coastal storm and flood risk management alternative’s economic analysis. In seven of the eight projects we reviewed, the Corps analyzed damage reduction benefits as part of its economic analysis. For example, the Corps’ project team for the New York District Union Beach project determined the potential damage reduction benefits of each alternative by estimating the alternative’s potential to reduce (1) damages to coastal property from flooding and waves, (2) public emergency spending, and (3) administrative costs for the National Flood Insurance Program (see fig. 5). We also found that for some selected projects, the Corps identified and incorporated additional benefits into the projects’ economic analyses, including the following: Incidental recreational benefits. Corps project teams may include in the economic analysis recreational benefits that stem directly from the project alternative but that are incidental to the primary purpose of damage reduction, according to the Corps’ Planning Guidance. 
Specifically, Corps project teams may include recreational benefits, such as increases in recreational visits because beaches are larger, in their economic analysis of project alternatives, but recreational benefits are limited to no more than 50 percent of the total economic benefits used to justify an alternative (i.e., demonstrate that an alternative has greater benefits than costs). After an alternative has been economically justified, the team can use the full estimated recreational benefits with the damage reduction benefits to select the alternative with maximum net benefits. In our review of eight projects, we identified four projects where the Corps project team included recreational benefits in its economic analysis for the project alternative that was selected. For one such project, the Los Angeles District’s Encinitas-Solana Beach project, the Corps’ economic analysis showed that the selected project alternative had lower damage reduction benefits than project costs. However, when the Corps added recreational benefits—as allowed by Corps policy—the combined annual damage reduction and recreational benefits resulted in the alternative having greater benefits than costs (see fig. 6). Other direct incidental benefits. The Corps may also consider other direct incidental benefits in its economic analysis, as appropriate, according to the Principles and Guidelines. In our review of eight projects, we identified three projects that included estimated incidental benefits aside from recreational benefits. The three projects included economic benefits associated with reduced maintenance costs for local communities, whose expenses for maintaining local beaches would decline after the Corps projects were constructed. Other than these reduced maintenance costs, the Corps did not include other types of direct incidental benefits, such as environmental or other social benefits, in the economic analyses for the eight projects we reviewed.
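The two-step rule described above — recreational benefits capped at 50 percent of the total benefits used to justify an alternative, but counted in full when selecting among justified alternatives — can be illustrated with a short sketch. All names and dollar figures below are hypothetical and are not drawn from any actual Corps project.

```python
# Sketch of the two-step recreational benefits rule summarized above.
# Figures are hypothetical annualized benefits and costs (in millions).

def is_justified(damage_benefits, recreation_benefits, costs):
    """An alternative is justified if counted benefits exceed costs, with
    recreational benefits limited to 50 percent of the total benefits used
    (i.e., they may not exceed the damage reduction benefits counted)."""
    capped_recreation = min(recreation_benefits, damage_benefits)
    return damage_benefits + capped_recreation > costs

def net_benefits(damage_benefits, recreation_benefits, costs):
    """Once an alternative is justified, full recreational benefits count
    toward its net benefits (benefits minus costs)."""
    return damage_benefits + recreation_benefits - costs

alternatives = {
    "Alternative A": dict(damage_benefits=4.0, recreation_benefits=3.5, costs=5.0),
    "Alternative B": dict(damage_benefits=5.5, recreation_benefits=1.0, costs=5.2),
}

# Screen out unjustified alternatives, then select the maximum net benefits.
justified = {name: a for name, a in alternatives.items() if is_justified(**a)}
selected = max(justified, key=lambda name: net_benefits(**justified[name]))
```

In this hypothetical, Alternative A is justified only because recreational benefits close the gap between its damage reduction benefits and its costs, mirroring the Encinitas-Solana Beach situation described above.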
According to Corps officials, some project alternatives using natural infrastructure may provide direct incidental benefits that are not included in the economic analysis, such as environmental and social benefits. For example, the draft feasibility study for the New York District’s Jamaica Bay project states that natural infrastructure can provide direct incidental benefits, such as improving ecosystems, filtering water, and improving aesthetics. The Corps acknowledged these incidental benefits and their importance to communities in its draft feasibility study, but did not incorporate these benefits into its economic analysis because they could not be monetized, according to Corps district officials. Corps headquarters officials said incidental benefits that cannot be monetized in the economic analysis are considered in the planning process through the evaluation of other Principles and Guidelines categories. Two reports published by the National Academy of Sciences stated that when assessing project alternatives, the Corps primarily uses qualitative measures to assess benefits that are difficult to monetize, an approach that relegates such effects to secondary status compared with the monetized estimates of costs and benefits. Moreover, a 2004 National Academy of Sciences report found that the Principles and Guidelines outlines a process that focuses on the effects that can be monetized, which does not allow for full consideration of a project’s total economic effects. Nonetheless, for three of the eight projects we reviewed, we found that the Corps modified its approach in selecting the use of natural infrastructure as part of the recommended alternative. For instance, for the Encinitas-Solana Beach project, the Corps granted an exception to its planning process and recommended a locally preferred plan.
In certain circumstances, Corps project teams can deviate from the Corps’ Planning Guidance that calls for the Corps to select the project alternative with the maximum net benefits. Corps headquarters officials said that requesting such an exception is the primary method the agency uses for recommending a project alternative that does not meet the Corps’ maximum net benefits requirement for a project focused solely on coastal storm or flood risk management. For the Encinitas-Solana Beach project, the California Coastal Commission found that the Corps’ proposed alternative with the maximum net benefits was inconsistent with the mission of California’s coastal management program to protect and enhance the state’s coastal environment. In particular, the Commission had concerns about the size of the project and the amount of sand to be added to the beach under the proposed alternative, as well as the potential adverse effects on a nearshore natural reef and marine resources. In response, the Corps’ Los Angeles District worked with the project’s nonfederal sponsors to address the commission’s concerns and revised the project by reducing its size and potentially lessening its environmental impacts. The commission approved the revised project alternative in November 2013. The Corps’ Planning Guidance also allows projects with multiple objectives to incorporate other analyses in selecting a recommended alternative. For the Philadelphia District’s Lower Cape May project, ecosystem restoration was the project’s primary objective, but it also had a coastal storm risk management objective. According to the project’s feasibility study, the project focused on protecting and restoring a freshwater marsh that was being flooded with salt water from storms because of continued beach erosion.
The Corps used a cost-effectiveness analysis to meet the primary objective, which compared environmental measures (e.g., the number of acres of habitat restored) with the costs of different alternatives. In addition, this project included a beach component to protect the marsh from saltwater intrusion. In the process of designing beach alternatives, the Corps project team conducted a damage reduction benefit analysis to determine an optimal size for the beach that would provide the greatest net damage reduction benefit to nearby communities. This analysis helped inform the Corps’ decision to select a beach design option that met the project’s primary objective of protecting the ecosystem, while also providing the most incidental damage reduction benefits to local communities, according to Corps district officials. For the third project, the New York District’s Jamaica Bay project, the Corps incorporated natural features into the project, although it did not directly include the economic benefits of these features in its economic analysis. For the project, the Corps project team recommended an alternative that was designed to address frequent flooding within Jamaica Bay at three locations. The project team incorporated wetlands into the design at one location, along with hard infrastructure. The nonfederal sponsors of the project told us that they advocated for the inclusion of these natural features, where appropriate, because of their risk reduction and ecological benefits. In response to the interests of nonfederal sponsors, the Corps project team developed and recommended the alternative incorporating hard infrastructure, such as floodwalls, along with coastal wetlands.
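A cost-effectiveness analysis of the kind described for the Lower Cape May project compares a nonmonetized environmental output, such as acres of habitat restored, against cost rather than converting the output into dollars. A minimal sketch follows; the plan names, acreages, and costs are hypothetical and are not taken from the project's feasibility study.

```python
# Minimal sketch of a cost-effectiveness comparison: environmental output
# (acres of habitat restored) is weighed against annual cost rather than
# monetized as a benefit. All names and figures are hypothetical.

alternatives = [
    {"name": "Plan 1", "acres_restored": 120, "annual_cost_millions": 2.4},
    {"name": "Plan 2", "acres_restored": 200, "annual_cost_millions": 3.0},
    {"name": "Plan 3", "acres_restored": 220, "annual_cost_millions": 4.4},
]

for alt in alternatives:
    # A lower cost per acre restored means the plan is more cost-effective.
    alt["cost_per_acre"] = alt["annual_cost_millions"] / alt["acres_restored"]

most_cost_effective = min(alternatives, key=lambda a: a["cost_per_acre"])
```

In practice the Corps would also screen alternatives for incremental cost — whether each additional acre of output is worth its added cost — but the basic ranking step looks like the comparison above.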
The Corps did not include the risk reduction benefits from the wetlands in the economic analysis, but the draft feasibility study noted that the project was economically justified based on the monetary benefits of the hard infrastructure alone and that the wetlands provided additional benefits that could not be monetized. The Corps Faces Challenges in Developing Cost and Benefit Information for Some Types of Natural Infrastructure and Has Initiated Steps to Address Them Based on our literature review, agency documentation, and interviews with Corps officials and other stakeholders, we found that the Corps faces challenges developing cost and benefit information for some natural infrastructure to help inform the process for selecting project alternatives and conducting economic analyses in feasibility studies. Specifically, these challenges related to (1) assessing the performance of some types of natural infrastructure and (2) monetizing the social and environmental benefits associated with using natural infrastructure. The Corps recognizes the need to obtain additional data to better develop cost and benefit information for some types of natural coastal infrastructure, and it has begun taking steps to do so. Challenge in Assessing the Performance of Some Types of Natural Infrastructure Information is not readily available on the performance of some types of natural features in reducing coastal storm and flood damages, which makes it challenging for the Corps to develop cost and benefit information for these features and compare them to other alternatives, such as those that use hard infrastructure. For example, Corps headquarters officials said that—in contrast to beaches and dunes—there are significant knowledge gaps about the extent to which wetlands, reefs, and subaquatic vegetation can reduce the risks associated with coastal storms by, for example, moderating wave heights and flooding. 
In addition, there are knowledge gaps about how these natural features will change over time and how any changes might affect the long-term performance of the features. A Corps report from January 2015 also identified knowledge gaps in understanding how some natural infrastructure, such as wetlands, may perform during coastal storms or floods. According to the report, wetlands may reduce storm surge, but in some instances water can be redirected, potentially causing a storm surge increase elsewhere. Corps officials noted that all structures—whether natural or hard—change over time, requiring maintenance and repair, but said that natural infrastructure may change more dramatically than hard infrastructure and over a shorter period of time. For example, a healthy wetland could restore itself and reduce maintenance costs after a major storm or require the Corps to take action to restore the wetland after the storm event, which could increase the costs of maintaining the wetland. Corps officials also stated that there are knowledge gaps regarding whether wetlands can absorb major storm surges and how these features would perform in the event of recurring coastal storms in a short period of time. Specifically, natural features may be damaged during intense storms (e.g., wetlands can erode and vegetation may be stripped away), which may degrade the long-term performance of the features. Because the Corps does not have information on performance for some natural features, it has been unable to update engineering guidance to include the use of some natural features, according to Corps officials. A Corps headquarters official explained that the agency must first develop a broader understanding of how some natural features, such as wetlands, perform under various coastal storm scenarios over time before it can begin to develop design guidance for using these features for coastal storm protection and flood risk management projects.
A Corps headquarters official said that the agency recognizes the need to obtain additional information on natural infrastructure and has initiated steps to address the challenge related to developing information on the performance of some types of natural infrastructure. In particular, in October 2016, the Engineer Research and Development Center began collaborating with several entities, including other federal agencies, international partners, academic institutions, and nongovernmental organizations, to develop guidelines for using some types of natural infrastructure. According to the scoping document, this effort is to entail developing guidelines to support various phases of building natural infrastructure projects, including conceptualization, design, engineering, construction, and maintenance. According to the Corps official, an anticipated key output from the international effort includes developing information on defining performance for different types of natural infrastructure features and options for measuring performance depending on project objectives. The final product is expected to include chapters with information on analyzing natural infrastructure benefits and related monitoring, maintenance, and adaptive management issues, among others. The Corps official stated that the guidelines will not be official Corps guidance or policy, but Corps project teams and other practitioners can use the guidelines as a resource for identifying best practices in planning projects and assessing potential alternatives. For example, the guidelines will include case studies illustrating design and engineering concepts for certain types of natural features. The guidelines are scheduled for publication in March 2020. The Corps has also developed a separate internal initiative to help fill knowledge gaps regarding how some natural features’ performance can provide benefits relevant to flood risk management, among other benefits. 
Specifically, the Corps’ Engineering With Nature® Initiative is focused on sharing emerging natural infrastructure best practices and communicating the information to staff in the Corps’ district offices and other key stakeholders. According to a Corps official, the goal of this initiative, among other things, is to help familiarize the Corps’ district staff with existing natural infrastructure information and relevant case studies. The Corps’ Galveston and Philadelphia Districts have projects that may incorporate natural infrastructure. For example, the Galveston District is considering opportunities through the Coastal Texas study to use natural features, such as barrier islands, wetlands, and reefs, in combination with hard infrastructure (e.g., levees), to reduce the risks from storms and floods. Similarly, the Corps’ Philadelphia District is considering a plan to design, construct, and evaluate natural features as part of the New Jersey Back Bays Storm Flood Risk Management study. In addition, in 2018, the Corps’ coastal working group initiated a project within the Corps to help identify natural infrastructure knowledge gaps and prioritize key areas for research based on requests for information received from Corps’ districts. The Corps plans to incorporate information gathered from this project into a strategic plan that is intended to help inform research funding decisions for fiscal year 2020, according to a Corps official. Challenge in Monetizing Environmental and Social Benefits Our review of economic literature identified challenges in estimating the total economic benefits associated with using natural infrastructure features. Several studies noted that data for conducting economic analyses are not readily available. For example, one study noted that there is insufficient information on how restoring wetlands might affect the survival of certain endangered species.
Such information is needed, according to the study, to provide insight on the extent to which such features might generate economic benefits. Another study noted that because projects that combine natural features with more traditional structures (i.e., hybrid projects) are relatively new, less is known about their effectiveness or their costs and benefits. Finally, according to another study, estimating recreational benefits associated with natural habitats, such as coastal marshes, can be difficult because there is insufficient information about the extent to which the public visits those sites. In the eight projects we reviewed, Corps project teams did not estimate incidental benefits for coastal storm and flood risk management projects other than recreational benefits and avoided maintenance costs. As previously discussed, environmental and social benefits are considered incidental benefits, and Corps guidance indicates that they do not have to be included in the economic analysis. On the other hand, when assessing potential alternatives of coastal storm and flood risk management projects in its feasibility studies, the Corps can quantify or describe the benefits qualitatively and consider these effects during the planning process, outside of the economic analysis. For example, the Corps has measures to quantify changes in habitat, such as number of acres of wetlands restored. The Corps can also qualitatively describe habitat benefits for specific species. However, these nonmonetized benefits may not affect the selection of the recommended alternative, which is generally based on the monetized net benefit estimates of each proposed alternative. The Corps has begun developing a process for identifying, describing, and considering a broader array of potential benefits when assessing natural infrastructure alternatives for specific projects.
Specifically, a June 2017 memorandum from the Corps’ Director of Civil Works indicated that projects with coastal storm and flood risk management objectives as well as other objectives should consider social and environmental benefits in the formulation, design, and implementation of projects within existing legislation and Corps policy. A Corps headquarters official said that the agency is not attempting to monetize all potential benefits but is considering options for accounting for potential benefits other than through the traditional monetary assessments of costs and economic benefits. A Corps headquarters planning group is currently working on developing an initiative that would identify a process for using a flexible approach for considering the social and environmental effects of natural infrastructure for coastal storm and flood risk management projects. For example, project teams may have the option of determining whether to incorporate nonmonetized social and environmental benefits, such as enhancing public safety in coastal communities, into the decision-making process for selecting the recommended alternative. The Corps official stated that the agency has begun working on developing guidance for this initiative and expects to issue the guidance in calendar year 2019. Agency Comments We provided a draft of this report for review and comment to the Department of Defense. The department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix II. Appendix I: Selected Projects with Coastal Storm and Flood Risk Management Objectives Using Natural Infrastructure This appendix presents information on the eight projects that we selected for review with coastal storm and flood risk management objectives that the U.S. Army Corps of Engineers (Corps) constructed and that included natural infrastructure. We randomly selected eight projects across Corps districts on the Atlantic, Gulf, and Pacific coasts. In seven of the eight projects, the Corps recommended alternatives with either beaches or dunes as the type of natural features to be used for coastal storm and flood risk reduction (see table 3). According to several Corps district officials we interviewed, alternatives featuring beaches are often most appropriate because other natural features, such as wetlands, would not survive the impacts of the high-energy storm waves in open ocean coastal areas where these projects are located. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Leo Acosta (Analyst-in-Charge), Mark Braza, Eric Charles, Timothy Guinane, and Jeanette Soares made key contributions to this report. Important contributions were also made by John Delicath and Sara Sullivan.
Why GAO Did This Study The Corps constructs water resources projects to reduce risks to coastal communities from storm damage, among other things. These projects can involve building hard structures, such as seawalls, to protect against flooding and wave damage. The Corps and some state and local agencies are increasingly considering using natural infrastructure, such as wetlands, to reduce risks from coastal storms and flooding. GAO was asked to review the uses, costs, and benefits of natural coastal infrastructure for the Corps' coastal storm and flood risk management projects. This report describes (1) how the Corps considered costs and benefits for selected projects that used natural infrastructure and (2) challenges the Corps faces in developing cost and benefit information for using natural infrastructure and steps taken to address them. GAO reviewed Corps guidance; obtained information on projects that used natural infrastructure and received funding from fiscal years 2012 through 2017; randomly selected eight coastal storm and flood risk reduction projects from the Atlantic, Gulf, and Pacific coasts; and reviewed each project's planning documentation and economic analyses. Findings from these projects are not generalizable to all Corps' projects. GAO also reviewed economic literature, reviewed Corps documents related to the use of natural infrastructure, and interviewed Corps officials and stakeholders with experience in using natural infrastructure. What GAO Found The U.S. Army Corps of Engineers (Corps) typically identified project costs and damage reduction benefits for the eight projects using natural infrastructure that GAO reviewed. In selecting projects, the Corps is to conduct economic analyses of project alternatives, which may include hard structures, natural infrastructure, or a combination, to compare their costs and benefits. 
Corps guidance states that for coastal storm and flood risk management projects it is to select the alternative determined to have the maximum net benefits (benefits minus project costs). The Corps calculated project costs for the eight projects, such as planning, design, construction, and maintenance costs. It calculated damage reduction benefits for seven projects by estimating reduced damages to existing structures in the project area, including to homes and commercial buildings. Corps guidance allows the economic analysis to also include incidental benefits of a project, and four projects incorporated recreational benefits of alternatives, such as increases in recreational visits because beaches would be larger. The Corps did not include other types of incidental benefits, such as environmental or other social benefits, for the eight projects. Corps documentation for one project identified environmental benefits of constructing wetlands as part of the project, such as improving ecosystems and filtering water. However, Corps officials said they did not incorporate these benefits into the economic analysis because the benefits could not be monetized. The Corps faces challenges in developing cost and benefit information for some types of natural infrastructure and has initiated steps to address this. For example, a 2015 Corps report identified knowledge gaps in understanding how natural coastal infrastructure, such as wetlands, may perform during coastal storms. These knowledge gaps make it challenging for the Corps to develop cost and benefit information for some natural infrastructure alternatives and compare them to other alternatives, such as those that use hard infrastructure. The Corps recognizes the need to obtain additional data to better develop cost and benefit information and has begun taking steps to do so. For example, in 2018, the Corps initiated a project to help identify natural infrastructure knowledge gaps and prioritize key areas for research. 
The Corps plans to incorporate information gathered from this project into a strategic plan that is intended to help inform research funding decisions for fiscal year 2020, according to a Corps official.
Background The decennial census produces data vital to the nation. The data are used to apportion the seats of the U.S. House of Representatives; realign the boundaries of the legislative districts of each state; allocate billions of dollars each year in federal financial assistance; and provide a social, demographic, and economic profile of the nation’s people to guide policy decisions at each level of government. Furthermore, businesses, nonprofit organizations, universities, and others regularly rely on census data to support their work. Given the importance of the decennial census to the nation, it is important for the Bureau to manage risks that could jeopardize a complete, accurate, and cost-effective enumeration. To assist federal government leaders in managing such complex and inherently risky missions across their organizations, in prior work we developed an ERM framework that, among other things, identifies essential elements for federal ERM and good practices that illustrate those essential elements. Notably, these elements and practices apply at all levels of an organization and across all functions—such as those related to managing risks to the 2020 Census. Furthermore, Office of Management and Budget (OMB) Circulars No. A-11 and A-123 require federal agencies to implement ERM to ensure their managers are effectively managing risks that could affect the achievement of agency strategic objectives. As discussed in our ERM Framework, ERM is a decision-making tool that allows leadership to view risks as an interrelated portfolio rather than addressing risks only within silos. Fundamental to ERM is the development of risk mitigation and contingency plans. Mitigation plans detail how an agency will reduce the likelihood of a risk event and its impacts, should it occur. Contingency plans identify how an agency will reduce or recover from the impact of a risk after it has been realized. 
Among other things, these plans provide the roadmap for implementing the agency’s selected risk response and the vehicle for monitoring, communicating, and reporting on the success of that response. In developing these plans, it is important that agencies keep in mind the interaction of risks and risk responses, as the response to one risk may affect the response to another or create a new risk entirely. We also developed a Fraud Risk Framework to provide a comprehensive set of leading practices that serves as a guide for agency managers developing and enhancing efforts to combat fraud in a strategic, risk- based manner. The framework is designed to focus on preventive activities, which generally offer the most cost-efficient use of resources since they enable managers to avoid a costly and inefficient pay-and- chase model of recovering funds from fraudulent transactions after payments have been made. The Bureau Identified 360 Active Risks to the 2020 Census Consistent with our ERM framework, the Bureau developed a decennial risk management plan which, among other things, requires that it identify risks to the 2020 Census at the portfolio and program levels. Portfolio risks are those that could jeopardize the success of the 2020 Census as a whole, and they typically span several years with many potential risk events over the period. Program risks are narrower—they could jeopardize the success of an individual program, including the 35 operations that support the 2020 Census as well as the 2018 End-to-End Test. As of December 2018, the Bureau had identified 360 active risks to the 2020 Census—meaning the risk event could still occur and adversely impact the census. Of these, 30 were at the portfolio level and 330 were at the program level. 
As shown in figure 1, the greatest number of active program risks was to the Systems Engineering and Integration operation, which manages the Bureau’s delivery of an IT “System of Systems” to meet 2020 Census business and capability requirements. For example, the Bureau’s description of one of the risks to this operation indicated that if certain key system test plans and schedules are not clearly communicated among and collaborated on by relevant Bureau teams, then the 2020 Census systems are at risk of not meeting performance, cost, and schedule goals and objectives. The Bureau Classified 21 Percent of Active Risks as High Priority The Bureau’s decennial risk management plan requires that it classify risks by priority level. These classifications are intended to highlight the most critical risks and identify where to allocate additional resources. Figure 2 shows how the Bureau had classified the 360 active risks as of December 2018. To determine risk priority, the Bureau’s decennial risk management plan requires that it assign each risk numerical ratings for likelihood of occurrence and potential impact. When multiplied, the result is a numerical priority rating, which the Bureau divides into three classifications for high priority, medium priority, and low priority (see figure 3). The Bureau Determined That It Should Mitigate 67 Percent of Active Risks According to the Bureau’s decennial risk management plan, all portfolio- level risks must be mitigated to reduce the likelihood of the risk event and its impacts, should it occur. In contrast, when a program-level risk is identified, risk owners—the individuals assigned to manage each risk— are to select from the following risk responses. Mitigate. This may be an appropriate response where there are actions or techniques that will reduce the likelihood of the risk event and its impact, should it occur. Watch. 
This may be an appropriate response where a trigger event can be identified far enough in advance so that mitigation activities can be delayed until then.

Accept. This may be an appropriate response where the probability and potential impact of the risk is so low that mitigation actions do not appear necessary or the impact can be absorbed if the risk occurs.

As of December 2018, the Bureau planned to mitigate 67 percent of the active risks it had identified (see table 1). Notably, this signifies that the Bureau determined there were actions it could take or techniques it could employ to reduce the likelihood of the majority of risks to the enumeration or their impact, should they occur.

The Bureau Had Mitigation and Contingency Plans for Most Risks, but Not Clear Time Frames for Plan Development and Approval or a Clear Status for Mitigation Plans

The Bureau Had Mitigation and Contingency Plans for Most Risks That Required Them

The Bureau’s decennial risk management plan sets out the following requirements for developing mitigation and contingency plans:

Mitigation plans are required for all active portfolio risks and for all active program risks with a mitigate risk response.

Contingency plans are required for all active portfolio risks with a high- or medium-priority rating, and a moderate or higher likelihood of occurrence. Contingency plans are also required for active program risks with a high- or medium-priority rating, a moderate or higher likelihood of occurrence, and a risk response of mitigate or accept.

Of the 360 active risks to the census as of December 2018, 242 (67 percent) met the Bureau’s criteria for requiring a mitigation plan (see table 2). According to the Bureau’s risk registers, 232 of these risks (96 percent) had a mitigation plan. In addition, 146 of the active risks (41 percent) met the Bureau’s criteria for requiring a contingency plan. According to the Bureau’s risk registers, 102 of these risks (70 percent) had a contingency plan.
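The priority-rating computation and the plan-requirement criteria described above can be sketched in code. This is an illustrative sketch only: the Bureau's actual rating scales and the numeric cutoffs separating high, medium, and low priority are not given in this report, so the values in `priority` are assumptions; the plan-requirement logic follows the criteria stated above.

```python
# Hedged sketch of the Bureau's documented decision rules, not its tooling.
# The 1-5 rating scales and the priority cutoffs below are assumed for
# illustration; the report says only that likelihood x impact yields a
# rating divided into high-, medium-, and low-priority classifications.

def priority(likelihood: int, impact: int) -> str:
    """Multiply the two numerical ratings and bucket the result (cutoffs assumed)."""
    rating = likelihood * impact          # e.g., 1-5 scales give 1-25
    if rating >= 15:
        return "high"
    if rating >= 6:
        return "medium"
    return "low"

def needs_mitigation_plan(level: str, response: str) -> bool:
    # Required for all active portfolio risks, and for program risks
    # with a "mitigate" risk response.
    return level == "portfolio" or response == "mitigate"

def needs_contingency_plan(level: str, response: str, prio: str,
                           likelihood_moderate_or_higher: bool) -> bool:
    # Both levels require a high- or medium-priority rating and a
    # moderate or higher likelihood of occurrence.
    if prio not in ("high", "medium") or not likelihood_moderate_or_higher:
        return False
    if level == "portfolio":
        return True
    # Program risks additionally need a response of mitigate or accept.
    return response in ("mitigate", "accept")

# A high-likelihood, high-impact program risk the owner chose to mitigate:
p = priority(4, 5)
print(p)                                                        # high
print(needs_mitigation_plan("program", "mitigate"))             # True
print(needs_contingency_plan("program", "mitigate", p, True))   # True
```

With these rules, a low-priority program risk with a "watch" response requires neither plan, matching the report's observation that only 67 percent of active risks met the mitigation-plan criteria.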
Our prior reporting similarly found that earlier in the decennial cycle, the Bureau did not have mitigation and contingency plans for all risks that required them. In November 2012, we found that the Bureau had mitigation and contingency plans for each of the portfolio risks it had identified at the time, but none for the program risks. We reported that such plans were needed to help the Bureau fully manage associated risks, and we recommended that the Bureau develop risk mitigation and contingency plans for all program risks. In April 2014, the Bureau provided us with program-level risk registers that contained both risk mitigation and contingency plans where appropriate, and we closed the recommendation as implemented. However, as of December 2018, the Bureau was missing required mitigation and contingency plans for both portfolio and program risks.

The Bureau Has Not Set a Clear Time Frame for Developing Mitigation and Contingency Plans

Example of 2020 Census Risk Without Required Contingency Plan

In July 2016, the Bureau added a risk titled, Major Disasters, to its portfolio risk register. The Bureau’s description of the risk stated that if a major disaster—such as an earthquake—occurs during final preparations for or implementation of the 2020 Census, then census operations may not be executed as planned, leading to increased costs, schedule delays, or lower quality data. Leading up to the 2010 Census, Hurricane Katrina devastated the coastal communities of Louisiana, Mississippi, and Alabama; a few weeks later, Hurricane Rita cut across Texas and Louisiana. Damage was widespread. Among other things, in the aftermath of Katrina, the Red Cross estimated that nearly 525,000 people were displaced and their homes were declared uninhabitable.
If a major disaster, such as a hurricane, occurs leading up to or during the 2020 Census, having a contingency plan would help ensure that housing units and their residents are accurately counted, particularly when hundreds of thousands of people—temporarily or permanently—may migrate to other areas of the country. As of December 2018, however, the Bureau had neither a draft nor an approved contingency plan for this risk, although one had been required since the risk was first added to the register nearly 2.5 years earlier. According to the Bureau, though not documented in a contingency plan, it is taking actions to respond if this risk is realized. However, if such actions are reflected in disparate documents or no documents at all, then decision makers are left without a comprehensive picture of how the Bureau is managing this risk to the 2020 Census.

Some of the risks that were missing required plans had been added to the risk registers in recent months, but others had been added more than 3 years earlier. Specifically, the 10 risks without mitigation plans were added from June to December 2018, and the 44 risks without contingency plans were added from June 2015 to December 2018. The one portfolio risk without a required mitigation plan was added in December 2018, and the five portfolio risks without required contingency plans were added in July 2015, July 2016, October 2017, August 2018, and December 2018, respectively. In some instances, a risk may not meet the Bureau’s criteria for requiring a mitigation or contingency plan when first added to the risk register. However, we found that all 10 risks without required mitigation plans and 37 of the 44 risks without required contingency plans met the Bureau’s criteria for requiring such plans within a month of being added to the register (of the 37 risks without a required contingency plan, five were at the portfolio level and 32 were at the program level).
The Bureau’s decennial risk management plan states that mitigation and contingency plans should be developed as soon as possible after risks requiring such plans are added to the risk registers, but it does not include a clear time frame for doing so. According to the Bureau’s 2020 Census Portfolio Risk and Issue Process Manager—responsible for developing, maintaining, and administering the risk management process for both portfolio and program risks to the 2020 Census—no time frame is included because risk owners are aware of their responsibility and a specific time frame would not speed up the process given competing demands on their time. However, the official said the Bureau would consider adding a specific time frame when it updates the decennial risk management plan in 2019. Standards for Internal Control in the Federal Government (Standards for Internal Control) states that management should define objectives in specific terms—including the time frames for achievement—so that they are understood at all levels of the entity. In addition, OMB Circular No. A-123 states that effective risk management is systematic, structured, and timely. Without setting a clear time frame for developing mitigation and contingency plans, some risks may go without them for extended periods, potentially leaving the 2020 Census open to the impact of unmanaged risks.

The Bureau’s Risk Registers Clearly Indicated the Status of Contingency but Not Mitigation Plans

The Bureau’s decennial risk management plan requires that both portfolio and program risk registers include the word “draft” or “approved” alongside each contingency plan. As of December 2018, this status showed that 41 percent of contingency plans in the Bureau’s risk registers were still in draft form and had not been approved by management (29 percent at the portfolio level and 42 percent at the program level).
Specifically, management had approved 60 of the 102 contingency plans (five at the portfolio level and 55 at the program level) but not the remaining 42 (two at the portfolio level and 40 at the program level). On the other hand, the Bureau’s decennial risk management plan includes no requirements for indicating the status of either portfolio or program risk mitigation plans in the risk registers. Our review of the risk registers found that some of the portfolio risk mitigation plans included the word “draft” alongside the plan, but none included any indication of whether the plan had been approved by management. In addition, none of the program risk mitigation plans indicated whether the plan was in draft or had been approved by management, but we found that at least some appeared to be in draft. For example, one program risk mitigation plan stated that the Risk Review Board had recommended contacting three individuals for next steps; however, the plan did not appear finalized because it did not discuss any next steps and it was not clear that further action had been taken. Although the Bureau had mitigation plans in place for 96 percent of risks that required them, without a clear indication of the status of these plans in the risk registers, we were unable to determine how many had been approved by management. According to Bureau officials, the risk registers are Bureau management’s primary source of information regarding risks to the census. Standards for Internal Control states that management should use quality information from reliable sources and clearly document internal controls to achieve the entity’s objectives and respond to risks. Including a clear indication of the status of both mitigation and contingency plans in the risk registers would help to support Bureau officials’ management of risks to the census; in addition, it would help to ensure that those plans are finalized and that the census is not left open to unmanaged risks.
The Bureau Does Not Have a Clear Time Frame for Obtaining Management Approval of Mitigation and Contingency Plans

Of the 42 contingency plans awaiting approval, many had been added to the risk registers in recent months, but others had been added more than 4 years earlier. Specifically, the two portfolio risks were added in September 2014 and August 2017, and the 40 program risks were added from October 2015 to December 2018. Moreover, we found that both of the portfolio risks and 34 of the 40 program risks without finalized contingency plans met the Bureau’s criteria for requiring such a plan within a month of being added to the register. The Bureau’s decennial risk management plan requires risk owners to present mitigation and contingency plans to management for approval as soon as possible after risks requiring such plans are added to the risk registers. However, as with development of the mitigation and contingency plans, the Bureau’s decennial risk management plan does not include a clear time frame for doing so because, according to the Bureau’s 2020 Census Portfolio Risk and Issue Process Manager, a specific time frame would not speed up the process given competing demands on risk owners’ time. As previously noted, Standards for Internal Control states that management should define objectives in specific terms—including the time frames for achievement—so that they are understood at all levels of the entity. In addition, OMB Circular No. A-123 states that effective risk management is systematic, structured, and timely. Without setting a clear time frame for approving draft mitigation and contingency plans, some plans may not be finalized.

The Bureau Did Not Consistently Include Key Information for Managing Risks in the Mitigation and Contingency Plans We Reviewed

Mitigation and contingency plans assist agencies in managing and communicating to agency stakeholders the status of risks.
We reviewed the mitigation and contingency plans for six portfolio-level risks to the 2020 Census, which the Bureau identified as among the “major concerns that could affect the design or successful implementation of the 2020 Census” (see table 3). We found that the Bureau’s mitigation and contingency plans for these risks did not consistently include key information needed to manage them. These six risks, if not properly managed, could adversely affect the cost and quality of the 2020 Census. According to the Bureau’s decennial risk management plan, for each portfolio-level risk the risk owner must develop mitigation and contingency plans using the Bureau’s mitigation and contingency plan templates (see appendixes III and IV for the Bureau’s templates). Those templates require, among other things, that the Bureau specify key activities for reducing the likelihood of the risk and its impacts. We found that the Bureau’s decennial risk management plan generally aligns with our ERM framework, which is designed to help agencies, among other actions, identify, assess, monitor, and communicate risks. However, we also found some instances where the Bureau’s risk management plan did not require mitigation and contingency plans to include certain key attributes we identified, which we discuss below. See figure 4 for a list of key attributes that we used when reviewing mitigation and contingency plans. As indicated in the attribute descriptions, six of the seven attributes are applicable to mitigation plans. Clearly defined trigger events do not apply to mitigation plans because trigger events signal when a risk has been realized and contingency activities must begin. Each of the seven attributes is applicable to contingency plans, although two attributes—activity start and completion dates and activity implementation status—are only applicable if the risk has been realized.
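The applicability rules above can be expressed as a small check. The attribute names below are paraphrased from the report's discussion; figure 4 is not reproduced here, so the exact set of seven is an assumption. Only the exclusions stated in the text are taken from the report.

```python
# Illustrative sketch of the attribute-applicability rules described above.
# Attribute names are paraphrased assumptions standing in for figure 4.

ATTRIBUTES = {
    "risk description", "planned activities", "individuals responsible",
    "activity start and completion dates", "activity implementation status",
    "monitoring plan", "clearly defined trigger events",
}

def applicable(attribute: str, plan_type: str, risk_realized: bool = False) -> bool:
    if plan_type == "mitigation":
        # Trigger events signal that a risk has been realized and contingency
        # activities must begin, so they never apply to mitigation plans.
        return attribute != "clearly defined trigger events"
    # Contingency plans: dates and status apply only once the risk occurs.
    if attribute in ("activity start and completion dates",
                     "activity implementation status"):
        return risk_realized
    return True

print(sum(applicable(a, "mitigation") for a in ATTRIBUTES))         # 6
print(sum(applicable(a, "contingency", True) for a in ATTRIBUTES))  # 7
```

Counting applicable attributes reproduces the six-of-seven split for mitigation plans and all seven for a contingency plan whose risk has been realized.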
As of December 2018, the results of our review of the Bureau’s mitigation and contingency plans for the six portfolio-level risks we selected were in most cases mixed: some mitigation and contingency plans aligned with a particular key attribute, while others did not (see table 4). For two attributes—activity start and completion dates and activity implementation status—we found the Bureau generally included the relevant information across the six selected mitigation plans, which should help ensure that activities are carried out in a timely manner and that agency officials and stakeholders are informed and assured that the risks are being effectively managed. On the other hand, none of the mitigation or contingency plans included a monitoring plan, which would help the Bureau to track whether plans are working as intended. We found that where attributes are required but not consistently implemented, the gap stems from the Bureau not always holding risk owners accountable for fulfilling all of their risk management responsibilities, such as keeping plans up to date. Bureau officials responsible for overseeing risk management for the 2020 Census stated that they encourage risk owners to complete all of their risk management responsibilities; however, risk owners do not always do so because they have competing demands on their time. Therefore, the officials said they are generally satisfied if the risk owners have completed at least some of their risk management responsibilities. However, they also agreed that risk management should be among the Bureau’s top priorities and that risk owners should fulfill all of their risk management responsibilities. Bureau officials also stated that the Bureau is managing risks to the census, even if not always reflected in the mitigation and contingency plans. We acknowledge that the Bureau is taking actions to manage risks to the 2020 Census beyond those reflected in its mitigation and contingency plans. 
However, if these actions are reflected in disparate documents or are not documented at all, then Bureau officials, program managers, and other decision makers are left without an integrated and comprehensive picture of how the Bureau is managing risks to the 2020 Census. Consequently, the Bureau’s risk management efforts are neither clear nor transparent, which may create challenges for decision makers’ ability to quickly and accurately identify essential information to set priorities, allocate resources, and restructure their efforts, as needed, to ensure an accurate and cost-effective enumeration. In addition, where mitigation and contingency plans are not clearly documented and only certain individuals know about them, there is potential for the loss of organizational knowledge, particularly as key personnel change roles or leave the agency altogether. Below we provide examples of gaps, by attribute, in the Bureau’s mitigation and contingency plans for the six risks we reviewed.

Risk-Register Entries Were Missing Key Information

For each portfolio and program risk mitigation and contingency plan, the Bureau’s decennial risk management plan requires risk owners to enter a description of the plan in the relevant risk register. However, our review of risk register entries for both mitigation and contingency plans across all active risks as of December 2018 found they were missing some key attributes, including monitoring plans, activity start and completion dates for most activities, the implementation status for some activities, individuals responsible for activity completion, and clearly defined trigger events. In some instances, the missing attributes were a result of the Bureau not requiring them in the risk register descriptions. In other instances, where the Bureau’s decennial risk management plan does require the attribute in the risk register descriptions, the gap was due to the Bureau not holding risk owners accountable for them.
Some of the attributes missing from the registers were included in the separate mitigation and contingency plans. However, at the program level there are no separate mitigation plans, making the risk registers the only source of information for program-level mitigation activities. According to Bureau officials, after the 2020 Census they plan to require separate mitigation plans for program risks as well. At the same time, Bureau officials noted that they primarily rely on the risk registers to monitor risks to the census and usually do not refer to the separate mitigation and contingency plans. Standards for Internal Control states that management should use quality information from reliable sources that is appropriate, current, complete, accurate, accessible, and provided on a timely basis to achieve the entity’s objectives. Similarly, OMB Circular No. A-123 states that effective risk management is based on the best available information. Because the risk registers are Bureau management’s primary source of information regarding risks to the census—and currently their only source of information on program-level risk mitigation—including this information in the risk registers would help to support Bureau officials’ ability to manage risks to the 2020 Census.

The Bureau’s Approach to Managing Fraud Risk for the 2020 Census Generally Aligns with Selected Components of the Fraud Risk Framework but Does Not Yet Include a Fraud Risk Tolerance or Fraud Referral Plan

The Bureau has designed an approach for managing fraud risk for responses to the 2020 Census. We found that the approach generally aligns with leading practices in the commit, assess, and design and implement components of the Fraud Risk Framework.
Specifically, the Bureau demonstrated commitment to combating fraud by creating a dedicated entity to lead antifraud efforts for the 2020 Census, conducted a fraud risk assessment, and developed a risk response plan, among other actions, consistent with leading practices from the selected components. However, the Bureau has not yet determined the program’s fraud risk tolerance or outlined plans for referring potential fraud to the Department of Commerce Office of Inspector General (OIG) to investigate. Bureau officials described plans and milestones for addressing these gaps, but not for updating the antifraud strategy to include them. Standards for Internal Control states that management should clearly document internal controls to achieve the entity’s objectives and respond to risks. In addition, management should use quality information that is current and complete. Updating the antifraud strategy to include the Bureau’s fraud risk tolerance and plan for OIG referral will help to ensure that the strategy is current, complete, and conforms to leading practices. Appendix IV presents additional details of our review of applicable leading practices. Managers of federal programs maintain the primary responsibility for enhancing program integrity and managing fraud risks. Those who are effective at managing their fraud risks collect and analyze data, identify fraud trends, and use the information to improve fraud risk management activities. Implementing effective fraud risk management processes is important to help ensure that federal programs fulfill their intended purpose, funds are spent effectively, and assets are safeguarded. The Fraud Risk Framework provides a comprehensive set of leading practices that serve as a guide for agency managers developing and enhancing efforts to combat fraud in a strategic, risk-based manner. The Fraud Risk Framework is also aligned with Principle 8 (“Assess Fraud Risk”) of Standards for Internal Control.
It is designed to focus on preventive activities, which generally offer the most cost-efficient use of resources. The leading practices in the Fraud Risk Framework are organized into four components—commit, assess, design and implement, and evaluate and adapt—as depicted in figure 5.

The Bureau Designated an Entity to Manage Fraud Risk and Took Steps to Develop an Organizational Culture Conducive to Fraud Risk Management

The commit component of the Fraud Risk Framework calls for an agency to commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management. This component includes demonstrating a senior-level commitment to integrity and combating fraud, and establishing a dedicated entity to lead fraud risk management activities. The Bureau has taken steps that align with all applicable leading practices in this component, according to our review. Specifically, senior-level commitment to combating fraud helps create an organizational culture to combat fraud. The Bureau showed this commitment by creating an antifraud group, made up of multiple operational divisions within the Bureau—the Decennial Census Management Division, Decennial Information Technology Division, and Decennial Contracts Execution Office—and staff from the Bureau’s technical integration contractor. Staff from these divisions make up the Self-Response Quality Assurance (SRQA) group with the primary purpose of identifying and responding to potentially fraudulent responses received in the 2020 Census. SRQA members were assigned roles and responsibilities to combat fraud in the 2020 Census. According to the framework, antifraud entities should understand the program and its operations; have defined responsibilities and the necessary authority across the program; and have a direct reporting line to senior-level managers within the agency.
We found that SRQA met these leading practices through our interviews with knowledgeable officials who discussed the Bureau’s strategy for managing fraud risk for the 2020 Census, and our review of documentation such as the fraud risk assessment, which listed roles and responsibilities for staff from the divisions in the antifraud group and the technical integration contractor. The group also directly reports to senior-level managers within the agency through weekly status reports that include milestones, activities, and challenges. According to the Fraud Risk Framework, the antifraud entity, among other things, serves as the repository of knowledge on fraud risks and controls; manages the fraud risk-assessment process; leads or assists with trainings and other fraud-awareness activities; and coordinates antifraud initiatives across the program. The Bureau staffed the antifraud entity with members knowledgeable of the program and tasked them with managing the fraud risk assessment process. Also, the members facilitated communication with management and among stakeholders on fraud-related issues through weekly status reports. According to SRQA officials, issues and concerns are escalated to senior-level managers on an as-needed basis so they can be coordinated across the program.

The Bureau Assessed Fraud Risks and Developed a Risk Profile but Has Not Yet Determined Fraud Risk Tolerances

The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and to assess risks to determine a fraud risk profile. This includes assessing the likelihood and effect of fraud risks and determining a risk tolerance. Risk tolerance is the acceptable level of variation in performance relative to the achievement of objectives.
In the context of fraud risk management, if the objective is to mitigate fraud risks—in general, to have a low level of fraud—the risk tolerance reflects managers’ willingness to accept a higher level of fraud risks. Risk tolerance can be either qualitative or quantitative, but regardless of the approach, Standards for Internal Control states that managers should consider defining risk tolerances that are specific and measurable. The first part of the fraud risk assessment process includes leading practices on tailoring the assessment to the program; planning to conduct assessments both at regular intervals and when there are changes to the program or operating environment; identifying specific tools, methods, and sources for gathering information about fraud risks; and involving relevant stakeholders in the assessment process. The Bureau has met all the leading practices in the first part of the assess component, according to our review. Specifically, the Bureau tailored the fraud risk assessment to the 2020 Census as this is the first time an internet-response option will be available for a decennial census in the United States. To identify specific tools, methods, and sources for gathering information about fraud risks, the Bureau met with relevant stakeholders, along with subject-matter experts, and conducted focus groups to develop various fraud scenarios that became a key part of the assessment. The Bureau also involved relevant stakeholders in the assessment process by outlining their roles and responsibilities for the 2020 Census. For example, the Decennial Census Management Division serves as the fraud lead and oversees managing risks such as operational implementation, methodology, and workload demands with support from the other operational divisions in the antifraud group.
According to the Fraud Risk Framework, while the timing can vary, effective antifraud entities plan to conduct fraud risk assessments at regular intervals and when there are changes to the program or operating environment, as fraud risk assessments are iterative and not meant to be onetime exercises. The Bureau’s assessment takes this into account by acknowledging that risk assessment is an ongoing process. The assessment also states that the SRQA team will continue to evaluate and develop modeling techniques to train against existing fraud scenarios, and SRQA welcomes input from all stakeholders to ensure the Bureau identifies fraud risks, and works to implement controls and mitigation plans throughout the 2020 Census. The second part of the fraud risk assessment process includes identifying inherent fraud risks affecting the program; assessing the likelihood and effect of inherent fraud risks; determining a fraud risk tolerance; examining the suitability of existing fraud controls and prioritizing residual fraud risks; and documenting the program’s fraud risk profile (see figure 6). The Bureau met three out of these five leading practices, including identifying inherent fraud risk; assigning numeric rankings for likelihood and impact of various fraud scenarios; and documenting the 2020 Census fraud risk profile, which outlines the strengths and weaknesses of the program. We concluded that one leading practice, examining the suitability of existing fraud controls and prioritizing residual fraud risks, was not applicable since the fraud detection system is new to the 2020 Census and changes the way the Bureau will detect different fraud scenarios. As a result, all fraud risks for the 2020 Census are residual risks.
In reviewing the remaining leading practice in the fraud assessment processes, we found that after identifying inherent fraud risk and assigning numeric rankings for likelihood and impact of various fraud scenarios, the Bureau did not take the next step to determine a fraud risk tolerance. Some of the steps the Bureau took to develop a risk response plan are similar to steps for developing a fraud risk tolerance. Specifically, the Bureau developed a process that classifies self-responses into risk categories of low, medium, or high. Bureau officials stated that they plan to use the classification to determine appropriate follow-up steps based on risk scores generated by its Fraud Detection Analytics Model that was developed by SRQA for the 2020 Census. However, the Bureau did not define thresholds for the low-, medium-, and high-risk categories. These thresholds, if defined, would meet the intent of a fraud risk tolerance by indicating the acceptable level of variation in self-responses. SRQA officials stated that they are developing these thresholds, and therefore its fraud risk tolerance, and plan to have them completed in August 2019. This includes reviewing available information collected through the 2018 End-to-End Test, running simulations, defining thresholds, and then evaluating the results to make adjustments. Responses will receive a score, but until the Bureau defines fraud risk tolerance thresholds for the low-, medium-, and high-risk categories, it cannot effectively implement its antifraud strategy to allocate responses for follow-up or inclusion. This may also affect the Bureau’s ability to evaluate and adapt its antifraud strategy if initial benchmarks are not in place to use for monitoring, with subsequent adjustments potentially requiring additional time and resources. While officials described steps and time frames to develop a fraud risk tolerance, they did not do so for updating the antifraud strategy to include the tolerance.
Updating the antifraud strategy to include the Bureau’s fraud risk tolerance will help to ensure that the strategy is current, complete, and conforms to leading practices.

The Bureau Designed a Response Plan and Collaborated Internally to Mitigate Fraud Risks but Did Not Include Plans to Refer Potential Fraud to the Office of Inspector General

The design and implement component of the Fraud Risk Framework calls for federal managers to design and implement a strategy with specific control activities to mitigate assessed fraud risks and collaborate to help ensure effective implementation. This includes determining risk responses and documenting an antifraud strategy; designing and implementing specific control activities; developing a plan outlining how the program will respond to identified instances of fraud; and establishing collaborative relationships and creating incentives to help ensure effective implementation of the antifraud strategy. For determining risk responses and documenting an antifraud strategy, the framework states that managers should (a) use the fraud risk profile to help decide how to allocate resources to respond to residual fraud risks; (b) develop, document, and communicate an antifraud strategy to employees and stakeholders that describes the program’s activities for preventing, detecting, and responding to fraud, as well as monitoring and evaluation; (c) establish roles and responsibilities of those involved in fraud risk management activities, such as the antifraud entity and external parties responsible for fraud controls, and communicate the role of the Office of Inspector General (OIG) to investigate potential fraud; (d) create timelines for implementing fraud risk management activities, as appropriate, including monitoring and evaluations; (e) demonstrate links to the highest internal and external residual fraud risks outlined in the fraud risk profile; and (f) link antifraud efforts to other risk management activities, if any.
The Bureau developed and documented an antifraud strategy (the fraud risk assessment and the risk response plan) and communicated it to applicable employees. Bureau officials provided final versions of the antifraud strategy in October 2018 and stated that all stakeholders were provided with excerpts applicable to their area. The antifraud strategy outlines the beginning and end dates for fraud detection operations, and links to the highest residual fraud risks. The risk response includes links to other risk management activities such as a security layer that is designed, created, and maintained by the technical integration contractor security group in coordination with the Office of Information Security and Decennial Information Technology Division. According to the risk response plan, this group protects the fraud detection system and its associated systems from outside attacks such as hacks and distributed denial of service attacks. However, we found that the Bureau’s approach to managing fraud risk did not fully align with two leading practices in this component. First, until the Bureau defines its fraud risk tolerances, such as defining low-, medium-, or high-risk thresholds, it will not be able to effectively allocate resources to respond to residual fraud risks consistent with the Fraud Risk Framework’s leading practices. Second, the Bureau did not initially coordinate with the Department of Commerce (Commerce) OIG about its antifraud strategy, which is not consistent with the leading practices. Such lack of coordination could have precluded the OIG from determining if potentially fraudulent activities should be investigated. After discussing the results of our review with the Bureau, the Bureau contacted and met with the Commerce OIG in February 2019. Based on the Bureau’s notes from this meeting, the Bureau is on track to address the leading practice regarding coordination.
The framework states that to design and implement specific control activities to prevent and detect fraud, managers should (a) focus on fraud prevention over detection; (b) consider the benefits and costs of control activities to address identified residual risks; and (c) design and implement control activities, such as data analytics, to prevent and detect fraud. The 2020 Census antifraud control activities focus on detecting potentially fraudulent responses. The Bureau plans to use a combination of data analytics and follow-up to review response data before they are added to the Bureau’s overall Census counts. The Bureau’s efforts for the 2020 Census also focus on minimizing costs. Specifically, if the Bureau’s fraud detection can minimize the number of cases that require manual investigation or work by field operations staff to collect the information again, it can reduce the cost and workload to the Bureau. The framework states the antifraud strategy should also ensure that responses to identified instances of fraud are prompt and consistent. In addition, effective managers of fraud risks are to refer instances of potential fraud to the OIG or other appropriate parties, such as law-enforcement entities or the Department of Justice, for further investigation. The Bureau’s plan describes its process for scoring responses using its Fraud Detection Analytics Model and then sorting responses into a low-, medium-, or high-risk category. The plan also outlines risk responses that depend on the risk category. For example, medium-risk responses are reviewed internally and could be incorporated into the census count or sent for additional follow-up. However, the Bureau’s antifraud strategy does not call for instances of potential fraud to be referred to the Commerce OIG. Specifically, the Bureau’s fraud risk assessment and risk response plan do not mention the Commerce OIG.
Bureau officials stated that the Commerce OIG did not participate in the development of these documents. In February 2019, after we discussed the results of our review with the Bureau, the Bureau met with the Commerce OIG to discuss potential referrals. As a result, the Bureau agreed to develop and share with the Commerce OIG a plan that outlines a potential referral process by summer 2019. Managers who effectively manage fraud risks collaborate and communicate with stakeholders to share information on fraud schemes and the lessons learned from fraud control activities. The framework describes collaborative relationships as including other offices within the agency; federal, state, and local agencies; private-sector partners; law-enforcement entities; and entities responsible for control activities. In addition, managers should collaborate and communicate with the OIG to improve their understanding of fraud risks and align their efforts to address fraud. The Bureau collaborated internally with groups such as the Security Operations Center that maintain the security layer that protects Bureau systems and the nonresponse follow-up groups that visit households to collect information again. The Bureau also provided contractors with guidance by finalizing the antifraud strategy and incentives by entering into an agreement with the technical integrator contractor, which allows the Bureau to exercise an option to continue the contract for another year. However, the Bureau did not begin to collaborate and communicate with the Commerce OIG to improve its understanding of fraud risks and align efforts to address fraud until after we discussed the results of our review with the Bureau. Bureau officials viewed the primary purpose of the fraud detection system as a way to improve data reliability, according to interviews. As a result, in 2018, the Bureau changed the name of the operation from Fraud Detection to SRQA.
According to Bureau officials, the change better reflects the operation’s focus on detecting potential falsification in decennial census response data and referring suspected responses to a field resolution operation to collect the data again. Bureau officials initially stated that SRQA would not conduct investigations that lead to the kind of law enforcement activities traditionally associated with fraud detection. As mentioned above, the Bureau met with the Commerce OIG in February 2019 to discuss the potential for referrals and, according to the Bureau, initiate a process for doing so. However, officials did not discuss steps and a time frame for updating the antifraud strategy to include this process. Doing so will help to ensure that the strategy is current, complete, and conforms to leading practices.

Conclusions

Adequately addressing risks to the census is critical for ensuring a cost-effective and high-quality enumeration. The Bureau has taken important steps to address risks to the 2020 Census, but with less than a year until Census Day, the Bureau has not developed mitigation and contingency plans for all risks that require them. In addition, the Bureau does not have clear time frames for developing and obtaining management approval of mitigation and contingency plans, and some risks have gone without required plans for months and years. Moreover, the status of some plans is unclear and not all plans have received management approval. Some of the plans the Bureau has developed are missing key attributes we identified for helping to ensure the plans contain the information needed to manage risks. For example, none of the Bureau’s plans described how the Bureau will monitor the risk response, so the Bureau may not be able to track whether the plans are working as intended.
These issues have arisen in some instances because the Bureau’s decennial risk management plan does not require mitigation and contingency plans to have each of the seven key attributes we identified; in other instances, the issues have arisen because Bureau officials do not always hold risk owners accountable for fulfilling all their risk management responsibilities. Consistently documenting risk management activities would support management’s ability to more quickly make informed decisions in response to risks confronting the 2020 Census. It would also help protect the Bureau from losing institutional knowledge in the event risk owners change roles or leave the agency. The Bureau’s fraud risk strategy generally aligned with our Fraud Risk Framework, including developing response plans and collaborating internally to address risks. However, the Bureau has not yet determined the program’s fraud risk tolerance or outlined a plan for referring potential fraud to the Commerce OIG to investigate, but plans to do so later this year. Setting a tolerance would help the Bureau monitor risks, and referring potential fraud to the Commerce OIG would allow it to determine if further investigation is appropriate. In addition to taking these actions, updating the antifraud strategy to include the Bureau’s fraud risk tolerance and plan for OIG referral will help to ensure that the strategy is current, complete, and conforms to leading practices.

Recommendations for Executive Action

We are making the following seven recommendations to the Department of Commerce and the Census Bureau:

The Secretary of Commerce should ensure that the Director of the Census Bureau develops and obtains management approval of mitigation and contingency plans for all risks that require them. (Recommendation 1)

The Secretary of Commerce should ensure that the Director of the Census Bureau updates the Bureau’s decennial risk management plan to include clear time frames for developing and obtaining management approval of mitigation and contingency plans. (Recommendation 2)

The Secretary of Commerce should ensure that the Director of the Census Bureau updates the Bureau’s decennial risk management plan to require that portfolio and program risk registers include a clear indication of the status of mitigation plans. (Recommendation 3)

The Secretary of Commerce should ensure that the Director of the Census Bureau updates the Bureau’s decennial risk management plan to require that risk mitigation and contingency plans, including the risk register descriptions and separate plans, have the seven key attributes for helping to ensure they contain the information needed to manage risk. (Recommendation 4)

The Secretary of Commerce should ensure that the Director of the Census Bureau holds risk owners accountable for carrying out their risk management responsibilities. (Recommendation 5)

The Secretary of Commerce should ensure that the Director of the Census Bureau updates the Bureau’s antifraud strategy to include a fraud risk tolerance prior to beginning the 2020 Census and adjust as needed. (Recommendation 6)

The Secretary of Commerce should ensure that the Director of the Census Bureau updates the Bureau’s antifraud strategy to include the Bureau’s plans for referring instances of potential fraud to the Department of Commerce Office of Inspector General for further investigation. (Recommendation 7)

Agency Comments

We provided a draft of this report to the Secretary of Commerce. In its written comments, reproduced in appendix V, the Department of Commerce agreed with our findings and recommendations and said it would develop an action plan to address them. The Census Bureau also provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the Secretary of Commerce, the Director of the U.S. Census Bureau, and the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Robert Goldenkoff at (202) 512-2757 or goldenkoffr@gao.gov or Rebecca Shea at (202) 512-6722 or shear@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

The objectives of this study were to examine (1) what risks to the 2020 Census the Census Bureau (Bureau) has identified, (2) the risks for which the Bureau has mitigation and contingency plans, (3) the extent to which the Bureau’s mitigation and contingency plans included information needed to manage risk, and (4) the extent to which the Bureau’s approach to managing fraud risks to the 2020 Census aligns with leading practices outlined in our Fraud Risk Framework. To answer the first three objectives, we reviewed Bureau documentation regarding its approach to managing risks facing the 2020 Census, including its decennial risk management plan, operational plan, governance management plan, Risk Review Board meeting minutes and agendas, and guidance and training documents. In addition, we interviewed Bureau officials responsible for overseeing risk management for the 2020 Census. To describe what risks to the 2020 Census the Bureau has identified and the risks for which the Bureau has mitigation and contingency plans, we also reviewed the Bureau’s portfolio- and program-level decennial risk registers.
To assess the extent to which the Bureau’s mitigation and contingency plans included information needed to manage risk, we selected a nongeneralizable sample of six risks from the Bureau’s risk registers based on factors such as likelihood of occurrence and potential impact (see table 3). To select these risks, we began with the 12 risks identified by the Bureau in its 2020 Census Operational Plan as the “major concerns that could affect the design or successful implementation of the 2020 Census.” Next, we sorted the risks by numerical priority rating as of June 2018, a Bureau-assigned figure calculated by multiplying numerical scores for likelihood of occurrence and potential impact (see figure 3). We then selected the six risks with the highest priority ratings. For each selected risk, we reviewed relevant Bureau documentation—including risk mitigation and contingency plans—and we conducted semistructured interviews with the Bureau officials responsible for managing the risk. In addition, drawing principally from our Enterprise Risk Management (ERM) framework as well as secondary sources, we identified seven key attributes for risk mitigation and contingency plans to help ensure they contain the information needed to manage risks (see figure 4). Specifically, we reviewed our ERM framework and other relevant prior work on risk management, as well as commonly used risk management publications from sources including the Office of Management and Budget, the Project Management Institute, and the Chief Financial Officers Council and Performance Improvement Council. We analyzed these publications to identify portions relevant to risk mitigation and contingency planning. Next, we synthesized the information and derived attributes that appeared most important for effective risk mitigation and contingency plans. We assessed the attributes against the essential elements laid out in our ERM framework and found that each attribute aligned with one or more of the elements. 
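The priority-rating step described above (multiplying numerical likelihood and impact scores, then sorting and selecting the highest-rated risks) can be sketched as follows. The risk names and scores are invented for illustration only; they are not the Bureau's actual risk register entries or ratings.

```python
# Hypothetical illustration of priority rating = likelihood x impact,
# followed by sorting to identify the highest-priority risks.

risks = [
    {"name": "Cybersecurity incident", "likelihood": 4, "impact": 5},
    {"name": "Late IT system delivery", "likelihood": 3, "impact": 5},
    {"name": "Low self-response rate", "likelihood": 3, "impact": 4},
]

for risk in risks:
    risk["priority"] = risk["likelihood"] * risk["impact"]

ranked = sorted(risks, key=lambda r: r["priority"], reverse=True)
print([r["name"] for r in ranked])
```

Under this scheme the example risks receive priority ratings of 20, 15, and 12, so the cybersecurity risk ranks first, mirroring the Bureau's approach of selecting the risks with the highest priority ratings for detailed review.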
Six of the seven attributes—all but clearly defined trigger events—are applicable to mitigation plans. Each of the seven attributes is applicable to contingency plans, although two attributes—activity start and completion dates and activity implementation status—are only applicable if the risk has been realized. We assessed the risk mitigation and contingency plans entered in the Bureau’s risk registers as of December 2018, as well as the separate mitigation and contingency plans for the six selected risks, against the seven key attributes. To evaluate the extent to which the Bureau’s approach to managing fraud risks to the 2020 Census aligns with leading practices outlined in our Fraud Risk Framework, we reviewed Bureau documentation related to the 2020 Census antifraud strategy. This strategy includes a fraud risk assessment that identifies and evaluates scenarios in which fraudulent activity could impact the 2020 Census results. It also includes a concept of operations that uses the fraud risk assessment to develop risk responses and its fraud detection systems. In addition, we interviewed Bureau officials responsible for antifraud efforts for the 2020 Census. We evaluated the information gathered based on the commit, assess, and design and implement components of our Fraud Risk Framework. Our assessment was limited to a review of the presence or absence of leading practices from the framework, not whether they were sufficient. We also did not review the leading practices for the “evaluate and adapt” component of the framework. This component focuses on evaluating outcomes using a risk-based approach and then adapting activities established in the other components to improve fraud risk management.
Because the census is not scheduled to start until 2020, the Bureau will not be able to implement leading practices such as: monitoring and evaluating the effectiveness of preventive activities; measuring outcomes, in addition to outputs, of fraud risk management; or using the results of monitoring and evaluations to improve the design and implementation of fraud risk management activities. We conducted this performance audit from May 2018 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: U.S. Census Bureau Operations Supporting the 2020 Census

Purpose Define and implement program management policies, processes, and the control functions for planning and implementing the 2020 Census to ensure an efficient and well-managed program. Manage the delivery of an Information Technology (IT) “System of Systems” to meet 2020 Census business and capability requirements. Ensure all 2020 Census operations and systems adhere to laws, policies, and regulations that ensure appropriate systems and data security, and protect respondent and employee privacy and confidentiality. Identify and finalize content and design of questionnaires and other associated nonquestionnaire materials. Ensure consistency across data collection modes and operations. Provide optimal design and content of the questionnaires to encourage high response rates. Assess and support language needs of non-English speaking populations. Determine the number of non-English languages and level of support for the 2020 Census.
Optimize the non-English content of questionnaires and associated nonquestionnaire materials across data collection modes and operations. Ensure cultural relevancy and meaningful translation of 2020 Census questionnaires and associated nonquestionnaire materials. Provide the geographic foundation to support 2020 Census data collection and tabulation activities within the Master Address File/Topologically Integrated Geographic Encoding and Referencing System. This system serves as the national repository for all spatial, geographic, and residential address data needed for census and survey data collection, data tabulation, data dissemination, geocoding services, and map production. Provide an opportunity for tribal, federal, state, and local governments to review and improve the address lists and maps used to conduct the 2020 Census as required by Public Law 103-430. Deliver a complete and accurate address list and spatial database for enumeration and determining the type and address characteristics for each living quarter. Print and distribute internet invitation letters, reminder cards or letters or both, questionnaire mailing packages, and materials for other special operations, as required. Other materials required to support field operations are handled in the Decennial Logistics Management operation. Capture and convert data from the 2020 Census paper questionnaires, including mail receipt, document preparation, scanning, optical character and mark recognition, data delivery, checkout, and form destruction. Communicate the importance of participating in the 2020 Census to the entire population of the 50 states, the District of Columbia, and Puerto Rico to support field recruitment efforts, engage and motivate people to self-respond (preferably via the internet), raise and keep awareness high throughout the entire 2020 Census to encourage response, and effectively support dissemination of Census data to stakeholders and the public. 
Internet Self-Response Maximize online response to the 2020 Census via contact strategies and improved access for respondents. Collect response data via the internet to reduce paper and nonresponse follow-up. Make it easy for people to respond anytime and anywhere to increase self-response rates by providing response options that do not require a unique Census ID. Maximize real-time matching of non-ID respondent addresses to the census living quarters address inventory, assigning nonmatching addresses to census blocks. Update the address and feature data and enumerate respondents in person. Designated to occur in areas where the initial visit requires enumerating while updating the address frame, particularly in remote geographic areas that have unique challenges associated with accessibility. Update the address and feature data and leave a choice questionnaire package at every housing unit identified to allow the household to self-respond. Designed to occur in areas where the majority of housing units do not have a city-style address to receive mail. Enumerate people living or staying in group quarters and provide an opportunity for people experiencing homelessness and receiving service at service-based locations, such as soup kitchens, to be counted in the census. Enumerate individuals in occupied units at transitory locations who do not have a usual home elsewhere, such as recreational vehicle parks, campgrounds, racetracks, circuses, carnivals, marinas, hotels, and motels. Provide questionnaire assistance for respondents by answering questions about specific items on the census form or other frequently asked questions about the 2020 Census, and provide an option for respondents to complete a census interview over the telephone. Also provide outbound calling support of nonresponse follow-up reinterview and coverage improvement.
Determine housing unit status for nonresponding addresses that do not self-respond to the 2020 Census and enumerate households that are determined to have a housing unit status of occupied. Create and distribute the initial 2020 Census enumeration universe, assign the specific enumeration strategy for each living quarter based on case status and associated paradata, create and distribute workload files required for enumeration operations, track case enumeration status, run postdata collection processing actions in preparation for producing the final 2020 Census results, and check for fraudulent returns. Obtain counts by home state of U.S. military and federal civilian employees stationed or deployed overseas and their dependents living with them. Prepare and deliver the 2020 Census population counts to the President of the United States for congressional apportionment, tabulate and disseminate 2020 Census data products for use by the states for redistricting, and tabulate and disseminate 2020 Census data for use by the public. Provide to each state the legally required Public Law 94-171 redistricting data tabulations by the mandated deadline of 1 year from Census Day (April 1, 2021). Enhance the accuracy of the 2020 Census through remediating potential gaps in coverage by implementing an efficient and equitable process to identify and correct missing or geographically misallocated large group quarters and their population, and positioning remaining count issues for a smooth transition to the Count Question Resolution Operation. Provide a mechanism for governmental units to challenge their official 2020 Census results. Coordinate storage of the materials and data and provide 2020 Census records deemed permanent, including files containing individual responses, to the National Archives and Records Administration and to the National Processing Center to use as source materials to conduct the Age Search Service. Also store data to cover in-house needs.
Island Areas Censuses Enumerate all residents of American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands; process and tabulate the collected data; and disseminate data products to the public. Develop the survey design and sample for the Post-Enumeration Survey of the 2020 Census and produce estimates of census coverage based on the Post-Enumeration Survey. Identify matches, nonmatches, and discrepancies between the 2020 Census and the Post-Enumeration Survey for both housing units and people in the same areas. Both computer and clerical components of matching are conducted. Collect person and housing unit information (independent from the 2020 Census operations) for the sample of housing units in the Post-Enumeration Survey to help understand census coverage and to detect erroneous enumerations. Document how well the 2020 Census was conducted, and analyze, interpret, and synthesize the effectiveness of census components and their impact on data quality or coverage or both. Measure the success of critical 2020 Census operations. Formulate and execute an experimentation program to support early planning and inform the transition and design of the 2030 Census and produce an independent assessment of population and housing unit coverage. Support 2020 Census field operations for decennial staff (i.e., headquarters, PDC, Regional Census Center, Area Census Office, Island Areas Censuses, remote workers, and listers/enumerators.) Provide the administrative infrastructure for data collection operations covering the 50 states, the District of Columbia, and Puerto Rico. Coordinate space acquisition and lease management for the regional census centers, area census offices, and the Puerto Rico area office; and provide logistics management support services (e.g., kit assembly, supplies to field staff). 
Provide the IT-related infrastructure support to the 2020 Census, including enterprise systems and applications, 2020 Census-specific applications, field IT infrastructure, mobile computing, and cloud computing.

Appendix III: 2020 Census Portfolio Risk Mitigation and Contingency Plan Templates

Appendix IV: Leading Practices from GAO’s Fraud Risk Framework

For the 2020 Census, the Census Bureau (Bureau) is trying to increase participation and reduce costs by offering more self-response options to households. This includes self-responses received via internet, phone, or mail. In 2018, the Self-Response Quality Assurance group finalized its antifraud strategy that includes a fraud risk assessment and risk response plan that focuses specifically on these responses. We developed a data collection instrument to structure our review of the antifraud strategy as it related to the commit, assess, and design and implement components of our Fraud Risk Framework. Our assessment was limited to a review of the presence or absence of leading practices from the framework, not whether they were sufficient. We also did not assess the Bureau’s approach against leading practices in the “evaluate and adapt” component of the framework because the Bureau will not be able to implement practices in this component until the 2020 Census begins. The following table summarizes our comparison of the Bureau’s antifraud strategy to leading practices in the fraud risk framework.
Appendix V: Comments from the Department of Commerce

Appendix VI: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Lisa Pearson and Philip Reiff (Assistant Directors), Emmy Rhine Paule and Ariel Vega (Analysts-in-Charge), Carole Cimitile, Ann Czapiewski, Robert Gebhart, Maria McMullen, Ty Mitchell, James Murphy, Carl Ramirez, Kayla Robinson, Kate Sharkey, Andrea Starosciak, Michael Steinberg, Umesh Thakkar, and Jon Ticehurst made significant contributions to this report.
Why GAO Did This Study

With less than 1 year until Census Day, many risks remain. For example, the Bureau has had challenges developing critical information technology systems, and new innovations—such as the ability to respond via the internet—have raised questions about potential security and fraud risks. Fundamental to risk management is the development of risk mitigation and contingency plans to reduce the likelihood of risks and their impacts, should they occur. GAO was asked to review the Bureau's management of risks to the 2020 Census. This report examines (1) what risks the Bureau has identified, (2) the risks for which the Bureau has mitigation and contingency plans, (3) the extent to which the plans included information needed to manage risk, and (4) the extent to which the Bureau's fraud risk approach aligns with leading practices in GAO's Fraud Risk Framework. GAO interviewed officials, assessed selected mitigation and contingency plans against key attributes, and assessed the Bureau's approach to managing fraud risk against GAO's Fraud Risk Framework.

What GAO Found

As of December 2018, the Census Bureau (Bureau) had identified 360 active risks to the 2020 Census. Of these, 242 required a mitigation plan and 232 had one; 146 required a contingency plan and 102 had one (see table). Mitigation plans detail how an agency will reduce the likelihood of a risk event and its impacts, if it occurs. Contingency plans identify how an agency will reduce or recover from the impact of a risk after it has been realized. Bureau guidance states that these plans should be developed as soon as possible after a risk is added to the risk register, but it does not establish clear time frames for doing so. Consequently, some risks may go without required plans for extended periods. GAO reviewed the mitigation and contingency plans in detail for six risks which the Bureau identified as among the major concerns that could affect the 2020 Census.
These included cybersecurity incidents and integration of the 52 systems and 35 operations supporting the census. GAO found that the plans did not consistently include key information needed to manage the risk. For example, three of the mitigation plans and five of the contingency plans did not include all key activities. Among these was the Bureau's cybersecurity mitigation plan. During an August 2018 public meeting, the Bureau's Chief Information Officer discussed key strategies for mitigating cybersecurity risks to the census—such as reliance on other federal agencies to help resolve threats—not all of which were included in the mitigation plan. GAO found that gaps stemmed either from requirements missing from the Bureau's decennial risk management plan or from risk owners not fulfilling all of their risk management responsibilities. Bureau officials said that risk owners are aware of these responsibilities but do not always fulfill them given competing demands. Bureau officials also said that they are managing risks to the census, even if not always reflected in their mitigation and contingency plans. However, if such actions are reflected in disparate documents or are not documented at all, then decision makers are left without an integrated and comprehensive picture of how the Bureau is managing risks to the census. The Bureau has designed an approach for managing fraud risk to the 2020 Census that generally aligns with leading practices in the commit, assess, and design and implement components of GAO's Fraud Risk Framework. However, the Bureau has not yet determined the program's fraud risk tolerance or outlined plans for referring potential fraud to the Department of Commerce Office of Inspector General (OIG) to investigate. Bureau officials described plans to take these actions later this year, but not for updating the antifraud strategy.
Updating this strategy to include the Bureau's fraud risk tolerance and OIG referral plan will help ensure the strategy is current and complete and conforms to leading practices.
What GAO Recommends
GAO is making seven recommendations, including that the Bureau set clear time frames for developing mitigation and contingency plans, require that mitigation and contingency plans include all key attributes, hold risk owners accountable for carrying out their risk management responsibilities, and update its antifraud strategy to include a fraud risk tolerance and OIG referral plan. The Department of Commerce agreed with GAO's recommendations.
Background
Medicaid Section 1115 Demonstrations
A total of 43 states operated at least part of their Medicaid programs under demonstrations, as of November 2018. State demonstrations can vary in size and scope, and many are comprehensive in nature, affecting multiple aspects of a state’s Medicaid program. Nationally, federal spending under demonstrations represented over 30 percent of all federal Medicaid spending in fiscal year 2016. (See app. II.) Demonstrations are typically approved by CMS for an initial 5-year period, but some states have operated portions of their Medicaid programs under a demonstration for decades. This can be achieved by a state requesting approval by CMS for one or more 3- to 5-year extensions of an existing demonstration (referred to as an extension). States often make changes to their demonstrations, either through the extension process or by requesting to amend a demonstration during the approval period (referred to as an amendment). From January 2017 through May 2018, CMS approved applications for a new demonstration, extension, amendment, or a combination of these in 23 states. (See fig. 1.) Each demonstration is governed by special terms and conditions (STCs), which reflect the agreement reached between CMS and the state, and describe the parameters of the authority granted to the state. For example, the STCs may define for what populations and services funds can be spent under the demonstration, as well as specify various state reporting requirements. The STCs also include a spending limit for the demonstration that is meant to ensure the demonstration is budget neutral to the federal government; that is, the federal government should spend no more under a state’s demonstration than it would have spent without the demonstration.
Transparency Requirements for Demonstration Applications
Requirements for new demonstrations and extensions.
As required under PPACA, HHS issued regulations in 2012 to address transparency in the approval of applications for new demonstrations and extensions. The regulations include requirements for states to seek public input on their proposals prior to submitting an application to CMS, requirements for information states must include in their public notices and applications, and procedures that CMS would follow upon receiving the application. CMS reviews the submitted application to check for compliance with these regulations, before seeking additional public input through a 30-day comment period at the federal level. (See fig. 2.) The regulations also provide CMS discretion to engage in additional transparency activities on a case-by-case basis.
Requirements for amendments. The 2012 regulations do not apply to states seeking to amend existing demonstrations. Instead, the transparency requirements for amendments are set by guidance HHS issued in 1994 and in the individual STCs that govern each demonstration. The requirements from the guidance and STCs include, for example, that the state seek public input prior to submitting its application and provide in its application an explanation of its process for notifying the public and a detailed description of what is being amended, including the impact on beneficiaries.
Transparency Requirements Post-Approval
CMS’s regulations also include monitoring and evaluation requirements to ensure that the outcomes of demonstrations are transparent.
Monitoring. States must perform periodic reviews of the implementation of their demonstrations, and the STCs typically require states to report those outcomes to CMS periodically. The regulations also require states to conduct a public forum within 6 months after the implementation date of the demonstration, and annually thereafter, to solicit public comments on the progress of the demonstration project and summarize issues raised in monitoring reports submitted to CMS.
The regulations require that states submit the annual monitoring reports to CMS.
Evaluation. States are required to conduct evaluations to assess whether their demonstrations are achieving the state’s goals and objectives. After a demonstration is approved, states are required to submit an evaluation design to CMS for review and approval. The evaluation design must discuss the hypotheses that will be tested, the data that will be used, and other items outlined in the STCs. In the event that a state wishes to extend its demonstration, the state’s extension application must include, among other things, a report presenting the evaluation’s findings to date, referred to as an interim evaluation report. States are also required to submit final evaluation reports at the demonstration’s end. All evaluation designs and reports are to be made public.
CMS Has Developed Procedures for New Demonstrations and Extensions to Improve Transparency of Approvals, but Weaknesses Remain
CMS Has Developed Procedures to Improve the Transparency of Demonstration Approvals
Transparency Requirements for New Demonstrations and Extensions
The transparency requirements for new Medicaid section 1115 demonstrations and extensions of existing demonstrations are established in federal regulations at 42 C.F.R. part 431, subpart G.
We found that CMS has developed procedures for assessing states’ applications for new demonstrations and extensions against the transparency requirements established in 2012 (see sidebar). Specifically, CMS’s procedures involve reviewing incoming applications for new demonstrations or extensions against detailed checklists the agency designed to align with transparency requirements in the regulations. (CMS refers to these as completeness checks.)
For example, the checklist for new demonstrations includes checks for whether the application included a description of the demonstration; any proposed changes to the benefits, delivery system, or eligibility requirements; information on the public hearing(s) and public comment process the state conducted; and a summary of the issues raised in the state public comment process. (See fig. 3.) We found that CMS completed checklist reviews for each of the 11 applications for new demonstrations or extensions that CMS approved from January 2017 through May 2018. CMS has also developed and implemented procedures for seeking public input at the federal level and making that input publicly available. This includes CMS sending email notifications to individuals who have registered on the agency’s website when demonstration applications are open for public comment; posting the application on the website, where the public can post comments during the 30-day comment period; and maintaining the public comments on the website indefinitely, according to CMS officials. We found that CMS conducted a federal comment period for all 11 of the new and extension applications in our review period. In addition to storing the federal public comments, CMS’s website contains a record of key decisions and documents for each demonstration (referred to as the administrative record). The administrative record includes states’ applications, as well as CMS’s approvals, denials, and decisions about the completeness of applications—a requirement under the 2012 regulations. CMS officials told us that, to support transparency, they include additional documents as standard practice, though these are not required to be posted, such as a fact sheet on the demonstration and other official communication between the agency and the state.
CMS first launched this section of its website in December 2011 with an aim to improve access to Medicaid program information, including information on demonstrations, and redesigned the website in 2013 to improve functionality. The administrative record provides a history, dating as far back as 2011, of what a state has tested, how the approach has evolved over time, and what has been learned from the approach.
CMS’s Policies and Procedures for Ensuring Transparency of Demonstration Approvals Have Weaknesses
We identified several areas of weakness in CMS’s policies or procedures for ensuring transparency in approvals of new demonstrations and extensions of existing demonstrations. These weaknesses related to the transparency of major changes made to pending applications, the transparency of changes to approved spending limits, and inconsistency in CMS’s review of applications for compliance with transparency requirements for new demonstrations and extensions.
Transparency of Major Changes to Pending Applications
CMS did not apply a consistent approach to ensuring transparency in two states that made major changes to their demonstration applications mid-review. Indiana and Kentucky submitted changes to pending applications (the first for an extension, the latter for a new demonstration) that had substantial potential effects for some beneficiaries. Indiana’s changes included adding new eligibility requirements for some beneficiaries, and Kentucky’s changes included accelerating the effective dates of new requirements to maintain eligibility (see sidebar). CMS did not require either state to solicit public input, though both states opted to hold a public comment period on the proposed changes concurrent with CMS’s review. Further, CMS reviewed Indiana’s proposed changes against limited transparency requirements but did not do so for Kentucky.
Indiana submitted a final version of its application summarizing public input and the state’s response, while Kentucky did not. Thus, the extent to which these comments were considered at the state and federal levels was not transparent to the public. Figure 4 shows a timeline of the events surrounding Indiana’s and Kentucky’s requests to make changes to their pending demonstration applications.
Kentucky: In July 2017, Kentucky submitted changes to its pending application for a new demonstration, including:
replacing a provision for a year-long phase-in of a proposed 20-hour per week work and community engagement requirement for beneficiaries with a 3-month phase-in of the requirement; and
adding a provision under which beneficiaries would be disenrolled for 6 months for failing to timely report changes in income or other circumstances affecting eligibility.
Transparency of Changes to Approved Spending Limits
CMS approved a significant increase in the spending limit for a portion of Florida’s demonstration—which appeared to reflect a change in the agency’s position on the allowable use of the funds—without making transparent the basis for this decision. Specifically, CMS increased the spending limit for a pool of funds for payments to offset providers’ uncompensated care costs by close to $1 billion in 2017 after having reduced the limit 2 years earlier. In its approval letter, CMS provided limited information on the basis for this change. CMS stated that the limit was based on the state’s most recent data on uncompensated care costs, but did not disclose a significant change in its methodology for setting these limits. In unpublished correspondence to Congress, CMS indicated that the calculation of the spending limit was broadly consistent with previous policy with one significant change.
Specifically, the letter indicated that whether the state had opted to expand Medicaid coverage to low-income, childless adults as provided for under PPACA would no longer factor into the limit, thus allowing CMS to include uncompensated care costs for this population in setting the limit. This change led to increasing the state’s spending limit to $1.5 billion annually. (See text box.) Moreover, CMS noted plans to apply this change across all states going forward. CMS officials, however, did not indicate that they had publicly communicated this policy change to all states. In past reports, we have recommended that HHS make public the basis for demonstration approvals, including the basis for elements used to set spending limits, and in 2008, we raised the issue as a matter for Congress to consider. CMS has taken a number of steps in the last several years to update and make public its policies for setting spending limits, but has not yet taken action to make public the basis of spending limits.
CMS Decision to Increase Spending Limit for a Funding Pool in Florida’s Medicaid Section 1115 Demonstration
In letters to Florida, CMS wrote:
April 14, 2015: “…over time, CMS has had a number of concerns about the LIP, including its lack of transparency, encouragement toward overreliance on supplemental payments, and distribution of funds based on providers’ access to local revenue instead of service to Medicaid patients. Last year, CMS made clear that LIP would not continue in its current form….We will approach review of a LIP proposal from Florida based on several key principles. First, coverage rather than uncompensated care pools is the best way to secure affordable access to health care for low-income individuals, and uncompensated care pool funding should not pay for costs that would be covered in a Medicaid expansion.…”
October 15, 2015: “…Pursuant to our June 23, 2015 agreement in principle, establishing the size, duration, and distribution methodology for the Low Income Pool (LIP)…The total computable dollar limit in demonstration year 10 (2015-2016) will be $1 billion. In demonstration year 11 (2016-2017) the total computable dollar limit will be $607,825,452…”
August 3, 2017: “For the extension, CMS has established an amount for the low-income pool’s (LIP) uncompensated care pool to be approximately $1.5 billion annually, based on the most recent available data on hospitals’ charity care costs.”
Inconsistency in CMS’s Review of Applications for Compliance with Transparency Requirements
Finally, we observed some inconsistencies in CMS’s reviews of states’ applications for their compliance with the transparency requirements for new demonstrations and extensions.
Expected changes in enrollment were not always included in state public notices. In two of the four applications for new demonstrations and extensions for which we conducted in-depth reviews (Florida’s extension and Washington’s new demonstration), estimates for the expected increase or decrease in enrollment were not included in the state’s public notice documents as required. CMS officials told us that they are revising procedures to resolve such inconsistencies, including making additions to written standard operating procedures.
Evaluation information was not always included in state applications. Although states seeking extensions are required to submit an interim evaluation report, Florida only included a statement in its application that it had recently executed an evaluation contract and had no findings to report.
According to CMS, Florida’s evaluation design was not approved until weeks before the extension application was due. Despite not having information on whether Florida’s demonstration was meeting its goals, CMS officials considered the state’s application complete, stating that Florida had met the intent of the regulation by providing its findings to date. In 2018, we reported that there were limitations in state evaluations of demonstrations, in part, due to how CMS sets requirements for evaluations, and we made a recommendation to improve CMS’s procedures. In line with our recommendation, CMS has since developed an enhanced set of STCs that specify when evaluation reports are due, and reported in November 2018 that it is in the process of developing protocols to ensure that these requirements are consistently included in the STCs.
CMS Applies Limited Transparency Requirements in Approving States’ Amendments to Existing Demonstrations
Transparency Requirements for Amendments
Applications for amendments to Medicaid section 1115 demonstrations are subject to requirements for seeking public input outlined in guidance the Department of Health and Human Services (HHS) issued in 1994 and those included in the special terms and conditions governing each demonstration.
CMS applies limited transparency requirements to states’ applications to amend existing demonstrations, despite the fact that states may propose significant changes to demonstrations through amendments (see sidebar). CMS does not place limits on what changes can be made through amendments. From January 2017 through May 2018, CMS approved 21 amendments in 17 states, and we found that at least 17 amendment applications were pending CMS approval as of January 2019. These 17 states made a wide range of changes to their demonstrations through amendments.
For example, one state amended its demonstration to cover dental services for adults with disabilities, while other amendments included such changes as requiring beneficiaries to work or participate in community engagement activities as a condition of maintaining Medicaid eligibility, as was done through amendments in Arkansas and New Hampshire during the period we reviewed. As it does with applications for new demonstrations and extensions, CMS reviews amendment applications by using a checklist, conducting a federal public comment period, and posting state demonstration documentation on the CMS website. However, the transparency requirements for amendments applied during the checklist review are more limited than the requirements for new demonstrations and extensions. (See fig. 5.) The transparency requirements for amendment applications are more limited than those for new demonstrations and extensions in the following key areas, potentially limiting CMS’s ability to ensure public transparency for the approvals of amendments. No requirement to hold a state public comment process or provide CMS a summary of public input received. For amendments, states have a range of options for seeking public input and, unlike for new and extension applications, states are not required to submit a summary of the public input received on their applications and how they responded. Instead, in amendment applications, states are only required to describe the process the state used to solicit public input. Among the three amendment approvals for which we conducted an in-depth review, California did not hold a formal public comment period and did not provide CMS information on any public input it received, neither of which is required under the transparency requirements for amendments. No requirement to include expected changes in enrollment and costs. 
In contrast with requirements for new demonstrations and extensions, CMS does not require states to include in amendment applications the expected increase or decrease in enrollment, and the amendment applications we reviewed included limited or no information on changes to enrollment. (See text box.) CMS also does not require information on expected changes in costs for all amendments, and we found variation in the information included in amendment applications, including limited or no information on costs.
No minimum requirements for information to be included in the public notice. Unlike the transparency requirements for new demonstrations and extensions, there are no requirements specifying what information states must include in their public notices for amendments. For example, Arkansas did not include in its public notice information on the changes to enrollment estimated from any of the amendment provisions.
In addition to the differences in the transparency requirements for amendment applications, we identified inconsistencies in how CMS applied the transparency requirements for amendment applications across states, particularly the requirements related to describing changes to the demonstration evaluations. For amendments, as for extensions, states are required to describe how the demonstration’s evaluation will be revised to incorporate amendment provisions. The following are examples of the inconsistencies:
CMS determined that Massachusetts’s amendment application, which proposed to waive non-emergency medical transport, eliminate provisional eligibility for most populations, and cover former foster care youth, was incomplete (that is, not in compliance with the transparency requirements), partially due to the state not submitting a revised evaluation design plan. In contrast, CMS determined that Arkansas’s application, which did not include a revised evaluation design plan, was complete.
Arkansas noted in its application that the state planned to revise its evaluation to test two additional hypotheses. However, the added hypotheses did not address, for example, the waiver for retroactive eligibility proposed in the application. Among the 18 other demonstration amendment applications CMS approved during our review period, there was variation in the information the states included about the changes to the demonstration’s evaluation hypotheses or design. For example: Iowa submitted an amendment application, which CMS determined to be in compliance with transparency requirements, to waive retroactive eligibility for all beneficiaries, and said that it was not changing the evaluation design based on the amendment provisions. In at least two other states’ amendments—Florida and Utah—the applications, which CMS determined to be in compliance with transparency requirements, did not include any information on the changes to the evaluation due to the amendment or indicated that the state would be making changes at a later date. The potential effects of policy changes states make through amendments can be comparable to effects of new demonstrations and extensions. CMS has considered taking further steps to ensure transparency for amendments, but has not done so. Specifically, in both its response to comments in the 2012 final rule and a subsequent letter to state Medicaid directors in 2012, CMS indicated that the agency intended to evaluate the types of amendments submitted by states and issue further guidance on how the notice and comment provisions would be applied to amendments that have a significant impact. However, CMS did not issue such guidance and officials told us that they had no plans to do so. CMS officials told us that including standard requirements in demonstration STCs for submitting amendment applications helps improve the transparency of amendments. 
However, the standard requirements that CMS has included do not ensure that states provide information to the public or CMS on the effect of an amendment on enrollment and costs, key pieces of information for amendments that have had, and may continue to have, a significant impact. According to federal standards for internal controls related to risk assessment, federal agencies should identify and manage risks related to achieving agency objectives. Without a policy with robust transparency requirements for amendment applications with significant impacts, there is the potential that states and CMS will fail to receive meaningful public input on the amendment and thereby lack complete understanding of the impact. As a result, CMS may not be positioned to mitigate any potential risks in the demonstrations being amended or when other states request to test similar policies in the future.
CMS Has Used Public Input in Making Demonstration Approval Decisions, but the Extent to Which Input Influenced Monitoring and Evaluations Was Not Always Clear
CMS Reviewed Comments Received through the State and Federal Public Input Process for its Approval Decisions, and in Some Cases Approved Changes Consistent with the Comments
CMS reviewed state descriptions of issues raised during the state public input process and the state’s response as part of its application review. Applications for six of the seven approvals for which we conducted in-depth reviews summarized themes from the comments that were received and included the states’ responses to these comments. State responses included laying out changes the state made to the proposal in response to the comments, clarifying certain aspects of the proposed demonstration, or providing justification for not making a change. However, the level of detail in state summaries of their responses to these comments varied considerably. For example:
Washington’s application for a new demonstration.
Washington provided an extensive summary of the comments received, categorized by themes, along with the state’s responses to each of them. One commenter suggested a 1-year implementation period to ensure that sufficient planning and preparations were undertaken before the new demonstration officially went into effect. The state agreed that this was “essential to assure operational readiness and critical success of this demonstration,” and revised its proposal to include a 9-month implementation period.
Florida’s application to extend its demonstration. In contrast to Washington, Florida’s application to extend its managed care program provided a long list of comments, and nearly all were addressed with a standard response from the state that the comments were taken into consideration but that no changes to the existing STCs were being requested. These included concerns about access and choice under current pharmacy networks, and other access issues such as difficulties in obtaining referrals to specialists. Florida officials told us that they addressed stakeholder concerns through the state public comment process, which includes public forums, and that they are not necessarily required to provide any additional explanation in the state’s application to CMS. Florida officials also stated that some demonstration elements stem from state legislation, which limits their ability to make changes in response to comments. According to CMS officials, historically, the agency has not requested the full set of comments that are submitted to the states. None of the states that we reviewed attached the actual comments that were received to the application—only summaries—though some posted them on their website or had them available upon request, according to state officials.
CMS officials told us the agency has not consistently requested that states provide the actual comments received; however, in the last year and a half the agency has been making more of an effort to request the full set of comments instead of solely relying on the summaries provided in the applications. Officials said they anticipate that this will be the agency’s standard practice going forward. In making demonstration decisions, CMS reviews and summarizes all input received during the federal comment period. CMS created a summary of comments received for all seven of the demonstrations we reviewed. Officials said that these summaries are used to brief CMS leadership as part of the decision-making process. We found that the level of detail in the summaries we reviewed varied, ranging from bulleted lists identifying and detailing common themes in support, opposition, or both, to a few brief sentences covering all comments. This variation often reflected the differences in the number of comments received and the significance of the concerns raised. For example: CMS received about 100 comments on Washington’s application during the federal comment period that were predominantly supportive of the new demonstration, and CMS’s summary was a brief overview. In contrast, CMS received thousands of comments on Kentucky’s application for each of the three federal comment periods held for that new demonstration, with many opposed to or concerned with certain aspects of the application. CMS’s summaries of comments received on the Kentucky application provided an overview of the issues raised, along with counts of how many fell within different themes and how many were in support, opposition, or unrelated to what was being proposed. 
CMS officials explained that there are unique circumstances surrounding each demonstration—a comment period with many concerns raised or conflicting viewpoints will necessitate a longer and more detailed summary than one that has broad support and few, if any, areas of disagreement. We found instances where CMS approved changes to certain aspects of the demonstrations that were in line with concerns raised by the comments. Among the seven demonstrations we reviewed in-depth, CMS received comments for four demonstrations that included concerns about the state’s proposal: Arkansas, Indiana, Kentucky, and Massachusetts. For Arkansas and Kentucky, CMS either approved more limited changes than what the state initially proposed or required that certain beneficiary protections be in place. Arkansas: In its amendment application, Arkansas requested a waiver of the requirement to provide retroactive eligibility, among other things. Commenters were concerned that the state’s proposal to eliminate retroactive eligibility would result in gaps in coverage, adverse health outcomes, and medical debt for members. In CMS’s approval, the agency acknowledged these concerns and allowed the state to reduce the period for retroactive eligibility from 90 days to 30 days but not eliminate it completely. (See fig. 6.) Kentucky: In Kentucky, some commenters were concerned about the state’s proposal to implement a work and community engagement requirement as a condition of Medicaid eligibility, noting that individuals need to be healthy to work or look for a job. CMS said in its January 2018 approval that Kentucky was exempting medically frail individuals from this requirement, but CMS would also be requiring that the state add certain protections for vulnerable individuals, including maintaining a system that identifies, validates, and provides reasonable accommodations. 
We also found there were instances where CMS approved certain aspects of the demonstrations despite concerns raised by the comments. CMS’s rationale for those decisions varied across demonstrations. For example, CMS noted in one instance that sufficient controls were planned to address the concerns raised, and in another instance noted that the potential benefits of the demonstrations outweighed the risks. The following are examples of when CMS approved aspects of states’ demonstrations without changes. Arkansas: In Arkansas, some commenters were opposed to the enforcement mechanism for the state’s proposal to institute a work and community engagement requirement as a condition of maintaining eligibility. The state proposed to disenroll beneficiaries who fail to fulfill these requirements for any 3 months during a calendar year and lock them out from coverage until the start of the next calendar year. CMS approved this proposal and provided an explanation of the circumstances under which it would happen, underscoring that individuals have three opportunities (each of the months they fail to fulfill the requirements) to rectify the situation or seek an exemption before they would ultimately lose coverage. CMS indicated in the approval letter to Arkansas that it believed the health benefits of community engagement outweigh the health risks with respect to those who fail to respond. Indiana: In Indiana, some commenters were opposed to the state’s proposal to institute a work and community engagement requirement as a condition of maintaining eligibility. They argued, in part, that beneficiaries who are able to work are already doing so and the requirement is unnecessarily burdensome. CMS responded that employment is positively correlated with health outcomes and imposing these requirements serves the purposes of the Medicaid statute. (See fig. 7.) 
In an effort to improve transparency around its approvals, CMS began providing a high-level summary of and response to public comments in the demonstration approval letters in January 2018. Agency officials said this will be their standard practice going forward. Our review of the approval letters sent between January 1, 2018, and July 31, 2018, confirmed that CMS included a discussion of some of the issues that were raised in 10 of 11 letters. For example, the approval letters explained the decision to reduce the period of retroactive eligibility in Arkansas instead of eliminating it completely, as well as the decision to approve Indiana’s proposal to implement work and community engagement requirements. However, the approval letters do not respond to every concern raised. For example, a number of commenters were concerned with a request in Arkansas’s amendment application to no longer offer presumptive eligibility, but CMS did not respond to these concerns in the approval letter. CMS officials told us that the agency is striking a balance between transparency and processing applications in a timely manner.

The Extent to Which CMS Used Public Comments to Inform Monitoring and Evaluation Decisions Was Not Always Clear

Among the four demonstration approvals for which we conducted in-depth reviews and where public comments raised concerns—approvals for Arkansas, Indiana, Kentucky, and Massachusetts—we observed instances where CMS added specific monitoring requirements to the STCs that aligned with these concerns and other cases where the agency did not. For example: The STCs required Arkansas to submit a monitoring plan for its work and community engagement requirement in order to monitor eligibility operations and the impact on beneficiaries reapplying for coverage after being disenrolled for noncompliance.
In contrast, CMS did not require a monitoring plan for the Indiana and Kentucky demonstrations, which also included work and community engagement requirements where the public raised concerns about the effects on beneficiaries. This remains the case for Indiana; however, CMS’s new approval of Kentucky’s demonstration in November 2018 included additional monitoring requirements. Specifically, the November 2018 STCs required Kentucky to submit a monitoring protocol that includes measures for monitoring enrollment, disenrollment, and eligibility suspension, among other things. In other cases where public comments raised concerns about the impact of demonstrations on beneficiaries, including changes in eligibility requirements (e.g., retroactive eligibility), we did not observe specific monitoring requirements included in the STCs. Though CMS did not provide any specific examples, officials told us that they consider public input when making decisions about monitoring requirements. Officials also said they were developing monitoring metrics and tools that they plan to use consistently going forward for states implementing work and community engagement requirements. As of January 2019, officials said these materials were in draft form and under review. Regarding evaluations, the extent to which CMS considered concerns raised through public comments for the four demonstration approvals was also not always clear, including how input informed the evaluation requirements in the STCs. For example, commenters on the applications submitted by Indiana and Kentucky raised concerns about aspects of the work and community engagement requirements proposed by each state, such as the requirements for reporting work or other activities and the circumstances under which beneficiaries would lose coverage. In the STCs for Indiana, CMS did not include specific hypotheses that the state would be required to test related to its work and community engagement policies. 
Instead, CMS noted that the state’s goals should inform the evaluation, subject to CMS approval. For example, Indiana’s goals included determining whether implementing work and community engagement requirements will lead to sustainable employment and improved health outcomes among beneficiaries. In the STCs for Kentucky’s initial approval in January 2018, CMS included the same language as in Indiana—that the goals should inform the state’s evaluation. However, in the STCs approved for Kentucky in November 2018, CMS added some broad guidance for Kentucky’s draft evaluation design. Specifically, CMS included a variety of hypotheses that the state must evaluate, such as the effect of work requirements on enrollment and the impact of the demonstration on uncompensated care costs. When approving evaluation designs, the extent to which CMS considers areas of risk identified through public input is also unclear at this time. As of January 2019, evaluation designs for the Arkansas and Indiana demonstrations were under review at CMS and Kentucky had not yet submitted one. Regarding Arkansas’s evaluation design, CMS sent a letter to the state providing comments and feedback that seem to align with some of the concerns raised about the demonstration through public input. Specifically, the November 2018 letter from CMS raised concerns with the state’s “broadly defined” expected outcomes of the demonstration, which included culture of work and personal life stability. CMS recommended that the state revise the design to include a list of quantifiable outcomes and measures that capture the important features, such as increased employment (e.g., hours worked, wages) and improved health (e.g., health care utilization). For Massachusetts, the one demonstration with an approved evaluation design, the extent to which CMS considered public input during approval was unclear. 
For example, with regard to Massachusetts’s proposal to discontinue provisional eligibility for most adults, commenters raised concerns about the potential effects on beneficiaries’ timely access to coverage and care; however, the evaluation design did not include plans to examine the effects of the policy on beneficiaries. Though CMS did not provide specific examples of how public input had informed evaluation designs, CMS officials said requirements for evaluations have been evolving as they have gained experience in understanding the public’s concerns. Officials also said they were developing robust evaluation guidance that they plan to use consistently going forward for states implementing work and community engagement requirements. As of January 2019, officials said this guidance was in draft form and under review.

Conclusions

While CMS has long recognized the importance of public input in the demonstration approval process, the agency has developed more robust procedures for ensuring transparency since the beginning of 2012. Despite this progress, CMS’s approach to ensuring transparency when states propose major changes to their demonstrations has significant gaps. The lack of policies for ensuring transparency when states make major changes to pending applications and limited transparency requirements applied for amendments—which are being used by some states to make major changes to their demonstrations—puts CMS’s goal of transparency at risk. These gaps may leave the agency and the public without key information to fully understand the potential impact of the changes being proposed, including on beneficiaries and costs. These risks take on increased importance given that CMS is encouraging states to use the flexibility provided under demonstrations to test changes to their Medicaid programs that could have significant effects for beneficiaries and other stakeholders.
Recommendations for Executive Action

We are making the following two recommendations to CMS:

The Administrator of CMS should develop and communicate a policy that defines when changes to a pending section 1115 demonstration application are considered major and should prompt a new review of the application against the transparency requirements applicable to the pending application. (Recommendation 1)

The Administrator of CMS should develop and communicate a policy whereby applications for section 1115 demonstration amendments that may have significant impact are subject to transparency requirements comparable to those for new demonstrations and extensions. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for review and comment. HHS concurred with both recommendations. HHS’s comments are reproduced in appendix III. Regarding our first recommendation concerning when states submit major changes to pending demonstration applications, HHS stated that it will develop (1) standards for determining when such changes are so substantial that it would be appropriate for HHS to solicit additional public input, and (2) a process for informing states and the public about the additional comment period. These steps appear to formalize the approach CMS has already been taking as demonstrated by the agency’s response to the changes submitted by Indiana and Kentucky to their applications. Our recommendation, however, requires additional actions. In particular, we recommended that CMS develop and communicate a policy that includes standards for when changes are substantial enough to warrant a new review of the application against the transparency requirements. The transparency requirements, among other things, call for states to provide for public notice and input at the state level before they submit their applications. As such, holding an additional federal comment period would not be sufficient to meet our concerns.
Regarding our second recommendation—concerning transparency requirements for amendment applications that may have significant impacts—HHS said that it has implemented enhanced processes to improve transparency and will review its current processes and develop additional policies and processes, as needed, to enhance the transparency of such applications. However, the enhanced processes HHS referred to do not apply to amendments. Thus, HHS’s planned review of its policies alone would not be sufficient to meet our concerns. HHS’s efforts should also result in actions to develop and communicate a policy that ensures amendments with significant impacts meet transparency requirements comparable to those for other applications, namely new demonstrations and extensions. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the appropriate congressional committees, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7144 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. 
Appendix I: Information about Selected Medicaid Section 1115 Demonstration Approvals, January 2017–May 2018

Appendix II: Federal Spending for Medicaid Section 1115 Demonstrations, by State, Federal Fiscal Year 2016

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: GAO Contact and Staff Acknowledgements

In addition to the contact named above, Susan Barnidge (Assistant Director), Linda McIver (Analyst-in-Charge), Michael Moran, and Jessica L. Preston made key contributions to this report. Also contributing were Drew Long, Vikki Porter, and Emily Wilson.
Why GAO Did This Study

Section 1115 demonstrations are a significant component of Medicaid spending and affect the care of millions of beneficiaries. The Patient Protection and Affordable Care Act required the Department of Health and Human Services (HHS) to establish procedures to ensure transparency in approvals of new demonstrations and extensions to existing demonstrations. The act did not address amendments, which are subject to long-standing guidance on public input. GAO was asked to examine the transparency of demonstration approvals. Among other things, this report examines CMS's transparency policies and procedures for new demonstrations and extensions, and amendments to existing demonstrations. To review a variety of approval types across a large number of states, GAO examined all approvals of new demonstrations and extensions of and amendments to existing demonstrations granted from January 2017 through May 2018. GAO also conducted in-depth reviews of one approval in each of seven states, selected to include at least two approvals of each type. GAO reviewed demonstration documentation for these states, and interviewed state and federal Medicaid officials. GAO also assessed CMS's procedures against federal internal control standards.

What GAO Found

Medicaid demonstrations allow states flexibility to test new approaches for providing coverage and delivering Medicaid services. Since 2012, the Centers for Medicare & Medicaid Services (CMS), which oversees demonstrations, has developed procedures to improve the transparency of the approval process. For example, CMS reviews demonstration applications (including for new demonstrations, extensions, and amendments to existing demonstrations) for their compliance with applicable transparency requirements, including that states seek public input on their applications. However, GAO found weaknesses in CMS's policies for ensuring transparency.

Changes to pending applications for new demonstrations or extensions.
CMS lacks policies for ensuring transparency when states submit major changes to pending applications. For two of the four approvals of new demonstrations or extensions GAO reviewed in-depth, states submitted changes to their applications that could have significant effects on beneficiaries (such as disenrollment or other penalties) without first obtaining public comment on these changes at the state level.

Amendments to existing demonstrations. CMS's transparency requirements for amendments are limited. For example, CMS does not require amendment applications to include how the changes may affect beneficiary enrollment or report on concerns raised in state public comments. However, states have proposed major changes—such as work and community engagement requirements—through amendments, raising concerns that major changes to states' demonstrations are being approved without a complete understanding of their impact.

What GAO Recommends

CMS should develop policies for ensuring transparency when states (1) submit major changes to pending demonstration applications and (2) propose amendments to existing demonstrations. HHS concurred with these recommendations.
Background

Scheduling Outpatient Appointments in VA Medical Facilities

Enrollment is generally the first step veterans take to obtain health care services, within VA or through community care. VA’s Health Eligibility Center manages the process of accepting applications, verifying eligibility, and determining enrollment, in collaboration with VA medical centers. VA requires veterans’ enrollment applications be processed within 5 business days of receipt, including pending applications that require additional information from the applicant to process. Once enrolled, veterans can access VA health services by scheduling an appointment. VA’s scheduling policy establishes the procedures for scheduling medical appointments, as well as sets the requirements for staff directly or indirectly involved in the scheduling process (e.g., training). A scheduler at the VA medical facility is responsible for making appointments for new and established patients (i.e., patients who have visited the same VA medical center in the previous 24 months), which are then recorded in VA’s electronic scheduling system. VA scheduling policy requires patients who have requested an appointment and have not had one scheduled within 90 days to be placed on VA’s electronic wait list. VA determines wait times at each facility based on outpatient appointment information from its scheduling system.

VA’s Public Websites with Appointment Wait-Time Information

VA is required to publish information on appointment wait times at each VA medical facility for primary care, specialty care, and hospital care and medical services, which it does through two public websites. In November 2014, VA began posting monthly wait times for scheduling appointments at all VA medical facilities.
One public website provides links to spreadsheets containing data for each VA medical facility, such as the average wait times for primary, specialty, and mental health care appointments and the number of patients on the electronic wait list. In April 2017, VA created a second public “Access and Quality in VA Healthcare” website to post both patient access data and information on VA medical facilities’ performance on various quality metrics. This website aims to help veterans find wait times at a specific facility, allowing veterans and their family members to use the wait-time data to determine the best option for obtaining timely care.

VA’s Community Care Programs

In order to receive needed care in a timely manner, veterans may need to obtain care outside of VA medical facilities through one of VA’s community care programs. VA has purchased health care services from community providers through various community care programs since 1945. Veterans may be eligible for community care when they are faced with long wait times or travel long distances for appointments at VA medical facilities, or when a VA facility is unable to provide certain specialty care services. Since 2014, Congress has taken steps to expand the availability of community care for veterans. The Veterans Access, Choice, and Accountability Act of 2014 provided up to $10 billion in funding for veterans to obtain health care services from community providers. The law established a temporary program—called the Veterans Choice Program (Choice Program)—to offer veterans the option to receive hospital care and medical services from a community provider when a VA medical facility could not provide an appointment within 30 days, or when veterans resided more than 40 miles from the nearest VA facility or faced other travel burdens.
VA contracted with two third-party administrators (TPA) to establish networks of community providers, schedule veteran appointments with those providers, and pay those providers for services rendered through the Choice Program. In June 2018, the VA MISSION Act of 2018 was enacted to further address some of the challenges faced by VA in ensuring timely access to care. The Act required VA to implement within 1 year a permanent community care program—the Veterans Community Care Program (VCCP). The act identified criteria under which veterans enrolled in the VA health care system could qualify for care through the VCCP; for example, if VA does not offer the care or service needed by the veteran or VA cannot provide the veteran with care and services that comply with its designated access standards. The access standards include appointment wait times for a specific VA medical facility; for example, veterans may be eligible for care through the VCCP if VA cannot provide care within 20 days for primary and mental health care, and 28 days from the date of request for specialty care, unless veterans agree to a later date in consultation with their VA health care provider.

VA Has Taken Actions to Address Deficiencies in Appointment Scheduling and Timeliness Identified in Prior Work, but Additional Actions Are Needed

VA Has Taken Steps to Address Our Recommendation to Improve Wait-Time Measurement and Has Implemented Our Recommendation to Improve Implementation of Scheduling Policy

VA has taken a number of actions to address our recommendations regarding deficiencies we found in wait-time measurement and implementation of its scheduling policy. For wait-time measurement, these actions included changes to the wait-time measurement definitions, provision and documentation of scheduler training, and improved oversight through audits, all of which have been in a state of flux for the past 6 years.
On July 12, 2019, VA provided us additional updates on efforts to implement our related recommendations. This new information fully addresses one of our recommendations.

VA Wait-Time Measurement

In December 2012, we found that outpatient medical appointment wait times reported by VA were unreliable, and, therefore, VA was unable to identify areas that needed improvement or mitigate problems for veterans attempting to access care. VA typically has measured wait times as the time elapsed between the ‘start date’—a defined date that indicates the beginning of the measurement—and the ‘end date’, which is the date of the appointment. At the time of our 2012 report, VA measured wait times as the number of days elapsed from the start date identified as the desired date—the date on which the patient or health care provider wants the patient to be seen—to the date of the appointment. We found that the reliability of the reported wait-time measures was dependent on the consistency with which schedulers recorded the desired date in the scheduling system, as required by VA’s scheduling policy. However, VA’s scheduling policy and training documents for recording the desired date were unclear and did not ensure consistency. We observed that not all schedulers at VA medical centers that we visited recorded the desired date correctly. Therefore, we recommended that VA either clarify its scheduling policy to better define the desired date, or identify clearer wait-time measures that are not subject to interpretation and prone to scheduler error. VA concurred with the recommendation, which we have identified as among those recommendations that warrant priority attention. Actions VA has taken or is taking to address this recommendation include: changes to the start date and definitions for wait-time measurement, provision and documentation of scheduler training, and improved oversight through scheduler audits.
In addition, we are currently assessing new information VA provided in July 2019, which will include obtaining additional evidence and clarification from VA to see whether it has fully addressed our concerns.

VA’s Actions to Change Start Dates for Wait-Time Measurement

While the terminology for the start dates of the wait-time measurement has changed several times over the past 6 years, we believe that the current definitions of the start dates are substantively the same as those we reviewed—and found to be deficient—in our 2012 report. VA subsequently introduced new terms with similar definitions—from “desired date” to “preferred date”—without fundamentally addressing the deficiency. See table 1 for the changes to and definitions of the start dates for measuring outpatient appointment wait times and wait-time goals since June 2010. As table 1 shows, for new patients and established patients seeking appointments without a return-to-clinic date specified by their provider, VA changed the terminology of the start date to preferred date in its July 2016 scheduling policy from what it had established in its June 2010 policy. However, the definition of preferred date is substantively the same as the definition of desired date in the previous scheduling policy, the latter of which we found to be subject to interpretation and prone to scheduler error in our 2012 report. We continue to believe that the preferred date is also subject to interpretation and prone to scheduler error, which poses concerns for the reliability of wait times measured using the patient’s preferred date. In its updated July 2016 scheduling policy, VA also changed the terminology of the start date to the “clinically indicated date” for established patients whose provider has documented a clinically appropriate return-to-clinic date in the patient’s electronic health record.
The clinically indicated date is substantively the same as the definition of desired date for established patients in the previous scheduling directive. While VA has not clarified the definitions of start dates, VA has taken actions intended to improve the accurate recording of the clinically indicated date in three ways:

1. VA requires clinical leadership (such as the Associate Chief of Staff) at each VA medical facility to ensure that providers enter the clinically indicated date in the electronic health record for future appointments;

2. VA standardized the entry of the clinically indicated date in the electronic health record to improve the accuracy of the date, which was implemented across all VA medical facilities as of July 2018; and

3. VA created a technology enhancement to enable the automatic transfer of the clinically indicated date from the electronic health record to the scheduling system. As a result, the scheduler no longer has to retrieve the date from veterans’ electronic health records and manually enter it into the scheduling system.

VA reported that this enhancement was implemented at all but three VA medical facilities as of January 2019. In July 2019, VA reported to us that the error rate for the patient indicated date (either the clinically indicated date, or in the absence of that date, the patient’s preferred date) was 8 percent of about 667,000 appointments audited in the most recent biannual audit cycle, ending March 31, 2019. VA cites an almost 18 percent improvement in reducing the number of errors caused by manual entry of the clinically indicated date due to the use of the technology enhancements.
VA’s Actions to Provide and Document Scheduler Training

Although VA updated its scheduling policy in 2016, we believe the instructions, which form the basis for wait-time measurement, are still subject to interpretation and prone to scheduler error, making training and oversight vital to the consistent and accurate implementation of the policy. VA reported that 97 percent of all staff who scheduled an appointment within 30 days completed the required scheduling training as of July 2, 2019. VA stated that the department will closely monitor compliance with scheduler training completion for the remaining staff. Given the high turnover among schedulers, it is important that VA remain vigilant about scheduler training, ensuring all who need it receive it.

VA’s Actions to Improve Oversight through Scheduler Audits

VA has taken a number of actions to improve oversight of the scheduling process through biannual scheduling audits at VA medical centers and second level audits, as well as completion of the first system-wide internal audit of scheduling and wait-time data.

Biannual scheduler audits. VA’s July 2016 scheduling policy required biannual audits of the timeliness and appropriateness of schedulers’ actions and accuracy of entry of the clinically indicated date and preferred date, the start dates of wait-time measurement as identified by the revised scheduling policy. In June 2017, VA deployed a standardized scheduling audit process for staff at VA medical centers to use. As part of our recommendation follow-up in July 2019, VA reported 100 percent completion of the required biannual scheduling audits in fiscal year 2018. As noted above, VA reported to us that the error rate for the patient indicated date (either the clinically indicated date, or in the absence of that date, the patient’s preferred date) was 8 percent of about 667,000 appointments audited.
While VA asserts that errors in the clinically indicated date have decreased, an error rate of 8 percent still yields errors in more than 53,000 appointments audited. Given these errors, we remain concerned about the reliability of wait times measured using preferred date (one part of the patient indicated date), and have requested additional information from VA about these errors.

Second level scheduler audits. In November 2018, VA implemented a second-level scheduling audit (Audit the Auditors program), which is overseen by the VA integrated service networks tasked with oversight of VA medical facilities within their regions. Each medical center within a network region is paired with another medical center and they audit each other’s scheduling audit. Throughout the cycle, medical centers share their findings with each other and the network. The goal is to standardize scheduling audit practices across the network and to ensure reliability of the scheduler audit results. According to VA, the first cycle was completed April 30, 2019, by all VA medical centers.

First internal system-wide audit of wait-time data and scheduling. In its first internal audit, completed in August 2018, VA was unable to determine the accuracy and reliability of the scheduling and wait-time data, databases, and data flow from the electronic health record and scheduling system to the VA Access and Quality website because the auditors were not able to obtain the rules for calculating wait times. Given our continued concerns about VA’s ability to ensure the reliability of the wait-time data, we plan to obtain additional information from VA about its methodology and assessment of evidence underlying the audit findings.

Scheduling Policy

In December 2012, we also found inconsistent implementation of VA’s scheduling policy that impeded VA medical centers’ scheduling of timely medical appointments.
Specifically, we found that not all of the clinics across the medical centers we visited used the electronic wait list to track new patients that needed medical appointments as required by VA’s scheduling policy, putting these patients at risk of being lost for appointment scheduling. Furthermore, VA medical centers’ oversight of compliance with VA’s scheduling policy, such as ensuring the completion of required scheduler training, was inconsistent across facilities. Scheduler training was particularly important given the high volume of staff with access to the scheduling system—as of July 2, 2019, VA reported there were approximately 33,000 staff who had scheduled an appointment within the last 30 days. We also found that VA medical centers identified the outdated and inefficient scheduling system as one of the problems that can impede the timely scheduling of appointments and may impact their compliance with VA’s scheduling policy. We recommended VA ensure that VA medical centers consistently and accurately implement VA’s scheduling policy, including use of the electronic wait list, and ensure that all staff with access to the scheduling system complete the required training. VA concurred with this recommendation, which we also have identified as among those recommendations that warrant priority attention. VA’s actions to improve implementation of the scheduling policy, including updated information VA provided in July 2019, fully address this recommendation. VA issued an updated scheduling policy in July 2016 that provided clarification on scheduling roles and responsibilities for implementing the policy and business rules for scheduling appointments, such as using the electronic wait list, and required biannual scheduler audits. VA also ensured almost all schedulers received training on the updated scheduling policy and improved oversight through audits, as previously described.
In addition, VA plans to rapidly deploy a single nationwide scheduling system that is intended to simplify the operating environment for schedulers and may mitigate challenges identified in our 2012 report. The new scheduling system will be a resource-based system in which each provider’s schedule is visible on one screen, instead of requiring schedulers to toggle through multiple screens as the current system does. VA plans to roll out the new scheduling system starting in 2020, which is expected to be implemented in coordination with the planned modernization of the electronic health records system across VA facilities. According to VA, the scheduling system will be available for use in advance of the completion of the electronic health record implementation at some sites.

VA Has Taken Steps to Address Our Recommendations to Strengthen Enrollment Processes and Management of Initial Requests for Care That Affect Veterans’ Timely Appointments

In addition to the recommendations we made to improve VA’s wait-time data and implementation of its scheduling policy, we have also made recommendations to address other factors that affect the timeliness by which veterans obtain appointments. These recommendations have targeted VA’s enrollment processes and its management of veterans’ initial requests for care. While VA has taken some steps to address these recommendations, they have not yet been fully addressed. For example, we have found that VA’s wait-time measures do not yet capture the time it takes the agency to enroll veterans in VA health care benefits or manage a veteran’s initial request for care.

Enrollment Process

In September 2017, we found that VA did not provide its medical centers, which historically receive 90 percent of enrollment applications, with clear guidance on how to resolve pending applications, which led to delays in veterans’ enrollment. For example, we found instances in which pending applications remained unresolved for more than 3 months.
We concluded that these delays in resolving pending applications, along with previously documented delays due to errors in enrollment determinations, may result in veterans facing delays in obtaining health care services or being incorrectly denied benefits. We made several recommendations to address these deficiencies, two of which we determined to be priority recommendations: that VA clearly define roles and responsibilities for (1) resolving pending applications and (2) overseeing the enrollment process. VA has made progress in addressing these priority recommendations by beginning to update, but not yet finalizing, its policies, procedures, and guidance on enrollment processing. In 2017, VA's Health Eligibility Center began conducting secondary reviews of enrollment determinations. However, in fiscal year 2018, Health Eligibility Center staff found that 18 percent of rejected enrollment determinations and 8 percent of ineligible enrollment determinations that underwent secondary reviews were incorrect. These recommendations remain unimplemented as of July 2019. Initial Requests for Care Once veterans are enrolled, we have found that VA's management of their initial requests for care has led to delays; and although VA has clarified timeliness requirements, it has yet to fully capture the wait veterans experience in scheduling initial appointments. In a number of reports from 2015 to 2018, we found instances in which newly enrolled veterans were not contacted to schedule initial primary care appointments and did not complete initial primary care appointments and mental health evaluations according to VA timeliness requirements. These delays may be understated in VA data because VA's wait-time measures do not take into account the time it takes VA medical center staff to contact the veteran to determine a preferred date (the starting point for wait-time measurement) from the veteran's initial request or referral.
We found that the total amount of time it took for veterans to be seen by providers was often much longer when measured from the dates veterans initially requested to be contacted to schedule an appointment or were referred for an appointment by another provider than when using the veterans’ preferred dates as the starting point. See figure 1 for an example of how the two wait-time calculations differ for an initial primary care appointment. We made several recommendations to VA, including a priority recommendation to monitor the full amount of time newly enrolled veterans wait to be seen by a provider. VA has taken several steps to address the priority recommendation, including revising an internal report to help identify and document newly enrolled veterans and monitor their appointment request status. The report is intended to help VA and its medical centers oversee the enrollment and appointment process by tracking the total time from application to appointment. However, VA is still in the process of enhancing its electronic enrollment system to capture the application date for all newly enrolled veterans. Until the enhancements are implemented, VA may not consistently capture the start date for newly enrolled veterans, which, in turn, affects the reliability of its wait-time data. The priority recommendation remains unimplemented as of July 2019. VA Has Not Implemented Recommendations to Address Wait Times and Other Choice Program Issues That Could Affect VCCP Implementation VA has not implemented several of our recommendations related to the Choice Program that could impact veterans’ timely access to care under the VCCP. These recommendations address (1) establishing achievable community care wait-time goals and a scheduling process consistent with those goals, (2) collecting accurate and complete data to systematically monitor veteran community care wait times, and (3) other factors that could adversely affect veterans’ access to community care. 
VA has begun taking steps to address these recommendations as it implements the VCCP. VA Still Needs to Establish Achievable Wait-Time Goals and a Scheduling Process Consistent with Those Goals to Ensure Veterans' Timely Access to Care under the VCCP Our review of the Choice Program in June 2018 found that despite having a wait-time goal, VA developed a scheduling process for the Choice Program that was not consistent with achieving that goal. The Veterans Access, Choice, and Accountability Act of 2014 required VA to ensure the provision of care to eligible veterans within 30 days of the clinically indicated date or, if none existed, within 30 days of the veteran's preferred date. However, we found that those veterans who were referred to the Choice Program for routine care because services were not available at VA in a timely manner could potentially wait up to 70 calendar days for care. Under VA's scheduling processes, this potential wait time included VA medical centers having at least 18 calendar days to prepare veterans' Choice Program referrals to TPAs and another 52 calendar days for appointments to occur as scheduled by TPAs. Based on this finding, we recommended that VA establish an achievable wait-time goal for the VCCP that will permit VA to monitor whether veterans are receiving community care within time frames that are comparable to the amount of time they would otherwise wait to receive care at VA medical facilities. We also recommended that VA design an appointment scheduling process for the VCCP that sets forth time frames within which (1) veterans' referrals must be processed, (2) veterans' appointments must be scheduled, and (3) veterans' appointments must occur that are consistent with the wait-time goal VA has established for the program.
VA agreed with both recommendations, which remain unimplemented, and officials stated that they are in the process of finalizing metrics to capture wait-time performance and designing an appointment scheduling process. Without specifying wait-time goals that are achievable, and without designing appointment scheduling processes that are consistent with those goals, VA lacks assurance that veterans are receiving care from community providers in a timely manner. VA's Monitoring of Care under VCCP Could Still Be Compromised by Incomplete and Inaccurate Data In June 2018, we reported that VA could not systematically monitor wait times for veterans accessing care under the Choice Program due to incomplete and inaccurate data. Without complete and accurate data, VA was not able to determine whether the Choice Program was achieving its goals of (1) alleviating the wait times veterans experienced when seeking care at VA medical facilities, and (2) easing geographic burdens veterans may have faced when accessing care at VA medical facilities. We made three recommendations to address VA's incomplete and inaccurate data related to the Choice Program, and VA is taking steps to implement two of those recommendations. Incomplete Data We found that the data VA used to monitor the timeliness of Choice Program appointments captured only a portion of the total appointment scheduling process. Though VA had a 30-day wait-time goal to provide veterans with care under the Choice Program, VA's timeliness data did not capture (1) the time VA medical centers took to prepare veterans' referrals and send them to the TPAs, and (2) the time spent by TPAs in accepting the referrals and opting veterans into the Choice Program. For example, we found that it took VA medical center staff an average of 24 calendar days after the veteran's need for care was identified to contact the veteran, compile relevant clinical information, and send the veteran's referral to the TPAs.
For those same authorizations, it took the TPAs an average of 14 calendar days to accept referrals and reach veterans to opt them into the Choice Program. In 2016, VA also conducted its own manual review of appointment scheduling times and found that wait times could be longer than 30 days (see fig. 2). Specifically, out of a sample of about 5,000 Choice Program authorizations, VA analyzed (1) the timeliness with which VA medical centers sent referrals to the TPAs, and (2) veterans' overall wait times for Choice Program care. VA's analysis identified average wait times that exceeded 30 days when veterans were referred to the Choice Program due to a greater-than-30-day wait time for an appointment at a VA medical facility. For example, for overall wait times (i.e., the time from when veterans' need for care was identified until they attended initial Choice Program appointments), wait times ranged from 34 to 91 days across the 18 VA integrated service networks. The national average was 51 days. In September 2017, VA began implementing an interim solution to monitor overall wait times, but this solution relied on VA medical center staff consistently and accurately entering data on referrals, a process that is prone to error. In June 2018, we recommended that VA establish a mechanism to monitor overall wait times under the VCCP. VA agreed with this recommendation and stated that it is developing a monitoring mechanism that will be incorporated into a new system to be fully implemented across all VA medical facilities by fiscal year 2021. Inaccurate Data We also reported that the clinically indicated dates included on referrals that VA medical centers sent to the TPAs, which are used to measure the timeliness of care, may not have been accurate, further limiting VA's monitoring of veterans' access to care.
Our review of 196 Choice Program authorizations found that clinically indicated dates were sometimes changed by VA medical center staff before they were sent to the TPAs, which could mask veterans' true wait times. We found that VA medical center staff entered later clinically indicated dates on referrals for about 23 percent of the 196 authorizations reviewed. We made two recommendations to improve the accuracy of the Choice Program data. For example, we recommended that VA establish a mechanism under the VCCP that prevents clinically indicated dates from being modified. VA agreed with our recommendation and stated that a new system will interface with VA's existing referral package to allow a VA clinician to enter a clinically indicated date while restricting schedulers from making alterations to it. VA Has Not Addressed Other Factors That Could Adversely Affect Veterans' Access to Care under the VCCP In June 2018, we also reported that numerous factors adversely affected veterans' timely access to care through the Choice Program and could affect access under the VCCP. These factors included the following: (1) administrative burden caused by complexities of VA's referral and appointment scheduling processes; (2) poor communication between VA and its medical facilities; and (3) inadequacies in the networks of community providers established by the TPAs, including an insufficient number, mix, or geographic distribution of community providers. VA has taken steps to help address these factors; however, none have been fully addressed. For example, to help address administrative burden and improve the process of coordinating veterans' Choice Program care, VA established a secure e-mail system and a mechanism for TPAs and community providers to remotely access veterans' VA electronic health records. However, these mechanisms only facilitate a one-way transfer of necessary information.
They do not provide a means by which VA medical facilities or veterans can view the TPAs' step-by-step progress in scheduling appointments or electronically receive medical documentation associated with Choice Program appointments. We made five recommendations to VA to address the factors that adversely affected veterans' access to Choice Program care. VA agreed or agreed in principle with all five recommendations and has taken some steps in response to these recommendations. However, our recommendations remain unimplemented. As It Implements the VCCP, VA Has Taken Some Steps to Address Community Care Wait-Time Data and Monitoring Issues On June 6, 2019, VA began implementing the VCCP, which created a consolidated community care program. Under the VCCP, VA began determining veteran eligibility based on designated access standards, such as wait-time goals of 20 days for primary and mental health care and 28 days for specialty care and other criteria identified in the MISSION Act. According to VA officials, the implementation of the VCCP also included the use of the new Decision Support Tool—a system that combines eligibility and other information to help veterans, with assistance from VA staff, decide whether to seek care in the community. VA officials previously identified the Decision Support Tool along with another new system—known as the Health Share Referral Management system—as key efforts in addressing many of our recommendations related to VA's community care wait-time data and monitoring issues. VA expects the Health Share Referral Management system, which will manage community care referrals and authorizations as well as facilitate the exchange of health information between VA and community providers, to be fully implemented across all VA medical facilities in fiscal year 2021. We began work in May 2019 to review VA's implementation of the VCCP, including how it will address issues such as appointment scheduling.
Preliminary Observations on VA's Provision of Same-Day Services—Another Access Initiative In addition to the actions described above, VA has taken other steps to improve veterans' access to care by, for example, offering veterans access to routine care without an appointment. We have ongoing work related to same-day services provided in VA primary care and mental health clinics. In order to improve access, VA implemented the same-day service initiative in 2016, and by 2018 offered same-day services in over 1,000 facilities. As part of the initiative, VA medical facility staff are directed to address veterans' primary care and mental health needs that day through a variety of methods, including face-to-face visits, telehealth, prescription refills, or by scheduling a follow-up appointment. Our ongoing work indicates that the six VA medical facilities we visited were generally providing same-day services prior to the initiative; however, according to VA officials, ongoing staffing and space shortages created challenges implementing the initiative. Our ongoing work also indicates that VA does not have performance goals and measures to determine same-day services' impact on veterans' access to care. We plan to issue our report on VA's same-day services initiative in August 2019. In closing, we have identified various weaknesses in VA's wait-time measurement and scheduling processes over the years. These weaknesses have affected not only VA's internal delivery of outpatient care, but also that provided through community providers. As we have highlighted here, we have made a number of recommendations to address these weaknesses. VA has taken actions to address our recommendations, but additional work is needed for some. The implementation of enhanced technology, such as a new scheduling system, is crucial and will provide an important foundation for improvements. However, this is not a panacea for addressing all of the identified problems.
Moving forward, VA must also continuously ensure that it has clear and consistent policies and processes, adequate oversight, and effective training. Chairman Takano, Ranking Member Roe, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Debra A. Draper, Director, Health Care at (202) 512-7114 or DraperD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony were Sharon Silas (Acting Director), Ann Tynan (Assistant Director), Cathy Hamann, Aaron Holling, Akbar Husain, Kate Tussey, and E. Jane Whipple. Also contributing were Jacquelyn Hamilton and Vikki Porter.
Why GAO Did This Study The majority of veterans utilizing VA health care services receive care in VA-operated medical facilities, including 172 VA medical centers and more than 1,000 outpatient facilities. For nearly 20 years, GAO has reported on the challenges VA medical facilities have faced providing health care services in a timely manner. When veterans face wait times at VA medical facilities, they may be able to receive services from VA's community care programs, which VA estimates will be 19 percent of its $86.5 billion in health care obligations in fiscal year 2020. This testimony focuses on GAO's large body of work on veterans' access to care and the status of VA's efforts to address GAO's recommendations, including those from GAO's June 2018 report on VA's community care programs and from GAO's December 2012 report on VA's scheduling of timely medical appointments that VA has provided information on through July 2019. It also includes preliminary observations on related ongoing work. What GAO Found GAO has issued several reports recommending that the Department of Veterans Affairs (VA) take action to help ensure its facilities provide veterans with timely access to medical care. VA has taken a number of steps to address GAO's recommendations to improve wait-time measurement and its appointment scheduling policy. However, additional actions are needed to fully address most of GAO's recommendations. GAO found in 2012 that outpatient appointment wait times reported by VA were unreliable because VA did not ensure consistency in schedulers' definitions of the dates by which wait times were measured. GAO recommended that VA clarify these definitions. VA concurred and has taken a number of actions in response, including improved oversight through scheduling audits. 
However, VA's first internal audit in August 2018 was unable to evaluate the accuracy and reliability of its wait-time data due to the lack of business rules for calculating them, indicating that additional efforts are needed to address this issue. GAO also found in 2012 that not all facilities GAO visited used the electronic wait list to track new patients who needed medical appointments, as required by VA's scheduling policy. This put patients at risk of being lost for appointment scheduling. GAO recommended VA ensure consistent implementation of its policy, and that all schedulers complete required training. VA concurred, and, with the information VA provided in July 2019, GAO considers VA's actions, including updating its scheduling policy and completing scheduler training, sufficient to fully address the recommendation. While improvements to VA's scheduling policy and processes will help ensure veterans receive timely access to care, there are other factors that may also affect access that are not currently reflected in VA's wait-time data. For example, GAO found instances in which the time it took the agency to initially enroll veterans in VA health care benefits was more than 3 months. GAO has also made recommendations to improve appointment scheduling and ensure timely access to care from non-VA providers in VA's community care programs that remain unimplemented. GAO found in June 2018 that the data VA used to monitor the timeliness of the Veterans Choice Program's appointments captured only a portion of the total appointment scheduling process. Although VA had a wait-time goal of 30 days, VA's timeliness data did not capture certain processes, such as the time taken to prepare veterans' referrals and send them to a third-party administrator. GAO found that if these were accounted for, veterans could potentially wait up to 70 calendar days to see a community care provider.
VA officials stated that most recommendations will be addressed with new program tools it plans to implement. For example, VA is implementing a system for referral management and appointment scheduling expected to be available in all VA medical facilities by fiscal year 2021. While technology may be an important tool, VA will also need clear and consistent policies and processes, adequate oversight, and effective training to help avoid past challenges. What GAO Recommends GAO has made a number of recommendations to VA to address timely scheduling and reliable wait-time data for outpatient appointments and through community care. VA generally agreed with GAO's recommendations. As of July 2019, VA has taken actions to fully implement one recommendation discussed in this statement. GAO continues to believe that all of the recommendations are warranted.
Background Mission and Organization of the IC The Director of National Intelligence serves as head of the IC and acts as the principal adviser to the President and National Security Council on intelligence matters related to national security. The IC comprises 17 executive branch agencies and organizations, generally referred to as IC elements. These IC elements include two independent agencies, eight elements within the Department of Defense, and seven elements across five other executive departments. Table 1 provides a list of the 17 IC elements. History of the IC CAE Program In its first National Intelligence Strategy, issued in 2005, ODNI highlighted the importance of a diverse talent pool to address the complex challenges the IC faced. In its most recent strategy, released in 2019, ODNI reaffirmed and emphasized the IC's commitment to developing and retaining a diverse workforce to address enduring and emerging mission requirements. The 2019 National Intelligence Strategy defines diversity as a collection of individual attributes that include, but are not limited to, national origin, language, race, color, mental or physical disability, ethnicity, sex, age, religion, sexual orientation, gender identity or expression, socioeconomic status, veteran status, and family structure. The Intelligence Authorization Act for Fiscal Year 2004 directed the Director of Central Intelligence to develop a pilot project to test and evaluate alternative innovative methods to promote equality of employment opportunities in the IC for women, minorities, and individuals with diverse ethnic and cultural backgrounds, skills, language proficiency, and expertise. The first pilot was initiated at Trinity Washington University in Washington, D.C., with a 1-year contract totaling $250,000.
The college developed and designed curricular components to align with IC mission skill sets and competencies and competitively selected students to participate in the college's IC CAE Scholars Program. In the first year, the program sponsored nine students who were selected as IC CAE scholars. After the initial pilot year at Trinity Washington University, the pilot program was expanded to three additional colleges—Tennessee State University in Nashville, Tennessee; Florida International University in Miami, Florida; and Clark Atlanta University in Atlanta, Georgia. In 2005, ODNI, on behalf of the IC, established the IC CAE program. ODNI reported that by 2007, 65 scholars participated in the program from these four CAE colleges. By 2008, ODNI had expanded the pilot to six additional colleges. Overall, the 10 participating colleges increased the student population to 338 IC CAE scholars. During the 2008 to 2009 academic year, ODNI established a continuity strategy with the initial 10 IC CAE pilot colleges, and the program continued to expand its academic outreach to additional colleges. In 2009, a total of 17 colleges were participating in the program, and these colleges had arrangements with academic consortia that increased the total outreach to 31 colleges. During ODNI's management of the program from 2005 through 2011, ODNI established general goals and oversaw the program's implementation by defining and collecting performance measures on a range of IC CAE activities and working with a contractor to summarize this information in annual reports. We describe ODNI's management of the program in more detail in appendix I. The Intelligence Authorization Act for Fiscal Year 2010 codified the IC CAE program to authorize the Director of National Intelligence to carry out grant programs to enhance the recruitment and retention of an ethnically and culturally diverse IC workforce with capabilities critical to the national security interests of the United States.
In 2010, ODNI launched an Intelligence Community Efficiency Studies Initiative that included an examination of the size, structure, and functions of the ODNI. One recommendation was to consolidate and streamline the education and training programs in the IC by transferring the functions and responsibilities of the IC CAE program from ODNI to DIA. DIA began managing the program on October 1, 2011. The memorandum of understanding between ODNI and DIA implementing the transfer established that while DIA would manage the IC CAE program, ODNI would continue to provide periodic strategic guidance and regular budgetary oversight for the program. Figure 1 shows various IC CAE program milestones, such as grant announcements and program transition dates, among other details. Current IC CAE Program Transition According to ODNI and DIA officials, program management and oversight of the IC CAE program are currently transitioning from DIA back to ODNI, following a DIA roles and mission review in 2018. According to ODNI and DIA officials, the agencies are working to complete the transition in fiscal year 2020 to enable ODNI to assume responsibility for the program. According to ODNI officials, as of April 2019, the transition plans were still in progress and ODNI was still in the planning stage of the transition. For example, officials noted they were drafting an implementation plan for the transition as well as a transfer memorandum to document the transfer. According to DIA officials, during this process, ODNI and DIA officials were also holding weekly coordination meetings and sharing program documents, such as college reports collected by DIA, and program guidance. ODNI officials also stated that in February 2019, they hired a contractor to conduct a study of the program prior to the final transition date.
According to ODNI officials, the study, along with their interactions with DIA, will help ODNI determine how to manage the program, identify any challenges or successes of the program, and consolidate the data collected on the program to date. Officials expect the study to be completed by October 2019. IC CAE Senior Advisory Board The IC CAE Senior Advisory Board consists of representatives of the IC elements and key organizations that may include representatives from the National Intelligence University, a U.S. Combatant Command (rotating basis), and the Office of the Undersecretary of Defense for Intelligence. The board, which meets quarterly, was created to provide policy and guidance for the IC CAE program and ensure that participating IC elements are included in discussions of policy matters. As outlined in the board’s official charter and business rules, board members are responsible for attending board meetings, voting on issues before the board, evaluating colleges for grant funding, acting as points of contact for the program, and promoting the program as leverage to affect future IC missions. According to DIA officials, the board advises the IC CAE program manager on standards for the IC CAE program relating to college selection, strategies to foster collaboration, and other issues as needed. Programs at IC CAE Colleges The IC CAE program awards grants to colleges on a competitive basis. IC CAE grants help colleges establish new intelligence-related programs and support existing programs at selected colleges. The grants can be issued for up to 5 years. From fiscal years 2004 through 2018, a total of 29 colleges have received 46 IC CAE grants. Of these 29 colleges, 13 have formed a consortium with one or more colleges to enhance collaboration with resources from other colleges in the same geographic area. 
The IC considers colleges with active grants as active IC CAE colleges, and those colleges that sustain the program after grant funding ends are called legacy colleges. Figure 2 shows the location of IC CAE colleges and which colleges led an academic consortium. See appendix II for additional details on the years that grants were awarded, grant funding amounts, and a list of consortium colleges. Since 2011, DIA has issued grants for the IC CAE program through a process initiated by an announcement published online by the DIA grants officer. Grant announcements vary by year but generally include guidelines for colleges to follow in completing their grant proposals. For example, the 2014 grant announcement listed eight program components on which a college's proposal would be evaluated, including study abroad opportunities and an annual colloquium or speaker series on intelligence and national security issues, along with other requirements such as cost, program management, and sustainment plans. Following submission, a grants officer reviews colleges' grant proposals for technical and financial sufficiency. The IC CAE program office then reviews grant proposals for program sufficiency. From there, the IC CAE Senior Advisory Board's Source Selection Board reviews applications deemed sufficient and makes a recommendation on which should be funded and at what funding level. The DIA CAE program office then forwards the selected proposals to a grants officer, who notifies the college of the award. The grant announcements we reviewed sometimes added specific program components as an area of focus for a given year. For example, the 2019 grant announcement added a program component that required colleges submitting a proposal for a grant to offer courses or programs in three or more listed science, technology, engineering, and mathematics topics of interest to the IC. Examples of some other program components included in grant announcements since 2014 include the following: IC Curriculum.
A key objective of the program is to strengthen academic programs in intelligence or national security at minority-serving colleges and at colleges serving historically rural and under-resourced populations. Specifically, colleges shall explain how they plan to creatively expand, upgrade, enrich, or integrate undergraduate and graduate course offerings to better prepare students to perform work in intelligence or national security. Foreign Language. Colleges should demonstrate a capability to offer language study programs or courses in one or more specified languages of interest to the IC. Facilitate Student Participation in Academic Programs. IC CAE students shall be involved in the program and aware of its numerous benefits. Colleges are required to facilitate student participation in on-campus programs and activities such as workshops and seminars, and in off-campus activities such as national security or intelligence conferences, seminars, or workshops. Annual Colloquium. IC CAE colleges are required to hold an annual colloquium or speaker series on intelligence or national security issues. These events should invite rural and under-resourced regional colleges and universities, government speakers, and industry partners, with a primary goal of maximizing relationships and outreach. The colloquium should be at least 1 day in length, or a speaker series may include shorter presentations scheduled over weeks or months that equate in total hours to a daylong colloquium. Program Management and Sustainment Plans. IC CAE colleges are required to have both program management and sustainment plans. The program management plan must detail the responsibilities of personnel to attain explicitly stated, measurable, and achievable program objectives. The sustainment plan must detail what the college will do during the grant period to build sustainability of the IC CAE program at that institution after the funding expires.
The IC CAE program is especially interested in colleges with diverse populations of talent and in geographic diversity—specifically, Historically Black Colleges and Universities, Hispanic-Serving Institutions, Tribal Colleges and Universities, and Asian American and Pacific Islander-Serving Institutions. The program is also interested in majority-serving institutions with significant populations of minorities and women that possess credentials in disciplines and specializations that meet IC core mission requirements. Figure 3 shows the minority designation of the 29 colleges receiving grants, and figure 4 shows the minority designation of the 43 consortium colleges. See appendix II for a list of schools and their minority designations.

Additional IC CAE Programs

As part of the IC CAE program, DIA also administers other programs that provide intelligence-related learning experiences to IC CAE students and increase advanced capabilities in national defense. For example:

IC CAE Professional Development Summit. These annual summits allow the IC to interact with the principal investigators—the individuals responsible for the IC CAE program at their respective colleges—to provide them with relevant and up-to-date information to support the creation and teaching of IC-centric curricula. According to DIA, the summit is intended to foster collaboration between the IC and college representatives by providing DIA with a platform to meet the needs of the IC. According to DIA, IC CAE Senior Advisory Board members are an integral part of the summit and provide context and perspective from the agencies they represent.

National Security and Analysis Intelligence Summer Seminar. This 2-week seminar is designed to provide IC CAE students with knowledge about the intelligence career field in general, and analytic tradecraft in particular.
The seminar is intended to provide students from across the IC CAE colleges an opportunity to engage directly with intelligence professionals in both seminar learning and scenario-based simulation training, focusing on threats to the U.S. homeland by extremist terrorists. According to DIA officials, the seminar is only open to a limited number of IC CAE students from active and legacy colleges. For example, two sessions were held during 2017, and a total of 80 students were competitively selected by their respective colleges to attend. According to ODNI officials, the summer seminar also holds a career fair and provides mentoring opportunities for the participating students so that those interested in an IC career have an opportunity to interact with recruiters.

IC CAE Summer Internship. In the summer of 2017, the IC CAE program held its first IC CAE summer internship program. According to DIA officials, rather than establish a new IC CAE internship program, DIA leveraged the IC elements' existing internship programs and tracked IC CAE student participation in these programs. The IC CAE internship offers IC CAE students additional opportunities, such as an opening and closing ceremony for the internship, an IC career fair at the National Security and Analysis Intelligence Summer Seminar event, and IC mentors upon request. DIA identified a total of 141 IC interns from colleges that had an IC CAE program in 2017 and 2018. However, according to ODNI officials, not all of the IC interns identified participated in their school's IC CAE program. Internship opportunities among the IC elements vary. For example, according to FBI officials, their internship program is a primary pipeline for entry-level positions; in 2017, the FBI had 1,200 interns, 300 of whom were hired into entry-level positions. According to DIA data, the FBI identified 31 IC CAE scholars in its 2017 internship program and 21 scholars in 2018.
According to officials from the Department of State's Bureau of Intelligence and Research, their office has approximately 15 to 20 summer interns each year. According to Department of State officials, two of their interns have been IC CAE scholars since the internship program began in 2017.

DIA Has Implemented the IC CAE Program since 2011 by Issuing Grants to Colleges but Has Not Sufficiently Planned or Overseen the Program

While DIA has continued to implement the IC CAE program by issuing grants to colleges, it has not sufficiently planned or overseen the program since the transition from ODNI in 2011. Specifically, we found that DIA did not fully implement five of the six key practices of sound planning that we have identified in our prior work. While DIA continued the program's mission to increase the pool of diverse applicants for the IC, it lacked results-oriented goals, an overall strategy for the program, an evaluation of external factors, performance measures, and a plan to assess the program's performance in order to determine the appropriateness of the goals and the effectiveness of implemented strategies. Our assessment of the extent to which DIA incorporated these key practices of sound strategic management planning into the IC CAE program is reflected in table 2.

Mission Statement: DIA Maintained the Original Mission for the IC CAE Program

DIA annual reports for the IC CAE program and IC CAE grant announcements emphasize that the overall mission of the program is to increase the pool of diverse applicants for the IC. DIA's annual reports describe the program's mission as developing national security and intelligence education programs in order to increase the pool of culturally, geographically, and ethnically diverse, multidisciplinary job applicants who possess highly desired skills and competencies in areas of critical need to the IC.
This mission statement is also contained in the IC CAE grant funding opportunity announcements for 2014, 2017, 2018, and 2019, which also refer to broader IC human capital and diversity guidance. For example, one goal from the IC's Equal Employment Opportunity and Diversity Enterprise Strategy (2015-2020) is to recruit from groups with lower than expected participation rates and diverse candidates who will meet the IC's current and future mission requirements.

Results-Oriented Goals and Strategies: DIA Did Not Develop Results-Oriented Goals and Strategies for the IC CAE Program

Since 2011, DIA has not established results-oriented goals for the IC CAE program or an overall strategy that details the agency resources and processes required to achieve the program's mission. First, DIA has not documented specific policy, programmatic, or management goals for the IC CAE program. DIA developed a business plan for the program in 2011; however, this plan describes short-term goals for program management, outreach, and education, and most of these goals were intended to be complete by mid-2012. DIA's documentation does not indicate whether these goals were achieved or whether DIA continued to use the goals to guide the program after 2012. Current DIA internal guidance states that the IC CAE program office carries out the program's mission by providing grants to colleges to support the establishment of intelligence-centric curricula. However, this guidance does not provide results-oriented goals that are defined in measurable terms to guide the program. For example, DIA has not described the number of potential IC employees it expects to educate or make aware of IC careers by supporting intelligence programs at IC CAE colleges. This could include specific goals for targeting underrepresented populations within the IC, such as women and minorities. According to several IC element officials, IC elements use the percentage of women and minorities in the U.S.
civilian labor force as a target for their own diversity recruitment efforts. However, DIA has not developed any results-oriented goals that include specific targets or milestones for recruiting potential IC employees who have participated in the IC CAE program. In addition, DIA has not developed specific goals for the program that identify how to prioritize among the program requirements contained in IC CAE grant announcements. Specifically, it is not clear from IC CAE program documentation how gender and ethnic diversity are prioritized relative to other IC needs, such as the IC's long-standing need for technical and language skills. For example, IC CAE grant announcements state a general goal of increasing the pool of qualified women and racial and ethnic minorities available to the IC. At the same time, IC CAE grants have supported training in science, technology, engineering, and math, and in critical languages, but DIA has not established specific targets or milestones that would allow it to track the program's development of a diverse pool of applicants with the skills that the IC requires. Second, while DIA has developed some plans and continues to award grants for the IC CAE program, we found that DIA has not documented an overall strategy that details the agency resources and processes required to achieve the program's mission. DIA officials stated that they began developing a document outlining the general structure of the IC CAE program in 2016, but as of May 2019 the document had not been issued. DIA has also documented its standard operating procedures for monitoring colleges' implementation of grants, in part to ensure that all programmatic goals are met, but this is not a strategic document that describes processes for achieving the program's mission or goals. Further, DIA continues to award IC CAE grants to colleges based on program components or criteria that have changed over time, but these changes are not clearly linked to an overall program strategy.
For example, in 2014, DIA added the diversity of a college's student population as one of the criteria it used to select grant proposals. Colleges with a minority-serving designation or with a student population that is more than 75 percent ethnically and culturally diverse are given an excellent rating, while colleges with a student population that is less than 25 percent diverse are given a poor rating. In 2017, DIA added criteria requiring colleges to be part of a consortium in a manner that promotes diversity. These two diversity criteria have been given more weight than all other criteria since 2017, while previous announcements gave greater weight to the development of national security curricula. This change in approach may align with the program's overall mission to increase diversity in the IC, but DIA has not outlined an overall strategy that explains how such changes to the grant selection criteria would achieve a results-oriented goal, such as increasing the number of minority applicants to the IC. Two interconnected sound planning practices are to establish results-oriented goals and strategies to achieve those goals. These goals should be documented in measurable terms that are focused on results so that the agency can determine how it will achieve its mission. Once goals are established, strategies explain how those goals will be achieved. DIA officials stated that since assuming responsibility for the program in 2011, their focus for managing the IC CAE program has been tactical, centered on tasks such as awarding, executing, and monitoring grants to IC CAE colleges, rather than on strategic planning. In addition, DIA officials highlighted staff turnover as a challenge to managing the program, stating that DIA has had five IC CAE program directors in its 8 years of program management.
DIA officials stated that DIA has received little guidance about the goals of the IC CAE program from ODNI, and they instead rely on the IC CAE Senior Advisory Board to define goals and strategies that reflect the needs of IC elements. DIA officials stated that their only source of ODNI guidance for the IC CAE program was the 2011 memorandum of understanding between DIA and ODNI, which they characterized as high level and lacking specificity. DIA officials also said that they do not have the authority to create a strategic recruitment plan or set recruiting targets for the IC. The board, however, only meets quarterly to advise the IC CAE program office on standards and strategies, and board members occasionally review grant proposals. The IC CAE program managers are responsible for the program and, therefore, for defining and documenting its goals and strategies. As the IC CAE program transitions back to ODNI, ODNI will not be able to determine whether the program is meeting the diversity goals of the 2019 National Intelligence Strategy without results-oriented goals for the program and a documented strategy showing how those goals are to be achieved.

External Factors: DIA Has Identified Some External Factors Affecting the Program, but Has Not Developed a Process to Evaluate Them

DIA has identified external factors that could affect the IC CAE program's success, such as program branding and the ability of colleges to sustain the program after the grant period ends, but it has not developed a process to fully evaluate them.

IC CAE Program Branding

One external factor that could affect the IC CAE program's success is that not all students are aware of their participation in an IC CAE program. Based on our analysis of selected websites, which are often managed by an academic department or institute, colleges participating in the IC CAE program have not always featured their participation in the program prominently.
This can limit the visibility of the program, and of the IC's support for it, among both current and potential students. Since at least 2014, DIA has required colleges to demonstrate how they plan to promote their programs as IC CAE programs to ensure that students, faculty, and administrators are aware of them. Colleges are also required to feature up-to-date program information on their websites. However, in November 2018, a DIA official noted that some colleges continue to use the IC CAE brand without oversight of, or accountability for, providing intelligence-related courses. According to officials from selected IC CAE colleges and IC elements, students graduating from these programs are not always aware that they have participated in an IC CAE program. One college official stated that certificates or degrees do not necessarily indicate that the student graduated from an IC CAE program. Another college official stated that the college needs to directly inform students who apply to the program that they are participating in an IC CAE program. NGA and NSA officials stated that some employees at their agencies first became aware there was an IC CAE program at their college after being informed directly by their respective agency. While DIA requires that colleges develop marketing plans, it does not have a process to evaluate external factors such as the long-term effect of colleges' efforts to advertise their programs' connection with the IC CAE program. Without adequately advertising IC CAE programs, IC CAE colleges may not be able to recruit a strong pool of qualified students with the skills that the IC requires.

IC CAE Program Sustainment

Another external factor that could affect program success is the ability of colleges to sustain their IC CAE programs. The intent of the IC CAE program has been to enable colleges to continue the program beyond the end of the grant period and maintain a continuous talent pool for the IC.
However, DIA has not fully evaluated the challenges colleges may encounter if they are not able to secure continuous funding for the IC CAE program. When DIA awards grants, colleges receive a base year of funding that is renewable for up to 4 additional option years. It may take time for a college to develop intelligence-related courses and have students graduate from the IC CAE program. Colleges then need to apply for another grant in order to continue receiving federal funding after any additional option years expire. Since 2011, colleges have been required to demonstrate a plan to sustain their programs after the initial grant period ends. However, according to some IC CAE college officials, it is nonetheless difficult to continue the program and secure external funding once the grant is over. Some college officials have also said that the loss of grant funding can result in colleges discontinuing key aspects of the IC CAE program and can limit consortium college participation in activities. We have also observed that some colleges may have suspended their programs entirely. Specifically:

Colleges may be able to sustain some, but not all, components of their programs once grant funding ends. For example, one college has sustained an IC CAE program since 2005 even though it did not receive grant funding from 2008 through 2012. According to college officials, the loss of grant funding resulted in the college suspending professional development activities. The program received additional IC CAE grants in 2012 and 2017, and college officials stated they hold professional development workshops and one-on-one mentoring sessions between students and representatives from IC elements.

Without grant funding, consortium colleges may not have funding for student travel to IC CAE events. Consortium colleges face a specific challenge, since many IC CAE events are hosted by the lead IC CAE college.
We spoke with faculty at two consortium colleges who said that grant funding from the program helps reduce the cost of their students' travel to off-campus IC CAE events, such as annual colloquiums at the lead college that are attended by subject matter experts from IC elements. The distance students may need to travel can be especially challenging for colleges that are not located near the lead college, including, according to an IC CAE college official, one community college that is 400 miles away from the lead consortium college.

DIA identifies some programs as legacy colleges, but some of these colleges have not updated their IC CAE program websites. For example, we reviewed the IC CAE program websites for two colleges that had received a grant from DIA after 2012, but the colleges had stopped updating their websites in 2014 and 2016.

DIA has identified sustainment of the IC CAE program following the termination of grant funding at colleges as a significant challenge. At a recent meeting of the IC CAE program's Senior Advisory Board, the head of DIA's program office stated that the sustainment of IC CAE programs after federal funding ends has tended to be a systematic failure, especially for smaller colleges that may lack the resources of larger colleges, and that there have been no consequences for failure. While DIA acknowledges this problem, it does not have a process to systematically evaluate the issue or consider alternative approaches for colleges that may need additional support to maintain relevant curricula or professional development activities. For example, DIA has not evaluated whether some colleges' difficulty with sustaining their IC CAE programs may invalidate underlying assumptions about how the program is structured, including whether awarding grants to colleges to develop and maintain an intelligence-focused curriculum is the most effective means of establishing long-term relationships with those colleges and fostering a diverse talent pool for the IC.
A key practice of sound planning is to fully evaluate key factors external to the organization that are beyond its control. IC CAE colleges decide how to brand the program as well as how to allocate resources to sustain their IC CAE programs. These decisions could significantly affect the achievement of the IC CAE program's mission and goals. Both ODNI and DIA officials are aware of some external factors that could affect the success of the IC CAE program, such as branding and program sustainment. As of March 2019, ODNI officials stated that they were developing plans to address branding and sustainment as the program transitions to ODNI. DIA drafted a plan establishing post-grant requirements for colleges to maintain their IC CAE designation, though this draft plan does not address the sustainment challenges that may make it difficult for those colleges to follow the additional requirements. However, DIA internal guidance and the most recent Senior Advisory Board charter do not outline a process to identify and continuously evaluate external factors that could affect program performance. As the new program manager, ODNI may be unable to assess whether factors like program branding or sustainment might affect the IC CAE program's implementation and potential for success without a process in place to evaluate the effect of these and other potential external factors.

Metrics to Gauge Progress: DIA Has Not Defined, Collected, or Reported Comprehensive Performance Measures

DIA lacks comprehensive performance measures for the IC CAE program that would allow it to measure program success. Specifically, DIA has not (1) clearly and consistently defined the performance measures to be collected and reported, (2) collected or reported complete information on the program, or (3) determined whether the data collected may be incomplete or unreliable due to reporting challenges.
Performance Measures Are Not Clearly and Consistently Defined

DIA has not clearly and consistently defined the performance measures that colleges need to report in order to determine the IC CAE program's success. DIA required colleges to provide reports on significant accomplishments related to the objectives in their grant proposals. However, our review of final grant reports that colleges submitted to DIA from 2014 to 2018 revealed differences in how colleges reported measures. For example:

Two colleges reported that a total of 664 students received an IC CAE certificate, 99 completed an internship, and 128 received a conditional offer of employment between 2012 and 2017. However, the reports did not indicate whether these offers of employment were from IC elements or the number of students actually hired.

A legacy college reported that 49 students received a conditional offer of employment or were hired by an IC element, but it did not indicate the total number of program participants.

The final report from a legacy college that had an IC CAE program from 2013 to 2015 reported the total number of internships, but it did not report conditional offers of employment or total program participation.

In 2017, DIA revised the reporting template to require colleges to report progress on the goals and objectives in their approved grant proposals. However, the information colleges provide still varies, because DIA's performance measures are not stated clearly enough for colleges to report them consistently, and they are not scoped to evaluate specific program outcomes. For example, IC CAE programs are required to report their progress in developing critical language studies, but the updated template sets no minimum requirement on the type of information a college should report.
Comparing the 2018 reporting templates for two colleges, one college's narrative provided a high-level overview of the foreign language options at the college and reported that IC CAE scholars would be encouraged to participate in the language courses, whereas another college's narrative provided details on the number of students participating in the foreign language program and on the stipends provided to students who studied abroad. DIA's updated reporting template also required IC CAE colleges to report the aggregated totals of IC CAE participants, conditional offers of employment, internships, and hires into the IC. However, some colleges track different types of information for these metrics. For example, the way colleges count student participants in the IC CAE program varies. Some colleges only track students enrolled in the IC CAE certificate or degree program, while other colleges report much larger totals of participants, including students who are not enrolled in an IC CAE certificate or degree program but may participate in some IC CAE events. In addition, DIA's updated reporting template did not clearly describe the hiring data that colleges are required to report. For example, colleges are required to report the total number of conditional offers that IC CAE scholars receive, but the template does not specify whether this number covers all employers or just IC elements. Furthermore, it is not clear whether students who received a conditional job offer in one semester are being reported again as hires in the following semester. Without clearly defined performance measures, decision makers may not be able to clearly identify the accomplishments of the program among the various participating colleges.
DIA Has Not Collected or Reported Complete Information on the IC CAE Program's Performance

DIA is responsible for reporting on the IC CAE program's performance to ODNI, but DIA has not collected complete performance measures that cover the entire program and has not reported a complete summary of the performance measures it has collected. Since 2011, the DIA program office has collected some information from IC CAE colleges in order to monitor compliance with the colleges' grant proposals. This information was reported by IC CAE colleges in interim and final reports that include narrative descriptions of IC CAE program activities and descriptive data about program participants. However, DIA has not collected complete information that captures relevant performance measures for the IC CAE program. For example, according to DIA officials, between 2011 and 2016, colleges provided DIA a spreadsheet of information on IC CAE program activities, including descriptions of IC CAE courses and events, study abroad program participation, IC element interaction, and information about individual IC CAE scholars. However, the data provided by the colleges varied. For example, based on a review of spreadsheets that DIA provided from the fall of 2014, some colleges provided details on IC CAE-sponsored events, IC element interaction, and student employment, while other colleges did not provide any information in these areas. We also found that colleges summarized this information in their final grant reports. DIA's annual reports to ODNI from 2012 to 2017 reported little of the information that DIA collected over this time period. The annual reports described financial data and provided some description of select college activities, but they did not summarize information related to any of the program's core requirements, such as curriculum development, critical language study, or professional development.
For example, DIA has not collected or reported data on the number of IC CAE scholars who studied a critical language from 2012 to 2017. The reports also did not include the total number of IC internships, conditional job offers, or hires after 2012. Moreover, college officials stated they do not report on performance measures after the grant period ends, which may limit DIA's ability to provide comprehensive data for both active grant colleges and legacy colleges each year. DIA officials stated that legacy IC CAE colleges that have sustained the program but no longer receive a federal grant are not obligated to provide reports to DIA. For example, a college official from a legacy program that first received a grant in 2006 stated that the college no longer shares information with DIA because DIA had not requested it do so after the grant ended. The official noted that the college is no longer receiving support to facilitate IC recruitment of its students. According to DIA officials, DIA is currently developing a plan that would require colleges to report information in order to maintain their IC CAE designation after the grant period ends.

Data Collected May Be Incomplete or Unreliable Due to Reporting Challenges

DIA officials stated they have relied on colleges rather than the IC elements themselves to report data on IC CAE scholars. DIA informs colleges through its reporting template that data on internships, conditional job offers, and hires into the IC are definitive evidence of the success and sustainability of a college's IC CAE program. However, due to challenges with collecting these data, the information being provided to DIA by the colleges may be incomplete and unreliable. While DIA has not reported on the total number of IC CAE scholars hired from 2012 to 2017, it has collected some information from IC CAE colleges.
For example, three colleges from our sample reported that a total of 23 IC CAE scholars were hired by the IC between the beginning of the fall semester of 2017 and the end of the fall semester in 2018. However, according to officials at these colleges, it is difficult to provide complete data on students' employment, as the colleges no longer have direct contact with students after they graduate and some IC elements discourage applicants from discussing their employment offers with others. As a result, the information the colleges report to DIA may be incomplete because they are not able to track all of the students who have graduated from the IC CAE program. ODNI also reported similar challenges when it managed the program from 2005 through 2011. ODNI reported that a total of 61 IC CAE scholars were hired into the IC between 2005 and 2011 based on IC CAE college data, but noted that the hiring data from IC elements were higher than the totals reported by colleges. Further, IC elements have noted that there are security risks associated with tracking the number of IC CAE scholars who receive a conditional offer of employment or have been hired into the IC. At the February 2019 IC CAE professional development summit, for example, Senior Advisory Board members from the CIA and the FBI advised IC CAE colleges that storing or sharing information about potential IC applicants on unsecured college systems is a security risk. Some IC element officials have suggested that the best way to track applicants would be to obtain a list of IC CAE scholars from the colleges and match the names against IC element applicants. However, according to officials, the IC elements would need an individual's full legal name and college, and some IC CAE college officials raised privacy concerns about sharing student information. An IC CAE college official stated that, even during the grant period, the college provided DIA only aggregated totals of student data because of privacy concerns.
DIA and ODNI have collected some data on the number of applicants from IC CAE colleges and new hires from the IC elements, but they have only recently done so in a systematic manner. Officials from DIA's IC CAE program office said they cannot force IC elements to report employment information and that the burden is on the IC elements to track and report those data. According to ODNI officials, in response to a provision in the Intelligence Authorization Act for Fiscal Year 2017, ODNI sent IC elements a request for hiring and demographic information that included questions about the number of IC CAE graduates hired by the IC. As of April 2019, officials stated that they had collected hiring and demographic information from six of the largest IC elements, including data about the number of IC CAE graduates hired by the IC. The officials said they expect this to be a large enough sample to report in June 2019. However, according to ODNI officials, ODNI has not yet determined how it will define performance measures for the IC CAE program or how it will continue to collect and report those measures. A key practice of sound planning is to develop a set of performance measures that will be applied to gauge progress toward attainment of the plan's goals. We have also established key attributes of successful performance measures: among other things, they cover core program activities, are clearly defined and consistent, and can be reliably produced. Furthermore, Standards for Internal Control in the Federal Government state that management should use relevant data from reliable sources; process those data into high-quality information that is complete, accurate, and valid; and communicate high-quality information to all levels of the department.
Comprehensive performance measures would allow DIA to gauge the success of the IC CAE program in developing a pool of diverse talent with skills needed in the IC, but DIA has not defined performance measures in program guidance and documentation. In its 2012 annual report, DIA stated that it intended to redesign ODNI’s data collection tool in order to simplify reporting. However, DIA did not report data collected with this tool and stopped collecting these data altogether in 2016 after informing IC CAE colleges that the collection effort required a lengthy approval process from the Office of Management and Budget. DIA officials continued to require colleges to report performance measures after 2016 through a reporting template. In April 2019, DIA officials stated that they intended to make improvements to this template given that the way colleges have tracked student participation has varied. However, DIA did not clearly and consistently define performance measures for all aspects of the program, process them via a data system or spreadsheet, or report them to ODNI. As the new IC CAE program manager, ODNI will not be able to gauge the success of the IC CAE program in achieving its mission without defining, collecting, and reporting on comprehensive performance measures.
Program Evaluation: DIA Has Not Comprehensively Assessed the Program’s Performance
Since 2012, DIA has not conducted a comprehensive assessment of the IC CAE program. According to a 2013 amendment to the memorandum of understanding with ODNI, DIA was responsible for providing ODNI with an annual review of the program’s performance, including possible outcomes such as specific benefits to the IC. ODNI was responsible for evaluating this information to ensure the appropriate and efficient expenditure of IC resources and performance improvement.
However, DIA’s annual reports to ODNI from 2012 to 2017 did not comprehensively assess the program’s performance or the extent to which the program had achieved its mission. These reports provided only a few details about IC CAE program activities and summarized grant expenditures. For example, the 2016 annual report for the IC CAE program provided information on the number of grants awarded, a list of IC CAE colleges participating in the program, funding and execution data, and a sample of IC CAE program events from three colleges. However, the report did not provide complete details on the status of the program at each IC CAE college, such as a summary of the performance metrics DIA had collected from all of the colleges with an active grant. DIA officials said that they only included the information in annual reports that ODNI requested in the memorandum of understanding and lacked resources to provide a comprehensive assessment. However, the memorandum of understanding requires DIA to provide an annual review of the IC CAE program’s performance, to possibly include outcomes such as the number of students who completed IC CAE coursework and specific benefits to the IC. As ODNI officials work with DIA to transition the IC CAE program back to ODNI, ODNI officials began working with the MITRE Corporation in February 2019 to evaluate the IC CAE program. ODNI officials said they will rely on MITRE’s findings and their own interactions with IC CAE colleges to determine how to manage the program. Officials stated they expect the evaluation to be complete by October 2019. However, ODNI has not yet developed a plan to conduct continuous and comprehensive assessments of the IC CAE program. A key practice of sound strategic planning is the use of assessments, through objective measurement and systematic analysis.
For example, an evaluation plan can assist an agency in determining the appropriateness of a program’s goals, the effectiveness of implemented strategies, and the potential need for corrective action. The memorandum of understanding between DIA and ODNI, signed in 2011 and amended in 2013, designated performance reporting as a DIA responsibility, but DIA did not identify performance assessment as a responsibility in program guidance. The IC CAE program office’s standard operating procedures provide that the grant officer’s representative monitors an IC CAE college’s compliance with its grant assistance agreement and collects performance and financial data reports. However, there is no mention of a systematic, outcomes-based assessment of these reports or the program as a whole. Without such assessments, the IC will not be able to determine whether the IC CAE program is effectively increasing the pool of diverse applicants. Congress will also be unable to determine the return on investment in this long-standing program.
Selected IC Elements Participate in the IC CAE Program to Varying Degrees, but DIA Has Not Assessed Program Participation and Roles Are Not Clearly Defined
Selected IC Elements Participate in the IC CAE Program to Varying Degrees, but DIA Has Not Assessed IC Element Participation in the Program
IC elements participate in the IC CAE program in a variety of ways, including by attending IC CAE college workshops and recruitment events and participating in the annual IC CAE program meeting, among other events. Table 3 shows the varying levels of participation in the IC CAE program among the eight selected IC elements, as reported by IC element officials. IC elements’ participation in the IC CAE program varies according to the specific organizational needs of each IC element. Some IC elements do not participate actively in the program because they do not directly hire employees into their intelligence office or because they conduct only limited hiring.
For example, according to officials from the Department of Energy’s Office of Intelligence and Counterintelligence, the office is small and hiring is therefore limited. Further, officials stated the office often hires specialized personnel with advanced degrees and would not hire IC CAE scholars from undergraduate programs. Similarly, State Department officials from the Bureau of Intelligence and Research stated that they do not participate in events since they do not have direct hiring authority. Further, these officials stated that the State Department’s participation in IC CAE events is also constrained by limited personnel and financial resources. Other IC elements, such as the CIA and the NSA, have developed separate relationships with colleges and programs to address their specific hiring needs.
CIA. According to CIA officials, the CIA has reduced its involvement with the IC CAE program to better align it with the agency’s needs. In 2009, CIA selected senior officers to serve as advisors to 16 IC CAE colleges. The CIA advisors were directed to make a minimum of two visits per year and conducted a wide range of activities, including presenting at college events, counseling IC CAE scholars, and discussing CIA and IC career opportunities. However, about a third of the advisors were pulled back because, according to CIA officials, the IC CAE colleges were not meeting hiring expectations. Since 2014, CIA has focused its efforts on only six of the IC CAE colleges based on the return on investment from these colleges and alignment with CIA hiring needs. In addition, according to CIA officials, CIA has designated five universities as signature colleges to recruit skilled applicants from a range of cultures and backgrounds. According to CIA officials, the signature college program targets large, diverse colleges where the CIA has received a significant number of applications.
Its criteria for selection of signature colleges include high diversity, the size of the college, and potential for developing a deep relationship. Two of the five CIA signature colleges are also in the IC CAE program and are currently receiving or have received grant funding.
NSA. According to NSA officials, NSA has been involved in the IC CAE program since its inception, and its involvement includes participating in a variety of events such as colloquia, summer seminars, and recruitment events. In addition, NSA has sponsored two types of Centers of Academic Excellence, one for cyber defense and one for cyber operations. The goal of these programs is to develop technical skills by promoting higher education and research in cyber defense and producing professionals with cyber defense expertise. The programs also aim to broaden the pool of skilled workers capable of supporting a cyber-secure nation. The programs award a designation as a Center of Academic Excellence in Cyber Defense or Cyber Operations to U.S. universities based on established criteria; no funding is provided to the universities. According to NSA officials, these programs are independent of the IC CAE program and have different goals. Officials stated NSA’s CAE programs are focused specifically on increasing the pipeline of cyber talent. Further, some IC elements incorporate the IC CAE program into their recruitment strategies, but it is not the only aspect of the elements’ approach to recruiting. For example, according to NGA’s Campus Recruitment Strategy, the agency targets high-quality colleges that provide access to diverse applicants in high-quality, mission-aligned degree programs across a broader geographic reach. The strategy has 31 designated colleges that were selected based on a variety of criteria, including demographic diversity and academic programs that align with the agency’s mission areas.
According to NGA officials, they continue to recruit from at least seven IC CAE colleges; however, being an IC CAE college was not part of the primary selection criteria for colleges in NGA’s campus recruitment strategy. As program manager, DIA has relied on the IC CAE Senior Advisory Board and its charter as a means to engage IC elements in the program. However, not all IC elements participate on the Senior Advisory Board or in the IC CAE program. For example, only 9 of the 17 elements attended the November 2017 board meeting, and a quorum was not established. Without a quorum, votes held during a meeting are not valid and actions cannot be approved. Moreover, during board meetings, members have raised concerns about limited attendance, noting that only about half of the members regularly attend. According to some IC element officials, they do not attend IC CAE program events, including the Senior Advisory Board meetings, because the program does not meet their IC element’s organizational needs. For example, as discussed above, some IC elements have developed separate relationships with colleges and programs outside the IC CAE program to address their specific hiring needs. As a result, some IC element officials have stated they have intentionally reduced their recruitment at some IC CAE colleges. Since not all IC elements participate in the IC CAE program or attend the board meetings, DIA has had to conduct other outreach to engage IC elements. According to DIA officials, since 2017 the IC CAE program office has conducted additional ad hoc outreach to engage with IC elements. For example, DIA officials have stated the IC CAE program office has used ODNI forums, such as the IC Recruitment Council and the IC Chief Human Capital Office Council, to engage with IC elements on the IC CAE program.
However, DIA officials also stated that not all IC elements attend these ODNI council meetings because different offices within the IC elements are responsible for attending them. Some IC elements are organized differently with regard to which office participates in the IC Recruitment Council, so the IC element representatives to the IC CAE Senior Advisory Board can differ from those who attend the IC Recruitment Council. While these ad hoc outreach efforts are likely a positive step toward improving coordination, there remains a lack of engagement by all IC elements. Standards for Internal Control in the Federal Government state that management should establish and operate monitoring activities, to include a determination of when to revise the program baseline to address program needs. Further, the standards state that management should evaluate and document the results of ongoing monitoring and separate evaluations to identify issues. As program manager, DIA has not established a process for monitoring and assessing IC elements’ participation in the IC CAE program, and the board’s charter does not describe such a process. As a result, DIA does not fully understand the reasons for the lack of engagement on the part of IC elements. IC elements that do not attend board meetings are not engaged in the discussions and decisions being made about the program. Similarly, IC elements that do not participate actively in the program have limited contact and interaction with IC CAE colleges, which has hampered the effectiveness of the IC CAE program. Without a process for monitoring and assessing IC elements’ participation in the IC CAE program, ODNI will not be able to tailor the program to meet the needs of the IC and address the overall program goal of creating a diverse pool of applicants for the IC.
Assessing and addressing IC elements’ reasons for not participating in the program would increase ODNI’s understanding of the factors that inhibit participation and inform an approach to mitigating these factors and achieving program goals.
IC Elements’ Roles in the IC CAE Program Are Not Clearly Defined
The IC CAE program is a collaborative effort that allows IC elements to participate at college events, such as colloquia, speaker series, and campus recruitment events. The IC CAE Senior Advisory Board was created to provide policy and guidance for the IC CAE program and ensure that participating IC elements are included in decisions related to policy matters. The board’s charter states that Senior Advisory Board members are responsible for attending board meetings, voting on issues before the board, acting as points of contact for the program, and promoting the program. However, the charter does not define the expected or required level of participation of IC elements at IC CAE colleges. The IC CAE program manager, DIA, has communicated the schedule of IC CAE college events during Senior Advisory Board meetings and has also asked IC elements to participate in various events. Through the IC CAE grant process, IC CAE colleges are required to host a variety of events to educate students about the IC. Based on the IC CAE grant announcements, these events are predicated on IC element participation. Specifically, recruitment fairs at colleges are facilitated by IC elements, and IC element officials are speakers at colloquia events, with a primary goal of maximizing relationships and outreach. However, some colleges have experienced challenges in engaging IC elements to attend these events. For example, an official from a legacy IC CAE college noted that it has been difficult to get IC elements to attend college events or recruit from the college.
The official stated that IC element participation has been ad hoc and based on personal relationships with the IC elements rather than assistance from the IC CAE program office. For example, the official noted that the college was able to attract only 8 IC elements to a recent recruiting event, compared with the 20 representatives across 12 IC elements who attended such events in the past. An official from an active IC CAE college also noted that some IC elements are not well informed about the IC CAE program. For example, the official noted that the college would like more IC elements to attend IC CAE college events. However, the official stated that the responsibility of developing relationships with IC elements has been placed on the college. According to the official, the IC elements should be more aware of which colleges have IC CAE programs, and these colleges should be the first stop for IC element recruitment. The official also stated that IC CAE colleges would like the IC elements to drive the relationships with colleges. Our leading collaboration practices include (1) having participating agencies clarify roles and responsibilities and (2) ensuring that participating agencies document how they are collaborating in a written agreement and develop ways to continuously update and monitor these agreements. Roles and responsibilities can be defined through laws, policies, memorandums of understanding, or other requirements. The IC has defined the mission for the IC CAE program, but the current program manager, DIA, has not clarified IC element roles and responsibilities for program participation, and the Senior Advisory Board charter does not clarify what is expected of the IC elements regarding participation at IC CAE events. According to DIA officials currently managing the program, the Senior Advisory Board charter is the key to getting IC element participation in the program and to overall program success.
An update to the Senior Advisory Board charter could include all relevant participants and define roles and responsibilities. Without clearly defined roles and responsibilities, the IC elements are not taking full advantage of what the IC CAE program has to offer, including participation in events and college engagement. Thus, the IC CAE colleges will not be able to fully execute their IC CAE programs, and the program may not be able to meet its goal of creating a pool of diverse applicants for the IC.
Conclusions
In 2005, ODNI established the IC CAE program with a goal of creating an increased pool of culturally and ethnically diverse, multi-disciplinary job applicants for the IC. However, the current program manager, DIA, has not sufficiently planned and overseen the program, and the IC is unable to determine whether the program has been successful in meeting that goal. Specifically, DIA has not developed results-oriented goals or documented an overall strategy for the program, evaluated external factors that could significantly affect the program’s success, defined and collected comprehensive metrics, or conducted an assessment of the program’s performance. As ODNI takes over the program, it needs to apply these sound planning practices in order to determine whether the program is being implemented successfully and to help ensure the IC has a trusted, diverse workforce with the right expertise. Further, without sufficient planning and oversight, decision makers will also be unable to determine the return on investment in this long-standing program. In addition, ODNI needs to improve IC element participation in the program. The IC CAE program is a collaborative effort that encourages participation among all IC elements.
However, DIA has not established a process to monitor and assess IC element participation in the program or clearly defined IC elements’ roles and responsibilities for the IC CAE program. A process for monitoring and assessing IC element participation and addressing IC elements’ reasons for not participating in the program will increase understanding of the factors that inhibit participation and inform ODNI’s approach to mitigating these factors and achieving its goal for the program. Further, without clearly defined roles for IC element participation in the program, IC CAE colleges may not be executing their IC CAE programs as effectively as possible, and the program overall may not be able to meet its goals.
Recommendations for Executive Action
We are making the following seven recommendations to the Director of National Intelligence as the IC CAE program transitions to ODNI:
The Director of National Intelligence should establish and document results-oriented goals that include specific targets or milestones for the IC CAE program. (Recommendation 1)
The Director of National Intelligence should establish and document strategies to achieve the results-oriented goals that are established for the IC CAE program. (Recommendation 2)
The Director of National Intelligence should develop and document a process to identify and continuously evaluate external factors that could affect the program’s ability to achieve identified goals. This should include, but not be limited to, a consideration of program branding and post-grant sustainment. (Recommendation 3)
The Director of National Intelligence should define and document comprehensive performance measures for the IC CAE program, collect and evaluate the completeness and reliability of information it receives from grant recipients and IC elements, and report this information on a regular basis.
(Recommendation 4)
The Director of National Intelligence should establish a requirement for and develop a plan to periodically evaluate the IC CAE program’s performance through objective measurement and systematic analysis. (Recommendation 5)
The Director of National Intelligence should develop a process for assessing why some IC elements are not participating in the IC CAE program and address these reasons in order to ensure the program is structured to meet the needs of IC elements. (Recommendation 6)
The Director of National Intelligence should clearly define IC elements’ roles and responsibilities for participation in the IC CAE program to better facilitate interagency collaboration in support of the program. (Recommendation 7)
Agency Comments and Our Evaluation
We provided a draft of this report to ODNI for review and comment. In written comments, ODNI concurred with all seven of our recommendations but did not identify the steps it plans to take to address the recommendations as the IC CAE program transitions to ODNI. ODNI’s comments are reprinted in their entirety in appendix III. ODNI also provided technical comments, which we incorporated as appropriate. We also provided a draft of this report to the CIA, Department of Defense, DIA, FBI, NGA, NRO, NSA, the Department of State’s Bureau of Intelligence and Research, and the Department of Energy’s Office of Intelligence and Counterintelligence for review and comment. These agencies concurred without providing comments on the draft report. NGA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretaries of Defense, Energy, and State; the Directors of National Intelligence, DIA, CIA, NGA, NRO, and NSA; and the Attorney General. In addition, this report will be available at no charge on our website at http://www.gao.gov.
If you or your staff have any questions concerning this report, please contact Brian M. Mazanec at (202) 512-5130 or mazanecb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: History of the Intelligence Community Centers for Academic Excellence Program from 2005 to 2011
The Office of the Director of National Intelligence (ODNI) was the Intelligence Community (IC) Centers for Academic Excellence (CAE) program manager from 2005 through 2011. Internal documents and grant announcements from that period state that the program’s mission was to increase the pool of eligible applicants in core skill areas, specifically targeting women, racial and ethnic minorities, and individuals with varied cultural backgrounds, regional expertise, and language proficiency. ODNI outlined four goals in its 2008 guidance for the program: developing relationships with colleges, providing resources and grants to competitively selected colleges, providing technical assistance in the design and implementation of colleges’ IC CAE programs, and documenting results to improve the efficacy of the IC CAE program. Each of these goals included supporting objectives. For example, the goal of providing support, resources, and grants to competitively selected colleges included four supporting objectives, such as instituting long-term practices to increase relationships with minority-serving institutions and providing access to IC internships, co-ops, and graduate fellowships. These goals and objectives were aligned with the program’s overall mission, but they were not defined in measurable terms that would allow future assessments of whether they were being achieved.
For example, ODNI did not establish targets for the goals or supporting objectives listed above that would have allowed it to determine how successful it had been at supporting long-term programs at minority-serving institutions or providing access to IC employment opportunities. In addition, ODNI defined a strategy to support its program goals, which included the following four elements: outreach to high schools; operations at colleges, including curriculum development; infrastructure at the colleges to support these operations, such as faculty and administrators; and relationships between IC CAE programs and IC elements. These elements of ODNI’s strategy described specific operational requirements for the program. For example, IC CAE grant announcements in 2006, 2009, and 2011 supported a wide range of academic activities that prioritized the development of curricula in national security studies, science and technology programs, study abroad programs, courses in critical languages, and pre-collegiate outreach through activities like summer camps to raise awareness of and interest in IC careers. ODNI also defined assessment and evaluation as an overarching part of the program’s strategy, as shown in figure 5. ODNI worked with a contractor to conduct annual performance evaluations through 2012. The contractor developed an evaluation methodology, reviewed colleges’ interim reports, collected and verified performance data, and developed findings and recommendations. For example, in each of the annual reports from 2007 to 2010, the contractor recommended that IC CAE colleges broaden their critical language offerings and increase the number of IC CAE scholars enrolled in foreign language courses. ODNI defined performance measures and reported data on activities, including the number of IC CAE courses and events, demographic information, and employment outcomes.
Specifically, IC CAE colleges were required to report these data quarterly, and the contractor compiled the data annually into its program reviews. Table 4 shows selected performance measures outlined in ODNI’s final report, which summarized information collected from 2004 through 2011.
Appendix II: List of Intelligence Community Centers for Academic Excellence Grants to Colleges and Minority Designation
Table 5 and table 6 show the 46 grants managed by the Office of the Director of National Intelligence (ODNI) and the Defense Intelligence Agency (DIA). The total amount of grant funding projected to be obligated from fiscal year 2005 through fiscal year 2021 is $69,053,618, not including a $250,000 contract awarded in September 2004 to initiate a pilot Intelligence Community (IC) Centers for Academic Excellence (CAE) program at Trinity Washington University. Tables 7 and 8 list the IC CAE colleges by designation of eligibility for Department of Education funding as a minority-serving institution under various statutory grant programs, including programs authorized by the Higher Education Act of 1965, as amended. Eligibility for grant funding under these statutory programs, as determined by the Department of Education in 2018, does not designate or certify any college as a particular type of institution, for example, as a Hispanic Serving Institution. The colleges in tables 5 and 6 are listed in the order in which they received a grant by fiscal year, and some IC CAE colleges received multiple grants. Grants fund a base year and up to 4 additional option years. The consortium colleges below are listed alongside the IC CAE college that received a grant. ODNI and DIA awarded IC CAE grants to colleges following announcements for proposals in fiscal years 2006, 2009, 2011, 2014, 2017, 2018, and 2019.
Appendix III: Comments from the Office of the Director of National Intelligence
Appendix IV: GAO Contacts and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Kristy Williams, Assistant Director; Jason Bair; Tracy Barnes; John Bumgarner; Meeta Engle; Gina Hoover; Amie Lesser; Benjamin Licht; Ned Malone; Parke Nicholson; Alice Paszel; Sarah Veale; and Lillian Yob made key contributions to this report.
Why GAO Did This Study
A trusted, diverse workforce with the right expertise is critical to ensuring the IC achieves its mission of delivering distinctive, timely insights with clarity, objectivity, and independence. ODNI established the IC CAE program in 2005 to educate highly qualified students of diverse backgrounds and encourage them to pursue careers in the IC. ODNI and DIA have provided 29 colleges a total of 46 IC CAE grants through fiscal year 2018, totaling approximately $69 million through fiscal year 2021. This report evaluates the extent to which (1) DIA has planned and overseen the IC CAE program since 2011 and (2) selected IC elements are participating in the IC CAE program and have clearly defined roles. GAO reviewed IC CAE documentation related to DIA program planning and oversight from 2011 through 2019 and applied key practices of sound planning to evaluate DIA's management of the program. GAO interviewed selected IC elements and IC CAE college officials and reviewed related documentation to assess program planning and implementation.
What GAO Found
The Defense Intelligence Agency (DIA) has not sufficiently planned and overseen the Intelligence Community (IC) Centers for Academic Excellence (CAE) program—intended to create an increased pool of culturally and ethnically diverse job applicants for the IC—after the program transitioned from the Office of the Director of National Intelligence (ODNI) to DIA in 2011. Specifically, DIA has not applied most of GAO's key practices of sound planning in overseeing the program (see table), thus challenging decision makers' ability to determine the program's return on investment. While DIA has developed some short-term goals and plans for the program, it has not established results-oriented program goals or an overall strategy that details the agency resources and processes required to achieve the program's mission.
Similarly, DIA collected some data for the program and required colleges to provide reports on significant program accomplishments, but these data are not complete or reliable and have not been used to comprehensively evaluate the program's success. As oversight responsibility for the IC CAE program transitions back to ODNI in fiscal year 2020, ODNI will not be able to determine the extent to which the program has been successful in achieving its mission without establishing and documenting goals with targets and milestones; developing strategies to achieve those goals; and defining, collecting, and reporting comprehensive performance measures. Selected IC elements are participating in the IC CAE program to varying degrees, but DIA has not established a process for monitoring and assessing IC elements' participation or clearly defining IC elements' role in the program. The IC CAE program is a collaborative effort that allows IC elements to participate in college events, such as IC CAE recruitment events. However, not all IC elements participate in the program. As IC CAE program manager, DIA has engaged with IC elements in a variety of ways, but this engagement has not resulted in consistent participation among the IC elements. Moreover, program documentation has not clearly defined IC elements' roles and responsibilities for participation. Without a process for monitoring and assessing IC elements' participation and clearly defining roles and responsibilities, ODNI will neither be able to identify reasons for the lack of IC element engagement nor ensure that IC elements are taking advantage of the IC CAE program and its goal of creating a diverse pool of applicants for the IC. 
What GAO Recommends GAO is making seven recommendations to the Director of National Intelligence, including that ODNI establish and document results-oriented goals and strategies for the IC CAE program; define, collect, and report comprehensive performance measures; and clearly define the roles and responsibilities of the IC elements for participation in the program. ODNI concurred with the recommendations but did not identify steps it plans to take to implement them.
Background Black lung benefit payments include both cash assistance and medical care. Maximum cash assistance payments ranged from about $670 to $1,340 per month in 2019, depending on a beneficiary’s number of dependents. Miners receiving cash assistance are also eligible for medical treatment of their black lung-related conditions, which may include hospital and nursing care, rehabilitation services, and reimbursement for drug and equipment expenses, according to DOL documentation. DOL estimates that the average annual cost for medical care in fiscal year 2019 was approximately $8,225 per miner. During fiscal year 2019, about 25,700 beneficiaries received black lung benefits (see fig. 1). The number of beneficiaries has decreased from about 174,000 in 1982 as a result of declining coal mining employment and an aging beneficiary population, according to DOL. The number of black lung beneficiaries could increase in the near term, however, due to the rise in the occurrence of the disease in its most severe form, progressive massive fibrosis, particularly among Appalachian coal miners, according to the National Institute for Occupational Safety and Health (NIOSH). NIOSH reported that coal miners in central Appalachia are disproportionately affected; as many as 1 in 5 show evidence of black lung, the highest level recorded in 25 years. NIOSH has attributed the rise in occurrence of black lung to multiple factors, including increased exposure to silica. Black lung claims are processed by the Office of Workers’ Compensation Programs within DOL. Contested claims are adjudicated by DOL’s Office of Administrative Law Judges, which issues decisions that can be appealed to DOL’s Benefits Review Board. Claimants and mine operators may further appeal these DOL decisions to the federal courts. If an award is contested, claimants can receive interim benefits, generally paid from the Trust Fund, until their case is resolved, according to DOL. 
In fiscal year 2019, about 33 percent of black lung claims were approved, according to DOL data. Final awards are either funded by mine operators—who are identified as the responsible employers of claimants—or the Trust Fund, when responsible employers cannot be identified or do not pay. Of the approximately 25,700 beneficiaries receiving black lung benefits in 2019, 13,335 were paid from the Trust Fund; 7,985 were paid by responsible mine operators; and 4,380 were receiving interim benefits, according to DOL data. DOL officials told us that the most common reasons beneficiary claims are paid from the Trust Fund include operator insolvency and unclear miner employment history (see fig. 2). The operator responsible for the payment of benefits is generally the operator that most recently employed the miner. Black Lung Insurance Federal law generally requires coal mine operators to secure their black lung benefit liability. A self-insured coal mine operator assumes the financial responsibility for providing black lung benefits to its eligible employees by paying claims as they are incurred. Operators are allowed to self-insure if they meet certain DOL conditions. For instance, operators applying to self-insure must obtain collateral in the form of an indemnity bond, deposit or trust, or letter of credit in an amount deemed necessary and sufficient by DOL to secure their liability. Operators that do not self-insure are generally required to purchase coverage from commercial insurance companies, state workers’ compensation insurance funds, or other entities authorized under state law to insure workers’ compensation. DOL regulations require commercial insurers to report each policy and federal black lung endorsement issued, canceled, or renewed in a form determined by DOL. DOL accepts electronic reporting of this information from insurers via their respective rating bureaus. 
DOL retains this information—insured company name, address, federal employer identification number, and policy and endorsement data—so that DOL staff can later research claims to determine which operator and insurer may be liable. As we have noted in prior reports, insurance companies are regulated primarily by the states, with state law providing state regulators with the authority and funding to regulate insurance. State insurance regulation is designed to, among other things, help insurers remain solvent and able to pay claims when due. Effective insurer underwriting and risk management practices, such as reinsurance, serve a similar function. While insurer insolvency occurs infrequently, when it does occur, state insurance commissioners are typically appointed as receivers and supervise the rehabilitation or liquidation of these insurers, and state guaranty funds may assume liability for paying covered claims of insolvent insurers that have liquidated. Some Self-Insured Operator Bankruptcies Shifted Liability to the Trust Fund, but Commercial Insurance Coverage Can Help Limit Trust Fund Exposure Self-Insured Operators Transferred About $865 Million in Estimated Liability to the Trust Fund, More than Double DOL’s Previous Estimate Of the eight coal mine operator bankruptcies we identified, three resulted in a transfer of estimated benefit liability from the coal operator to the Trust Fund and five did not, according to DOL. Using Bloomberg data, we identified coal mine operators that filed for bankruptcy from 2014 through 2016. Figure 3 shows how many operators were self-insured or commercially-insured at the time of bankruptcy, and whether responsibility for benefits was shifted from the bankrupt operator to the Trust Fund. Three self-insured coal mine operator bankruptcies affected the Trust Fund. 
Specifically, the bankruptcies of Alpha Natural Resources (Alpha), James River Coal (James River), and Patriot Coal (Patriot) resulted in a transfer of benefit liability to the Trust Fund of an estimated $865 million, according to DOL. DOL officials said that the amount of collateral they required from these three operators to self-insure was inadequate to fully cover their estimated benefit liability. When this occurs, benefit liability in excess of the collateral can be transferred to the Trust Fund. For example, the collateral DOL required from Alpha was about $12 million, and approximately $494 million of estimated benefit liability transferred to the Trust Fund, according to DOL’s estimate (see table 1). DOL estimates for how these three operator bankruptcies will affect the Trust Fund have more than doubled from what DOL had previously reported. In June 2019, we reported that DOL estimated that between $313 million and $325 million in benefit liabilities would transfer to the Trust Fund as a result of these bankruptcies. In January 2020, however, DOL provided updated estimates stating that $865 million in benefit liabilities would transfer to the Trust Fund as a result of these bankruptcies. According to DOL, its estimates increased to account for higher black lung benefit award rates that occurred from fiscal years 2016 through 2019; higher medical treatment cost inflation in recent years; and different discount rate assumptions. Additionally, DOL’s prior estimate for the Patriot bankruptcy did not account for future claims and the effect of those claims on the Trust Fund. The three other self-insured coal mine operator bankruptcies we identified did not affect the Trust Fund. Specifically, Arch Coal, Peabody Energy, and Walter Energy were also self-insured operators, but DOL officials said that their federal black lung benefit liabilities were assumed by a reorganized company or by a purchaser, and therefore did not transfer to the Trust Fund. 
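To see why a changed discount-rate assumption alone can move a liability estimate substantially, consider a simplified present-value calculation. This is a hypothetical sketch, not DOL's actuarial model; the payment stream, horizon, and rates below are illustrative only.

```python
# Hypothetical illustration: present value of a level stream of future
# benefit payments under two discount-rate assumptions. Not DOL's model.

def present_value(annual_payment, years, discount_rate):
    """Present value of `annual_payment` paid at the end of each year."""
    return sum(annual_payment / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# $30 million per year for 30 years:
pv_high = present_value(30e6, 30, 0.06)  # higher-rate assumption
pv_low = present_value(30e6, 30, 0.03)   # lower-rate assumption

# Lowering the assumed discount rate raises the liability estimate,
# even though the underlying payment stream is unchanged.
print(f"at 6%: ${pv_high / 1e6:.0f}M, at 3%: ${pv_low / 1e6:.0f}M")
```

Higher assumed award rates and medical cost inflation work in the same direction by enlarging the projected payment stream itself.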
DOL officials said that they take three key actions, as appropriate, to protect the financial interests of the Trust Fund during self-insured operator bankruptcies. 1. DOL officials said that they file a claim in every case with the bankruptcy court for the reimbursement of an operator’s full estimated federal black lung benefit liability. 2. If an operator plans to reorganize or if it is acquired by a purchaser, DOL officials said that they negotiate with the company or the purchaser, as appropriate, to help ensure benefit responsibility will be “passed through” to a reorganized operator or purchaser, rather than be discharged and become the responsibility of the Trust Fund. 3. If benefit liabilities are not “passed-through” to an operator, DOL officials said that they seek settlement agreements, whereby the Trust Fund receives an allowed general unsecured claim in an amount based on an operator’s estimated benefit liability. DOL officials said that during the bankruptcy of James River they negotiated a settlement agreement providing DOL with a general unsecured claim in an amount commensurate with its estimate of the operator’s benefit liability at the time of bankruptcy. However, these officials said that given the low priority under bankruptcy law for their general unsecured claim, the payout they received was only about $400,000, which was just a small portion of the estimated benefit liability that transferred to the Trust Fund. DOL officials said that during the bankruptcy of Alpha they negotiated both a “pass through” and a settlement agreement in which certain liabilities would be transferred to the Trust Fund, while other liabilities would be retained by Alpha. DOL officials said that they received a payout from Alpha of $7.4 million, although $494 million in estimated benefit liability transferred to the Trust Fund. Further, as a condition of the agreement, DOL officials said that they agreed to let Alpha self-insure after it emerged from bankruptcy. 
Since 2016, several other self-insured operators have also filed for bankruptcy, according to DOL officials, including Cambrian Coal, Cloud Peak Energy, Murray Energy, and Westmoreland Coal. DOL officials said that $17.4 million in estimated black lung benefit liability will transfer to the Trust Fund as a result of Westmoreland Coal’s bankruptcy. Given the uncertainty of the bankruptcy process in terms of whether liabilities will transfer to the Trust Fund, however, DOL officials said that they could not speculate on how the other bankruptcies may affect the Trust Fund. State Insurance Regulation and Insurer Practices Help to Protect the Trust Fund from Assuming Responsibility for Paying Benefits of Commercially-Insured Operators Insurance contracts or policies to secure operators’ benefit liabilities are required by law to include a provision that insolvency or bankruptcy of an operator does not release the insurer from the obligation to make benefit payments. As previously discussed, state insurance regulation, insurer underwriting and risk management practices, and state guaranty funds also help to protect the Trust Fund from having to assume responsibility for paying black lung benefits on behalf of bankrupt coal operators. Thus, because the two operators we identified that filed for bankruptcy between 2014 and 2016—Energy Future Holdings and Xinergy Ltd—were commercially insured, their bankruptcies did not affect the Trust Fund, according to DOL (see fig. 3). State insurance commissioners monitor the financial health of insurers, including performing periodic examinations of insurer financial statements. Further, rating agencies, such as Standard & Poor’s, Moody’s, and AM Best, issue insurer financial strength ratings, which represent the agencies’ opinions on insurers’ financial strength and ability to pay policy and contract obligations. 
Eight of the nine insurers that issued approximately 90 percent of the workers’ compensation policies with federal black lung coverage from 2016 through 2018, according to our review of DOL data, had at least an “A-” financial strength rating from AM Best (with the one remaining being a state insurer that was not rated). In deciding whether to provide federal black lung coverage, insurers we interviewed said they consider an operator’s historical black lung claim losses, financial condition, and mine location among other factors. However, insurance company officials identified various challenges in writing and pricing black lung coverage that produces an appropriate amount of premiums to cover expected losses. The challenges cited by these officials included the long latency period of black lung disease; changes in law regarding benefit eligibility and how the disease is defined; the ability of miners to refile claims indefinitely; and the inability of insurers and operators to settle claims. One official noted that there is much risk and little profit in black lung coverage. Insurance companies can use reinsurance to protect themselves from catastrophic losses that could threaten their solvency and ability to pay claims, and to reduce wide fluctuations in their annual losses. For example, workers’ compensation claims can take years to fully develop after premiums have been set, which in turn can adversely affect an insurer’s financial position if premiums have underestimated actual claims. Insurance company officials said that they reinsure their workers’ compensation coverage, but some said that their reinsurance policies either explicitly excluded occupational disease claims, including black lung, or cover black lung but have conditions and loss thresholds that would generally result in the exclusion of such claims. 
However, reinsurance, even if it does not explicitly cover federal black lung claims, can help manage the risk of workers’ compensation losses and losses in other lines of insurance that an insurer writes, thereby indirectly helping to ensure that the insurer can pay all types of claims, including federal black lung. If an insurer becomes insolvent, state guaranty funds reduce the potential for the Trust Fund to assume responsibility for paying claims. States have different rules for guaranty fund benefit coverage and limits. In the states we reviewed, state guaranty funds generally pay federal black lung benefits, although there may be certain limitations on the claims they will pay. For example, in West Virginia, there is no maximum claim limit that the state guaranty fund will pay on standard workers’ compensation claims; but in Kentucky, a state guaranty fund official told us that, in the guaranty fund’s opinion, state law limits federal black lung claims to $300,000. Also, a guaranty fund could reject a federal black lung claim, which could result in the Trust Fund having to assume responsibility for paying the claim. An official from one state guaranty fund that maintained data on rejected black lung claims said that the most common reason for rejection is that claims are filed after the date set by the bankruptcy court for receiving claims. DOL officials said it is very uncommon for the Trust Fund to assume responsibility for federal black lung claims of insolvent insurers. However, DOL does not maintain data to readily determine the extent to which this actually occurs, as discussed later in this report. 
DOL’s Limited Oversight Has Exposed the Trust Fund to Financial Risk, and Its New Self-Insurance Process Lacks Enforcement Procedures In overseeing coal mine operator self-insurance in the past, DOL did not estimate future benefit liability when setting collateral; regularly review operators to monitor their changing financial conditions; or always use enforcement tools available to protect the financial interests of the Trust Fund, such as by revoking an operator’s ability to self-insure, if warranted. In July 2019, DOL began implementing a new self-insurance process that, if implemented effectively, should help to address some of these past deficiencies. Specifically, DOL plans to consider an operator’s future benefit liability when setting collateral and to review self-insured operators more frequently. However, the new process still lacks procedures for self-insurance renewals and coal operator appeals, which could hinder DOL from taking enforcement actions to protect the Trust Fund as needed. Additionally, DOL does not monitor whether operators that do not self-insure maintain adequate and continuous commercial insurance coverage as required by law. DOL Did Not Estimate Future Benefit Claims When Setting Collateral and Regularly Review Self-Insured Operators Agency regulations require DOL to obtain collateral from coal mine operators applying to self-insure in an amount deemed by DOL to be necessary and sufficient to secure the payment of the operators’ liability. To determine collateral amounts under the former process, agency procedures stated that DOL first assess an operator’s net worth by reviewing, among other factors, the operator’s audited financial statement and black lung claims information. DOL then determined the amount of collateral equal to 3, 5, or 10 years of the operator’s annual black lung benefit payments made at the time of the operator’s self-insurance application, depending on its net worth. 
Specifically, if net worth was $1 billion or greater, agency procedures stated that DOL set collateral equal to 3 years of benefit payments. If net worth ranged from $500 million to $1 billion, DOL set collateral equal to 5 years of benefit payments. If net worth ranged from $10 million to $500 million, DOL set collateral equal to 10 years of benefit payments. Agency procedures did not permit operators with net worth less than $10 million to self-insure. DOL’s former process for determining collateral did not routinely consider potential future claims for which an operator could be responsible. DOL had periodically reauthorized coal operators to self-insure, by reviewing an operator’s most recent audited financial statement and claims information, among other things. DOL prepared memos documenting these reviews and communicated with coal operators about whether their financial circumstances warranted increasing or decreasing their collateral. Estimating future costs based on sound actuarial practice is essential to the integrity of the insurance and the risk financing system and is key to fulfilling the promises embodied in insurance contracts, according to Actuarial Standards Board standards. Additionally, in three of the four states we contacted, state insurance officials said that they used actuarial methods to assess an operator’s future estimated benefit liability when considering how much collateral should be required to self-insure. The remaining state, Wyoming, did not allow coal mine operators to self-insure. Table 2 provides information on the 22 operators that were self-insured under DOL’s former process, including the date of each operator’s last DOL reauthorization; the amount of DOL-required collateral; and the operator’s estimated black lung benefit liability, if available. 
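The former tiered rule described above reduces to a few lines of code. This sketch is illustrative; the function name is ours, but the net-worth tiers and year multipliers follow the agency procedures described above.

```python
# Sketch of DOL's former collateral rule: collateral equaled 3, 5, or 10
# years of the operator's annual benefit payments, tiered by net worth.

def former_collateral(net_worth, annual_benefit_payments):
    """Return required collateral, or None if the operator may not self-insure."""
    if net_worth >= 1_000_000_000:    # $1 billion or greater: 3 years
        years = 3
    elif net_worth >= 500_000_000:    # $500 million to $1 billion: 5 years
        years = 5
    elif net_worth >= 10_000_000:     # $10 million to $500 million: 10 years
        years = 10
    else:                             # under $10 million: not permitted
        return None
    return years * annual_benefit_payments

# An operator with $600 million net worth paying $2 million per year:
print(former_collateral(600e6, 2e6))  # 5 years of payments, i.e., $10 million
```

Note that the rule keys entirely off payments being made at the time of application, which is why it made no allowance for claims not yet filed, the gap the new process is intended to close.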
Agency regulations state that DOL may adjust the amount of collateral required from self-insured operators when experience or changed conditions so warrant, but DOL did not regularly monitor these operators to reauthorize their ability to self-insure. In reviewing DOL’s most recent reauthorization memos for each of the 22 self-insured operators, we found that while some of these operators had been reauthorized more recently, others had not been reauthorized by DOL in decades. One operator in particular had not been reauthorized by DOL since 1988. DOL officials stated that from 2009 to 2012, six employees handled coal operator reauthorizations and associated work actions. Due to attrition, however, this number dropped at times to three employees, according to DOL officials. Additionally, DOL had no written procedures that specified how often reauthorizations should occur after an operator’s initial 18-month reauthorization. In contrast, in two of the four states we contacted, state insurance officials were required to review self-insured employers at least annually. DOL Did Not Always Use Enforcement Tools to Protect the Trust Fund Revoking an operator’s ability to self-insure, fining mine operators for operating without insurance, and placing liens on operator assets are tools DOL has available to mitigate financial losses to the Trust Fund. Based on our review of DOL documentation, however, we found instances when DOL did not use these tools to protect the Trust Fund, or was hindered from doing so because of an operator’s ongoing appeal or bankruptcy. In September 2001, DOL required $5 million in additional collateral from James River, which would have increased its collateral from $0.4 million to $5.4 million. Although DOL did not receive the additional collateral, it did not revoke the operator’s authority to self-insure, which is a potential option under agency regulations. 
Further, DOL had not reauthorized James River at any point from August 2001 until it filed for bankruptcy in April 2014. If DOL had revoked James River’s ability to self-insure, it could have potentially prevented the Trust Fund from being responsible for claims based on a miner’s employment from 2001 through 2016, when James River liquidated. Additionally, if the operator had been unable to obtain commercial insurance, DOL could have potentially fined the operator for each day it operated without insurance. Instead, DOL took no action during these years and estimated benefit liability of $141 million was shifted to the Trust Fund, according to DOL. DOL officials stated that they do not have records explaining why James River did not provide the additional collateral or why they did not revoke its authority to self-insure. In August 2014, DOL required $65 million in collateral from Patriot, increasing its collateral from $15 million to $80 million. Patriot appealed this decision and, in the 8 months that followed before Patriot filed for bankruptcy in May 2015, DOL did not obtain additional collateral, or revoke Patriot’s ability to self-insure because the appeal was still pending. DOL officials said they would not typically revoke an operator’s authority to self-insure during an ongoing appeal. As a result, DOL was hindered from using this enforcement tool. Liens on operator assets can be an effective tool to protect the Trust Fund if an operator defaults on its benefit liabilities, but DOL officials said that they are hindered from using this tool if an operator files for bankruptcy. DOL can place a lien on a coal operator’s assets under federal law if they refuse the demand to pay the black lung benefit payments for which they are liable. In the event of bankruptcy or insolvency, federal law states that the lien imposed shall be treated in the same manner as a lien for taxes due and owing to the United States under certain laws. 
However, DOL officials said that operators rarely stop paying benefits until after they file for bankruptcy. Once a bankruptcy occurs, DOL officials said that they are generally prevented by the court from placing a lien and taking an operator’s assets in lieu of payment of current and future benefit liabilities. Under bankruptcy law, DOL officials said that they have no special status over other creditors with outstanding financial claims. Instead, DOL officials said that obtaining sufficient collateral is a better way to protect the Trust Fund. DOL Has Implemented a New Self-Insurance Process, but It Lacks Procedures to Help Ensure Enforcement Actions In July 2019, DOL began implementing a new process for coal mine operator self-insurance that should help to address some past deficiencies if implemented effectively. Specifically, DOL is to consider an operator’s future benefit liability when setting collateral and plans to more frequently review self-insured operators (see text boxes). Under the new process, DOL officials plan to assess the risk of operator bankruptcy using various financial metrics related to profitability and solvency. As a result, DOL officials said that the amount of collateral they will require from operators to self-insure going forward will be based on both an estimate of an operator’s current and future black lung liability and the risk of default due to insolvency. As of October 2019, DOL officials said that most self-insured operators had submitted their application and supporting documentation and that they were reviewing this information to decide whether these operators should continue to be self-insured. 
DOL’s New Self-Insurance Process Will Include Estimates of Future Benefit Liability Coal mine operators applying to DOL to self-insure will be required to submit: a completed application; a certified consolidated financial statement for each of the 3 years prior to the application; recent black lung claims information; and a certified actuarial report on the operator’s existing and future black lung benefit liabilities. DOL plans to use the information submitted by coal mine operators to assess the insolvency risk of each operator using various financial metrics related to profitability and solvency. Depending on the results of its analysis, DOL plans to categorize the risk level of each applicant as low, medium, or high. DOL will then set the amount of collateral required to self-insure by linking the operator’s risk category to a corresponding percentage of the operator’s actuarial estimated benefit liability. DOL policies state that DOL would require a high-risk operator to secure with collateral 90 percent of estimated benefit liability, a medium-risk operator to secure 45 percent, and a low-risk operator to secure 15 percent. However, in February 2020, DOL officials said they plan to revise these percentages to 100 percent, 85 percent, and 70 percent for high-risk, medium-risk, and low-risk operators, respectively. DOL’s New Self-Insurance Process Will Require More Frequent Coal Mine Operator Reviews Coal mine operators that are already authorized to self-insure will be required to submit: a self-insurance renewal application (annually); a financial summary (quarterly); a certified consolidated financial statement (annually); black lung claims information (annually); and an actuarial estimate of benefit liability (every 3 years). DOL plans to use the information self-insured operators submit to update its insolvency risk analysis. If an operator’s risk category changes (e.g., from low- to medium-risk), DOL plans to send a form to the operator requiring an additional amount or type of collateral. Upon receiving the completed form, and proof that the collateral has been obtained, DOL stated that it will notify the operator that its authority to self-insure has been reauthorized. DOL’s new self-insurance process made important changes, but overlooked other key internal control improvements that are needed to protect the financial interests of the Trust Fund. DOL’s new requirements for setting collateral and for the annual and quarterly review of self-insured operators are key components of internal controls, which call for agency management to implement control activities through policy. However, DOL’s new self-insurance process lacks procedures that could help to prevent past oversight deficiencies from recurring. Among other things, DOL’s procedures do not specify (1) the duration of an operator’s self-insurance authority, (2) the time frames for submitting renewal applications and supporting documentation, and (3) the conditions under which an operator’s self-insurance authority would not be renewed. Without such procedures, DOL has no basis to take enforcement action should an operator not submit its self-insurance renewal application and supporting documentation. DOL staff are also hindered from taking enforcement action during an operator’s ongoing appeal, as previously mentioned. DOL policies state that an operator may request reconsideration if its self-insurance application has been denied or if it believes the collateral required by DOL is too high to secure its benefit liabilities. However, DOL lacks procedures that specify, among other things, the length of time that operators have to submit supporting information. 
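The new collateral-setting step amounts to mapping a risk category to a fixed share of the actuarially estimated benefit liability. A minimal sketch using the revised February 2020 percentages; the names and example figures are illustrative, not DOL's systems.

```python
# Sketch of DOL's new risk-based collateral rule (February 2020 revision):
# collateral is a fixed share of the actuarial benefit-liability estimate.

COLLATERAL_SHARE = {"high": 1.00, "medium": 0.85, "low": 0.70}

def required_collateral(risk_category, estimated_liability):
    """Collateral required to self-insure under the new process (sketch)."""
    return COLLATERAL_SHARE[risk_category] * estimated_liability

# A medium-risk operator with a $200 million actuarial liability estimate:
print(required_collateral("medium", 200e6))  # 85 percent, i.e., $170 million
```

Compared with the former rule, the base here is the full estimated liability (current and future claims) rather than a few years of current payments, so even a low-risk operator secures most of its estimated exposure.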
Further, DOL does not specify a goal for how much time its appeals decisions should take. For example, in October 2015, DOL recommended revoking Murray Energy’s (Murray) authority to self-insure due to deteriorating financial conditions. Murray appealed this decision, and DOL officials said they postponed responding to the appeal until their new self-insurance process was implemented so that they could evaluate Murray under the new process along with the other self-insured operators. However, Murray filed for bankruptcy in October 2019, and DOL had not revoked its authority to self-insure or requested additional collateral because Murray’s appeal was still pending and DOL was still evaluating how much collateral it would require from the operator under its new self-insurance process. DOL Does Not Monitor Whether Coal Mine Operators Maintain Commercial Insurance Coverage DOL does not monitor whether coal mine operators that do not self-insure (and thus must commercially insure their federal black lung liabilities) maintain adequate and continuous coverage as required by law. DOL previously monitored operators' compliance with the program's insurance requirements by annually sending letters to a selection of operators seeking confirmation that they had maintained adequate coverage, but discontinued the process once the agency began receiving NCCI policy data. To use the policy data to identify operators that have not maintained coverage, DOL would, as a starting point, have to maintain a record of all employers that operate a coal mine. However, DOL officials explained that they do not currently maintain such a record. In the absence of effective DOL monitoring of operator compliance, we evaluated the potential risk that uninsured operators could pose to the Trust Fund. 
Specifically, in examining the 13 largest coal operators that were not approved to self-insure their federal black lung liabilities and, therefore, had to obtain commercial coverage, we found that some insurers erred in reporting endorsements and, in one instance, an operator did not have adequate coverage. We found six operators (parent or subsidiary) that were not insured for the entire 3-year period from 2016 through 2018, according to our review of DOL data. When we discussed our findings with DOL, agency officials had to research each operator individually and in some cases contact the operator or their insurer to find out whether they had been covered. DOL concluded that these entities were insured. However, the insurers had not properly reported the federal black lung endorsement on new policies or subsequent renewals, in addition to other reporting issues. One of these six operators also had, inadvertently, not maintained adequate commercial coverage for its mining operations in Texas, and had not self-insured those operations. In this instance, the operator obtained an excess loss policy that only pays claims once they exceed a high threshold and, therefore, is not sufficient by itself to secure the payment of the operator’s benefit liabilities. DOL data do not include information on excess loss policies and, while the data NCCI provides on standard workers’ compensation policies with federal black lung endorsements list operators’ addresses, they do not provide the specific states for which endorsements apply. Designing processes to achieve agency objectives and respond to risks is a principle of effective internal controls. Without a process to monitor operator compliance with program insurance requirements, DOL risks not identifying a lapse or cancellation of operator coverage. This could result in the Trust Fund having to assume responsibility for paying benefits that would otherwise have been paid by an insurer. 
DOL officials said the Trust Fund infrequently pays claims on behalf of uninsured operators due to the civil penalties that it can impose on operators and certain company officers. These officials also said that operators that do not maintain insurance coverage typically employ few miners and are out of business by the time a claim is filed and, thus, cannot be held liable for benefit claims. However, DOL officials acknowledged that they do not track how often claims are paid by the Trust Fund on behalf of uninsured operators that should have been insured. We attempted to examine the extent to which claims were paid by the Trust Fund in fiscal year 2018 on behalf of uninsured operators that should have been insured. We found that DOL’s black lung claimant and payment system does not identify whether potentially responsible operators should have had commercial insurance coverage. The data on responsible operators and insurers, as well as the basis on which an operator was determined to be responsible, were not consistently recorded. DOL officials said that the data fields that identify responsible operators and their insurers should reflect the information collected from DOL’s initial determination. DOL officials said that in some cases, after an adjudication decision determined the Trust Fund was responsible for paying benefits, claim examiners may have deleted the previously recorded responsible operator and insurer data, creating potential inconsistencies in the data. DOL officials acknowledged that DOL’s processes and guidance for recording information on responsible operators and the basis for those decisions resulted in inconsistent and potentially inaccurate recording of claim and benefit data. As a result, DOL issued preliminary guidance in February 2019 to field supervisors and claims examiners. However, the revised guidance does not address how to identify potentially responsible operators that should have had commercial coverage but did not.
Monitoring agency internal control systems and evaluating the results of those activities is a principle of effective internal control. Without complete and consistently recorded information on potentially responsible operators and insurers, and the basis for determination decisions, DOL is not able to effectively evaluate the financial impact that claims paid on behalf of uninsured operators have on the Trust Fund. Determining the financial impact of these claims would be important to DOL’s evaluation of the effectiveness of a process for monitoring operator compliance with black lung program insurance requirements.

Conclusions

The Black Lung Disability Trust Fund faces financial challenges, and DOL’s limited oversight of coal mine operator insurance has further strained Trust Fund finances by allowing operator liabilities to transfer to the federal government. DOL’s new self-insurance process, if implemented effectively, may help to address past deficiencies in setting collateral and reviewing self-insured operators. However, DOL still lacks procedures on self-insurance renewals and coal operator appeals that could help ensure that DOL staff take enforcement actions when needed. Establishing clear self-insurance renewal procedures could better position DOL to take action to protect the Trust Fund should an operator not submit its renewal application and supporting documentation, or not comply with DOL’s collateral requirements. Procedures that identify time lines for self-insured operators to submit documentation supporting their appeals, and that set a goal for how long DOL should take to make appeals decisions, could help ensure that DOL is able to revoke an operator’s ability to self-insure when warranted. Commercially-insured federal black lung liabilities can limit the Trust Fund’s exposure to financial risk, but only if operators maintain adequate and continuous coverage as required.
Currently, DOL does not identify lapses or cancellations in coverage among commercially-insured operators until after a claim is filed. Establishing a process to identify lapses and cancellations in coverage before claims are filed could help prevent the Trust Fund from becoming responsible for these claims.

Recommendations for Executive Action

We are making the following three recommendations to the Department of Labor:

The Director of the Office of Workers’ Compensation Programs should develop and implement procedures for coal mine operator self-insurance renewal that clarify how long an operator is authorized to self-insure; when an operator must submit its renewal application and supporting documentation; and the conditions under which an operator’s self-insurance authority would not be renewed. (Recommendation 1)

The Director of the Office of Workers’ Compensation Programs should develop and implement procedures for self-insured coal mine operator appeals that identify time lines for self-insured operators to submit documentation supporting their appeals and that identify a goal for how much time DOL should take to make appeals decisions. (Recommendation 2)

The Director of the Office of Workers’ Compensation Programs should develop and implement a process to monitor operator compliance with commercial insurance requirements and periodically evaluate the effectiveness of this process. This process should be designed to detect errors and omissions in reporting insurance coverage using complete, accurate, and consistently recorded data. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Labor (DOL) for review and comment. DOL’s written comments are reproduced in appendix I. DOL also provided technical comments and clarifications, which we have incorporated as appropriate.
DOL agreed with our three recommendations and said it is acting to implement them to further improve its oversight of coal mine operator insurance. DOL acknowledged the importance of improving oversight of coal mine operator insurance and commented that it made major oversight improvements in recent years. DOL commented that it began developing a new coal mine operator self-insurance process in 2015, before GAO began its review, and that it formally approved this process in 2017. In July 2019, DOL stated that its new process was finalized when the Office of Management and Budget (OMB) approved the forms to collect financial and other information from coal mine operators. DOL stated that it is now reviewing information obtained from coal mine operators, and expects to set the amount of collateral required to self-insure under its new process in the first half of 2020. We commend DOL’s efforts to address the deficiencies of its past self-insurance process. However, we remain concerned about continuing coal operator bankruptcies and the looming unsecured black lung benefit liabilities that still threaten the Trust Fund. DOL commented that adopting GAO’s recommendations would further improve its oversight of coal mine operator insurance going forward.
Specifically, DOL reported that it will (1) ensure letters granting or renewing self-insurance authority inform operators that their authorization expires in one year and that they must submit renewal information three months in advance of the expiration date; (2) ensure letters denying self-insurance inform operators that they have a 30-day appeal period (limited to one extension) and that DOL has set a goal of resolving all appeals within 90 days of the denial letter; and (3) modify existing computer systems to identify lapses or cancellations of commercial insurance coverage, and require operators identified as having lapsed or cancelled coverage to obtain coverage or provide proof of coverage within 30 days.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Labor, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Cindy Brown Barnes at (202) 512-7215 or brownbarnesc@gao.gov, or Alicia Puente Cackley at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the U.S. Department of Labor

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contacts named above, Blake Ainsworth (Assistant Director), Patrick Ward (Assistant Director), Justin Dunleavy (Analyst-in-Charge), Monika Gomez, Courtney LaFountain, Rosemary Torres Lerma, and Scott McNulty made key contributions to this report.
Also contributing to this report were James Bennett, Nancy Cosentino, Caitlin Cusati, John Forrester, Alex Galuten, Ellie Klein, Emei Li, Corinna Nicolaou, Almeta Spencer, Curtia Taylor, and Shana Wallace.
Why GAO Did This Study

In May 2018, GAO reported that the Trust Fund, which pays benefits to certain coal miners, faced financial challenges. The Trust Fund has borrowed from the U.S. Treasury's general fund almost every year since 1979 to make needed expenditures. GAO's June 2019 testimony included preliminary observations that coal operator bankruptcies were further straining Trust Fund finances because, in some cases, benefit responsibility was transferred to the Trust Fund. This report examines (1) how coal mine operator bankruptcies have affected the Trust Fund, and (2) how DOL managed coal mine operator insurance to limit financial risk to the Trust Fund. GAO identified coal operators that filed for bankruptcy from 2014 through 2016 using Bloomberg data. GAO selected these years, in part, because bankruptcies were more likely to be resolved so that their effects on the Trust Fund could be assessed. GAO analyzed information on commercially-insured and self-insured coal operators, and examined workers' compensation insurance practices in four of the nation's top five coal producing states. GAO also interviewed DOL officials, coal mine operators, and insurance company representatives, among others.

What GAO Found

Coal mine operator bankruptcies have led to the transfer of about $865 million in estimated benefit responsibility to the federal government's Black Lung Disability Trust Fund (Trust Fund), according to DOL estimates. The Trust Fund pays benefits when no responsible operator is identified, or when the liable operator does not pay. GAO previously testified in June 2019 that it had identified three bankrupt, self-insured operators for which benefit responsibility was transferred to the Trust Fund. Since that time, DOL's estimate of the transferred benefit responsibility has grown—from a prior range of $313 million to $325 million to the more recent $865 million estimate provided to GAO in January 2020.
According to DOL, this escalation was due, in part, to recent increases in black lung benefit award rates and higher medical treatment costs, and to an underestimate of Patriot Coal's future benefit claims. DOL's limited oversight of coal mine operator insurance has exposed the Trust Fund to financial risk, though recent changes, if implemented effectively, can help address these risks. In overseeing self-insurance in the past, DOL did not estimate future benefit liability when setting the amount of collateral required to self-insure; regularly review operators to assess whether the required amount of collateral should change; or always take action to protect the Trust Fund by revoking an operator's ability to self-insure as appropriate. In July 2019, DOL began implementing a new self-insurance process that could help address past deficiencies in estimating collateral and regularly reviewing self-insured operators. However, DOL's new process still lacks procedures for its planned annual renewal of self-insured operators and for resolving coal operator appeals should operators dispute DOL collateral requirements. This could hinder DOL from revoking an operator's ability to self-insure should the operator not comply with DOL requirements. Further, DOL does not monitor operators that do not self-insure to ensure they maintain adequate and continuous commercial coverage. As a result, the Trust Fund may in some instances assume responsibility for paying benefits that otherwise would have been paid by an insurer.

What GAO Recommends

GAO is making three recommendations to DOL to establish procedures for self-insurance renewals and coal operator appeals, and to develop a process to monitor whether commercially-insured operators maintain adequate and continuous coverage. DOL agreed with GAO's recommendations.
Background

Types and Uses of Virtual Currency

While there is no statutory definition for virtual currency, IRS guidance has described virtual currency as a digital representation of value that functions as a medium of exchange, a unit of account, or a store of value. Some virtual currencies can be used to buy real goods and services and can be exchanged for U.S. dollars or other currencies. A cryptocurrency is a type of virtual currency that employs encryption technology and operates on distributed ledger technology, such as blockchain. Distributed ledger technology allows users across a computer network to verify the validity of transactions without necessarily relying on a central authority. For example, a blockchain is made up of digital information (blocks) recorded in a public or private database in the format of a distributed ledger (chain). The ledger permanently records, in a chain of cryptographically secured blocks, the history of transactions that take place among the participants in the network. For the purposes of this report, we use the term virtual currency as a broad term that includes both cryptocurrencies, which use distributed ledger technology, and digital units of exchange that do not use that technology but still meet IRS’s definition of a convertible virtual currency, as defined in Notice 2014-21. Figure 1 shows a simplified representation of how distributed ledger technology is used to circulate virtual currencies. Bitcoin, which emerged in 2009, is the first and most widely circulated blockchain-based cryptocurrency. Bitcoins are created through a process called mining. Bitcoin miners download software to solve complex equations to verify the validity of transactions taking place on the network, and the first miner to solve a problem is awarded coins in return. Once a problem is solved, the transactions are added as a new block to the distributed ledger.
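The chained, cryptographically secured structure described above can be illustrated with a minimal sketch. This is not the actual Bitcoin protocol, which additionally involves proof-of-work difficulty and a peer-to-peer network; it shows only how each block commits to the previous block's hash, so that altering an earlier transaction breaks the chain.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a block whose hash commits to its transactions and to the
    previous block's hash, linking the blocks into a chain."""
    header = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return {"prev": prev_hash, "txs": transactions,
            "hash": hashlib.sha256(header.encode()).hexdigest()}

def chain_is_valid(chain):
    """Tampering with any block changes its hash, which no longer matches
    either its stored hash or the next block's 'prev' link."""
    for i, block in enumerate(chain):
        header = json.dumps({"prev": block["prev"], "txs": block["txs"]}, sort_keys=True)
        if hashlib.sha256(header.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

In a distributed ledger, every participant can run a validity check like `chain_is_valid` independently, which is what allows the network to verify transactions without a central authority.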
Users transact in virtual currencies electronically through a network, and may use virtual wallets to manage their virtual currency. Some virtual currencies can be used as investments and to purchase goods and services in the real economy. For example, some retailers accept virtual currency as a form of payment. Virtual currency exchanges provide a platform where users can transact in different types of virtual currencies or exchange them for government-issued currencies or other virtual currencies.

Estimates of the Size of the Virtual Currency Market

The fair market value of some virtual currencies has changed dramatically over time. For example, according to one index, the average value of one bitcoin was just under $20,000 in mid-December 2017. By early February 2018, one bitcoin was valued at about $7,000, before falling below $4,000 in December 2018, and again rising to over $9,000 in November 2019. The size of the virtual currency market is unknown due to limitations in available data. For example, one recent analysis concluded that a widely cited source for data about bitcoin trading included exaggerated data that gave an inflated impression of the size of the actual market. Nonetheless, there are data that may provide some context for the size of this market:

As of April 2019, 10 major virtual currency exchanges collectively handled an average daily trading volume in bitcoin of more than $500 million, according to Bitwise. For comparison, the Federal Reserve Banks’ Automated Clearing House (a traditional payment processor) processed $103 billion in payment transactions on average per day in 2018.

According to one index, the total market capitalization of bitcoin, the most widely circulated virtual currency, is estimated to have ranged between $60 billion and $225 billion between December 2018 and October 2019.

As of November 2019, Coinbase, a large U.S.-based cryptocurrency exchange, reported a user base of more than 30 million.
According to economists at the Federal Reserve Bank of New York, a 2018 survey they conducted found that 85 percent of respondents had heard of cryptocurrencies, 5 percent currently or previously owned cryptocurrency, and 15 percent reported that they were considering buying cryptocurrency.

Regulation of Virtual Currency

Federal agencies, including CFTC, FinCEN, and SEC, have jurisdiction over various aspects of virtual currency markets and market participants. In May 2014, we reported on the federal financial regulatory and law enforcement agency responsibilities related to the use of virtual currencies and their associated challenges. These challenges include money laundering, transfers of funds across borders, and consumer and investor protection issues. We also reported on the regulatory complexity for virtual currencies and the approaches that federal and state regulators have taken to their regulation and oversight. For example, CFTC has taken the position that bitcoin and ether, another virtual currency, meet the definition of a commodity provided in the Commodity Exchange Act. SEC has determined that some virtual currencies may be designated as securities, based on the characteristics of how they are offered and sold. FinCEN determined that certain virtual currency businesses would be money transmitters under the Bank Secrecy Act, subject to regulation as money services businesses.

Tax Treatment of Virtual Currency

According to IRS guidance, convertible virtual currencies—which have an equivalent value in real currency or act as a substitute for real currency—are to be treated as property for tax purposes. Among other things, this classification means that income, including gains, from virtual currency transactions is reportable on taxpayers’ income tax returns. Therefore, a payment for goods or services made using virtual currency may be subject to tax to the same extent as any other payment made in property.
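The property treatment just described implies a simple computation at each disposition: the gain or loss is the amount realized minus the taxpayer's basis in the units disposed of. The sketch below illustrates the arithmetic only; the function name and dollar figures are hypothetical, and it deliberately ignores fees, lot-selection methods, and the capital-versus-ordinary character of the result.

```python
def gain_or_loss(units_sold, sale_price_per_unit, basis_per_unit):
    """Gain or loss on a disposition of virtual currency treated as
    property: amount realized minus basis (simplified illustration)."""
    amount_realized = units_sold * sale_price_per_unit
    basis = units_sold * basis_per_unit
    return amount_realized - basis

# Hypothetical example: 2 units bought at $4,000 each, sold at $9,000 each.
print(gain_or_loss(2, 9000, 4000))   # a $10,000 gain
# Hypothetical example: 1 unit bought at $7,000, sold at $3,500.
print(gain_or_loss(1, 3500, 7000))   # a $3,500 loss
```

This is why, as the report notes below, taxpayers must track the fair market value of virtual currency at the time it is obtained: without the basis figure, the gain or loss cannot be computed.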
Figure 2 illustrates examples of how virtual currency transactions can affect taxes. Taxpayers using virtual currency must keep track of transaction-level information, such as the fair market value of the virtual currency at the time it was obtained, to determine tax basis and calculate gains or losses. The gain or loss from the sale or exchange of virtual currency is characterized as either a capital gain or loss or an ordinary gain or loss, depending on whether the virtual currency is held as a capital asset. Taxpayers are required to report their gains or losses from virtual currency on their tax returns, including Form 1040, U.S. Individual Income Tax Return, and Form 8949, Sales and Other Dispositions of Capital Assets, for capital gains or losses. Figure 3 shows one example of how using virtual currencies could result in a capital gain or loss.

Data on Tax Compliance for Virtual Currencies Are Limited

Tax and Information Returns Do Not Specifically Capture Information on Virtual Currency Income and Transactions

IRS has limited data on tax compliance for virtual currency use, partly because the forms taxpayers use to report their taxable income do not require them to identify whether the source of their income is from virtual currency use. Likewise, information returns that third parties, such as employers, financial institutions, or other entities, file to report taxpayer income or transactions do not include space for, or direction to, indicate if the income or transactions reported involved a virtual currency. In 2016, the Treasury Inspector General for Tax Administration (TIGTA) found that IRS had not developed a methodology for gathering data on virtual currency use in taxable transactions that would help to analyze the risk of noncompliance and to estimate its significance. TIGTA recommended that IRS revise third-party information returns to identify the amounts of virtual currency used in taxable transactions.
IRS agreed with the recommendation, but stated that it faced other higher priority funding needs and did not consider modifying information reporting forms to be a priority at the time. As of February 5, 2020, IRS has not implemented any changes to these information returns to include information about virtual currency use. However, IRS added a question about virtual currency to Schedule 1, Additional Income and Adjustments to Income, of Form 1040 for tax year 2019. Individual taxpayers use Schedule 1 to report additional income, such as capital gains, unemployment compensation, prize or award money, and gambling winnings. IRS added a question asking if taxpayers received, sold, sent, exchanged, or otherwise acquired any financial interest in any virtual currency during the tax year. Only taxpayers who are otherwise required to file Schedule 1 or who would answer “yes” to the question need to file this schedule. According to IRS officials responsible for examining tax returns, IRS’s focus is on ensuring taxpayers are reporting all of their taxable income, and it is not necessary to distinguish between virtual currency transactions and other property transactions being reported.

IRS Has Data on a Small Number of Taxpayers

Because IRS forms have not required taxpayers to explicitly identify income from virtual currency, IRS uses data from other sources to inform compliance decisions and research. These sources include:

Searches of tax return databases. For tax years 2013 to 2015, IRS searched electronically filed Forms 8949 to identify how often taxpayers included language in the property description to indicate the transaction likely involved bitcoin, the most widely traded virtual currency at the time. For the 3 years, IRS identified fewer than 900 taxpayers who reported virtual currency activity each year. IRS officials said that due to the time and resources required to generate these data, IRS did not generate these filing statistics for tax years 2016 or later.
By comparing these data to the size of the bitcoin market, IRS concluded that many taxpayers were likely not reporting income from virtual currency use.

Third-party information reports. To address tax noncompliance risks for virtual currencies, in December 2016, IRS served a John Doe summons on Coinbase, a U.S.-based cryptocurrency exchange. After IRS later narrowed the scope of the summons, it requested identifying and transactional data for all Coinbase users with a U.S. address, U.S. telephone number, U.S. email domain, or U.S. bank account that transacted with Coinbase between January 1, 2013, and December 31, 2015, and that had the equivalent of $20,000 in any one transaction type (a buy, sell, send, or receive) in any year during that period. According to an announcement posted on Coinbase’s website, on February 23, 2018, Coinbase notified approximately 13,000 customers that it expected to deliver information about their accounts to IRS within 21 days. In addition, IRS officials stated that IRS had received information returns from a small number of virtual currency exchanges for tax year 2017.

Third-party reports of potential fraud. IRS also has access to information on potential fraud reported to IRS and FinCEN by third parties. Financial institutions and money services businesses, which could include virtual currency exchanges, are to file a Suspicious Activity Report (SAR) if they observe or identify suspicious financial activity. SAR reporting can help IRS in identifying potential income underreporting, money laundering, and other potential tax-related violations and crimes. IRS may also receive information about tax noncompliance involving virtual currencies from whistleblowers and other referral programs.

Voluntary disclosures by taxpayers. In March 2019, IRS updated Form 14457, Voluntary Disclosure Practice Preclearance Request and Application, to include a space specifically for taxpayers to disclose that they have unreported virtual currency income.
IRS’s Criminal Investigation division (CI) reviews the forms IRS receives to ensure they meet criteria of eligibility and timeliness, and that the disclosure does not apply to illegal sources of income. CI sends forms that meet the criteria to two of IRS’s civil operating divisions—Large Business & International (LB&I) and Small Business/Self-Employed (SB/SE)—for review. According to IRS officials, the addition of virtual currency to the form was made to assist IRS employees in routing the forms to the correct subject matter experts in the civil operating divisions.

IRS Included Virtual Currencies in Research Projects

According to officials with IRS’s Research, Applied Analytics, and Statistics (RAAS) division, RAAS had begun some virtual currency research projects to better understand virtual currency tax compliance. One project, which RAAS completed, was to develop compliance profiles for taxpayers that LB&I had identified through its compliance efforts as having virtual currency activity. RAAS officials also said that they are enhancing their use of a range of third-party information reporting, including reporting of virtual currency activity, to improve IRS’s ability to assess compliance risks. These efforts focus on use of data from multiple sources to better understand evolving risks and improve estimates of compliance risk. These projects support LB&I, SB/SE, CI, and IRS’s broader research, analysis, and statistical reporting needs. Virtual currency has not been included in past National Research Programs (NRP)—IRS’s detailed study of voluntary tax compliance used as the basis for tax gap estimates. The most recent NRP study of individual tax returns covered tax years 2011-2013, before virtual currencies became more widely used. RAAS officials said the time frame for the next NRP study of individual tax returns has not yet been determined, but virtual currency may be included in future NRP projects.
IRS Has Taken Some Steps to Address Virtual Currency Compliance Risks and Has Shared Information across Multiple Agencies

IRS Has Trained Staff on Virtual Currency and Begun Civil Enforcement Activities

In December 2013, IRS established the Virtual Currency Issue Team (VCIT) to study virtual currencies and related compliance issues. According to IRS officials, the VCIT aimed to learn about virtual currencies, educate examiners about them, and develop examination techniques to identify and address virtual currency tax compliance risks. In 2015, the VCIT provided two training lessons for examiners on the terminology, technology, and audit issues related to virtual currencies. The VCIT is made up of about 30 individuals and continues to meet periodically to discuss virtual currency issues. In July 2018, IRS announced the launch of a virtual currency compliance campaign within LB&I to address noncompliance related to individual taxpayers’ use of virtual currency through multiple education and enforcement actions, including outreach and examinations. The goals of the compliance campaign include identifying causes of noncompliance using feedback from examination results, using information to identify additional enforcement approaches to increase compliance and decrease taxpayer burden, and improving examiner knowledge and skills as related to virtual currency transactions. According to IRS officials, the compliance campaign was initiated, in part, to analyze large amounts of data received from third-party sources. As part of the campaign, IRS developed and delivered several online and in-person training classes on blockchain technology and virtual currencies to its examiners and other staff. The trainings included details on how to identify and understand blockchain transactions and provide examiners with information on how to seek additional information from taxpayers about possible virtual currency use.
According to LB&I officials, as examiners provide feedback on what new issues they are seeing in cases involving virtual currency, they will schedule follow-up training sessions to address these new issues. LB&I has also reached out to a number of external stakeholder groups to gather information and better understand the tax concerns within the virtual currency community. For example, LB&I and the IRS Office of Chief Counsel have spoken to tax practitioner groups, state tax authorities, IRS Nationwide Tax Forum participants, and tax preparation software companies. According to IRS officials, the discussions they had with tax preparation software companies led to some adding questions to their programs asking taxpayers to enter virtual currency income when preparing their tax returns. The compliance campaign also aims to assist in developing a comprehensive IRS virtual currency strategy. In addition to leading the compliance campaign, LB&I is also leading a working group focused on cryptocurrency that includes members from across IRS, including LB&I, SB/SE, CI, and the Office of Chief Counsel. This working group reports to the IRS Enforcement Committee, which includes the Deputy Commissioner for Services and Enforcement and the commissioners for each of the operating divisions and CI. CI has been assisting in analyzing data received from third-party sources to look for potential investigative leads. According to CI officials, CI first reviews the data to identify any taxpayers who are already targets of CI investigations so that LB&I does not use the information in its civil enforcement efforts. The officials also said that they were reviewing information from large virtual currency users to identify any ties to criminal activity. 
However, according to IRS officials, since some of the data IRS has received predate a major uptick in virtual currency activity in 2017, those older data are less valuable than more recent data would be, other than for understanding the history of an individual’s virtual currency usage. IRS has also begun civil enforcement activities to address virtual currency noncompliance as part of the compliance campaign. In April 2019, LB&I was forwarding cases identified as likely involving virtual currency for examination classification, the process IRS uses to determine which returns to select to examine. Due to the time needed to complete examinations and to allow taxpayers time to exercise their rights, IRS officials said they do not yet have outcome data from these efforts. In July 2019, IRS began sending out more than 10,000 letters to taxpayers with virtual currency transactions. These letters stated that IRS is aware that the taxpayer may have a virtual currency account. They instructed the taxpayer to ensure that virtual currency income, gains, and losses have been reported appropriately and to file or amend returns as necessary. The letters also provided taxpayers with information on where they can find resources to help them understand their reporting obligations.

IRS Shares Information across Multiple Agencies, Focusing on Criminal Enforcement Efforts That Can Involve Virtual Currencies

According to IRS officials, CI works with a number of federal partners, including FinCEN and the Federal Bureau of Investigation (FBI), among others, in the routine course of its work, which may involve virtual currency issues. According to CI officials, virtual currency does not constitute a new program area that would require a new specific set of policies and procedures. Instead, traditional crimes that CI might investigate may be intertwined with virtual currency use.
CI participates in virtual currency issue information sharing efforts through a number of groups. For example, CI is a monthly participant in the FBI’s National Cyber Investigative Joint Task Force, which brings agencies together to share intelligence and work large-scale cases jointly. CI also has agents on site at the National Cyber-Forensics and Training Alliance, a public-private partnership, and at the European Union Agency for Law Enforcement Cooperation. Both entities work on a variety of issues, including virtual currency issues. CI also participates in some multinational information sharing groups to address virtual currency issues as part of its broader criminal enforcement goals. For example, CI participates in the Joint Chiefs of Global Tax Enforcement (J5), a group of criminal intelligence and tax officials from Australia, Canada, the Netherlands, the United Kingdom, and the United States that launched in mid-2018 to focus on shared cross-national tax risks, including cybercrimes and virtual currency. Among the goals of the J5 are to lead the international community in developing a strategic understanding of offshore tax crimes and cybercrimes, and raise international awareness that the J5 are working together to address international and transnational tax crimes. Within the Department of the Treasury (Treasury), IRS works with Treasury’s Office of Tax Policy when developing any guidance or regulation, including for virtual currency. IRS also works with FinCEN with regard to IRS’s delegated authority to administer parts of the Bank Secrecy Act, including Report of Foreign Bank and Financial Accounts (FBAR) filings. For example, FinCEN provides training materials to SB/SE examination staff who may come across virtual currency issues in the performance of a Bank Secrecy Act examination. IRS and FinCEN officials also periodically discuss how to apply the Bank Secrecy Act and its implementing regulations to virtual currency transactions. 
Given IRS’s unique role in administering the federal tax system, it generally does not need to coordinate with other agencies outside of Treasury in developing or issuing virtual currency guidance or taking civil enforcement actions. According to IRS officials, the work of the virtual currency compliance campaign does not involve any other federal agencies. IRS’s Virtual Currency Guidance Meets Some Taxpayer Needs, but IRS Did Not Address Applicability of Frequently Asked Questions IRS First Issued Virtual Currency Guidance in 2014 and Solicited Public Input to Identify Additional Guidance Needs IRS first issued virtual currency guidance in 2014, in response to our recommendation. In 2013, we found that IRS had not issued guidance specific to virtual currencies and that taxpayers may be unaware that income from transactions using virtual currencies could be taxable. We recommended that IRS provide taxpayers with information on the basic tax reporting requirements for transactions using virtual currencies. In response to this recommendation, IRS issued Notice 2014-21 in March 2014 and published it in the Internal Revenue Bulletin (IRB) in the form of answers to frequently asked questions (FAQs). IRS solicited public input on Notice 2014-21 through several means. Within the notice, IRS requested comments from the public regarding other aspects of virtual currency transactions that should be addressed in future guidance by providing a physical and email address to which comments could be submitted. IRS reviewed more than 200 public comments it received to identify topics that were in need of further guidance. Our analysis of the public comments found that the most common topics concerned tax forms and reporting (64 comments), realization of income (45 comments), cost basis (33 comments), and general tax liability (29 comments). Other topics included the tax implications of hard forks and airdrops, mining, and foreign reporting. 
Virtual currency stakeholders we spoke with, such as tax practitioners, executives at virtual currency exchanges, advocacy groups, and industry representatives, also identified these topics as in need of further guidance. Additionally, LB&I officials said they held several sessions to gather information from external stakeholders, such as tax practitioner groups and state tax authorities, to develop a better understanding of what was happening in taxpayer communities. IRS’s 2019 Virtual Currency Guidance Answers Some Taxpayer Concerns, but Presents Additional Challenges for Taxpayers In October 2019, IRS issued two forms of additional virtual currency guidance, which answered some questions previously raised by the public comments and virtual currency stakeholders. According to IRS, these guidance documents were intended to supplement and expand upon Notice 2014-21. Revenue Ruling 2019-24 addresses the tax treatment of hard forks and airdrops following hard forks. Specifically, the guidance discusses whether taxpayers have gross income as a result of (1) a hard fork, if they do not receive units of a new virtual currency; or (2) an airdrop of a new virtual currency following a hard fork if they receive units of new virtual currency. Additional FAQs provide further examples of how tax principles apply to virtual currency held as a capital asset. Topics addressed include what tax forms to use when reporting ordinary income and capital gains or losses from virtual currency; how to determine fair market value of virtual currencies; when virtual currency use results in taxable income; how to determine cost basis in several scenarios; and when a taxpayer may use the First-In-First-Out accounting method, known as FIFO, to calculate their gains. However, some virtual currency and tax stakeholders with whom we spoke expressed concern that the 2019 revenue ruling and FAQs leave many questions unanswered and provide confusing responses to others. 
Their concerns include the following: Clarity: According to some stakeholders, Revenue Ruling 2019-24 is unclear, mostly due to confusion surrounding IRS’s usage of technical virtual currency terminology and the situations meant to illustrate IRS’s application of the law to hard forks and airdrops. Several tax and virtual currency stakeholders we spoke with said these examples do not accurately explain how virtual currency technology works and therefore may not be helpful to taxpayers looking for guidance on the tax implications of income received as a result of hard forks or airdrops. In public remarks on the new guidance in October 2019, IRS’s Chief Counsel stated that terms are not used in a uniform way in the virtual currency industry, but IRS is interested in receiving comments on how virtual currency technology should be described. Additional topics in need of guidance: The revenue ruling and additional FAQs do not address several topics raised in the public comments and by stakeholders. For example, the guidance does not clarify foreign asset reporting requirements for virtual currency. The statutory provisions commonly known as the Foreign Account Tax Compliance Act (FATCA) require taxpayers and foreign financial institutions to report on certain financial assets held outside the United States. Regulations implementing the Bank Secrecy Act separately require taxpayers to report certain foreign financial accounts to FinCEN on the FBAR form. Some practitioners told us that it is unclear whether these requirements apply to virtual currency wallets and exchanges, as we discuss later in this report. Other topics not addressed in the 2019 guidance include mining, like-kind exchanges, and retirement accounts. According to an official from the IRS Office of Chief Counsel, IRS’s focus when developing the 2019 guidance was to assist individual taxpayers. 
Therefore, the topics addressed by the revenue ruling and FAQs were limited to the most common issues that would be applicable to most individual taxpayers. The official told us that if IRS were to develop additional virtual currency guidance in the future, it might focus on a different audience, such as taxpayers involved in virtual currency businesses or exchanges that could be subject to third-party information reporting. Another official stated that issuing guidance on certain topics, including like-kind exchanges, would have taken additional time, and these topics were therefore left unaddressed. IRS Did Not State That the 2019 FAQs Are Not Legally Binding IRS issues thousands of publications in a variety of different forms to help taxpayers and their advisors understand the law; however, IRS has stated that only guidance published in the IRB contains IRS’s authoritative interpretation of the law. Unlike the virtual currency FAQs IRS issued in 2014 in the form of a notice, the 2019 FAQs were not published in the IRB. Therefore, the 2019 FAQs are not binding on IRS, are subject to change, and cannot be relied upon by taxpayers as authoritative or as precedent for their individual facts and circumstances. Tax practitioners have noted that, for some FAQs not published in the IRB, IRS has included a disclaimer stating that the FAQs do not constitute legal authority and may not be relied upon. The new virtual currency FAQs do not include such a disclaimer. According to IRS officials, they did not include a disclaimer along with the new FAQs because the FAQs do not contain any substantial new interpretation of the law. The officials said they did not believe that a disclaimer about the limitations of the FAQs was necessary or that one would be helpful to taxpayers. However, the FAQs provide new information, such as a definition of the term “cryptocurrency” and an explanation of how taxpayers can track cost basis for virtual currency. 
As we have previously reported, clarity about the authoritativeness of certain IRS publications could be improved by noting any limitations, especially when FAQs provide information to help taxpayers comply with tax law. Additional explanatory language would help taxpayers understand what type of IRS information is considered authoritative and reliable as precedent for a taxpayer’s individual facts and circumstances. The first article in IRS’s Taxpayer Bill of Rights—“The Right to Be Informed”—states that taxpayers have the right to know what they need to do to comply with tax laws. The article further states that taxpayers are entitled to clear explanations of the laws and IRS procedures in all forms, instructions, publications, notices, and correspondence. As we have previously reported, just as taxpayers have the right to clear explanations in IRS instructions and publications, taxpayers should be alerted to any limitations that could make some IRS information less authoritative than others. Failing to note any limitations associated with particular guidance could lead to misinterpretation of nonauthoritative information from IRS. If taxpayers make decisions based on guidance that is nonauthoritative, including FAQs, those taxpayers’ confidence in IRS and the tax system could be undermined if the content is later updated and IRS challenges taxpayers’ positions. As we have noted in prior reports, taxpayers’ perception that IRS is fairly and uniformly administering the tax system helps further overall voluntary compliance and lowers IRS’s administrative costs. Third-Party Information Reporting on Virtual Currency Is Limited, and Foreign Account Reporting Requirements Are Unclear Limited Third-Party Information Reporting Makes It Difficult for IRS to Address Compliance Risks IRS does not receive information returns on some potentially taxable transactions involving virtual currency, which limits its ability to detect noncompliance. 
Some virtual currency exchanges send IRS and their customers information returns describing the customers’ trading activity, but others do not. Financial institutions and other third parties are to report interest payments, property sales, and other transactions to both taxpayers and IRS using forms known as information returns. Form 1099-K, Payment Card and Third Party Network Transactions. Third parties that contract with a substantial number of unrelated merchants to settle payments between the merchants and their customers are required to issue a Form 1099-K for each merchant that meets the threshold of having more than 200 transactions totaling more than $20,000 in a year. Form 1099-B, Proceeds from Broker and Barter Exchange Transactions. Brokers use Form 1099-B to report transactions such as sales or redemptions of securities, regulated futures contracts, and commodities. For certain types of property, brokers must also report cost basis information on Form 1099-B. Form 1099-MISC, Miscellaneous Income. Certain payments made in the course of a trade or business—including rents, prizes, and various other types of income—must be reported by the payer on Form 1099-MISC. For most types of income subject to reporting on Form 1099-MISC, payers must file the form only if they made payments totaling at least $600. According to our review of websites for nine major U.S.-based virtual currency exchanges, as of November 2019, two exchanges have policies posted online stating that they report information for some of their customers’ virtual currency transactions to IRS on Form 1099-K. One exchange states that it reports customers’ transactions on Form 1099-B, a more detailed information return that provides a breakdown of individual virtual currency transactions. Another exchange’s website states that it provides Forms 1099, but does not identify the form more specifically. 
Three exchanges’ websites have policies stating that the exchanges do not report customers’ transactions on tax forms. The remaining two exchanges do not state on their websites whether or not they file information returns or provide customers with tax forms. When transactions handled by third parties, such as virtual currency exchanges, go unreported on information returns, it is difficult for IRS to identify and address compliance risks. According to IRS officials and tax practitioners we interviewed, it is difficult for IRS to find out when taxable transactions involving virtual currency are occurring. As discussed earlier in this report, IRS’s virtual currency compliance campaign has identified more than 10,000 taxpayers who may not have properly reported virtual currency transactions on tax returns. However, the campaign likely has not identified all taxpayers with underreported virtual currency income. In addition, according to IRS officials, examining tax returns is more resource intensive than the automated processes IRS uses to match tax returns against information returns. For taxpayers, limited information reporting by third parties can make it difficult to complete tax returns. Tax practitioners told us that recordkeeping is a challenge for taxpayers who buy and sell virtual currencies. To report virtual currency income accurately under IRS guidance, taxpayers need to report information about each transaction, including cost basis and fair market value at the time virtual currency is disposed of, such as by selling it for cash or another virtual currency on an exchange. Some taxpayers may not keep their own records of virtual currency transactions, and as a result may lack easy access to the information that would be provided in third-party information returns. When taxpayers do keep these records, they may not know how to report virtual currency transactions on tax forms. 
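The per-transaction recordkeeping described above can be made concrete with a short sketch. The code below is illustrative only, not IRS's method; the function name and the simple lot-based ledger format are our own. It applies first-in, first-out (FIFO) lot matching, one of the accounting approaches discussed in IRS's 2019 FAQs, to a taxpayer's own purchase records to compute the gain or loss on each disposition.

```python
from collections import deque

def fifo_gains(buys, sells):
    """Compute per-sale capital gain or loss using FIFO lot matching.

    buys:  list of (quantity, price_per_unit) purchase lots, oldest first
    sells: list of (quantity, price_per_unit) dispositions, oldest first
    Returns a list of gains (proceeds minus cost basis), one per sale.
    Illustrative sketch: assumes sales never exceed total purchases, and
    ignores dates, fees, and wallet-to-wallet transfers.
    """
    lots = deque(buys)  # open lots, oldest at the left
    gains = []
    for qty, sale_price in sells:
        basis = 0.0
        remaining = qty
        while remaining > 0:
            lot_qty, lot_price = lots[0]
            used = min(remaining, lot_qty)
            basis += used * lot_price
            remaining -= used
            if used == lot_qty:
                lots.popleft()  # oldest lot fully consumed
            else:
                lots[0] = (lot_qty - used, lot_price)
        gains.append(qty * sale_price - basis)
    return gains

# Example: buy 1 unit at $5,000, then 1 unit at $8,000; sell 1.5 units
# at $10,000. FIFO consumes the $5,000 lot first, then half of the
# $8,000 lot, so the basis is $9,000 against $15,000 of proceeds.
print(fifo_gains([(1.0, 5000.0), (1.0, 8000.0)], [(1.5, 10000.0)]))  # → [6000.0]
```

Even this minimal version shows why the burden is significant: each gain depends on the fair market value at disposition and on a complete, ordered purchase history, which a taxpayer may lack if no third party reports it.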
As discussed earlier in this report, 64 of the public comments IRS received on Notice 2014-21 were about forms and reporting. For example, some of these 64 comments expressed uncertainty about how to calculate the fair market value of virtual currency at the time of sale; others requested assistance in determining which tax forms to use to report income from virtual currency transactions. Some virtual currency transactions are not subject to third-party reporting requirements. For example, virtual currency exchanges operating outside the United States are not required to file information returns such as Forms 1099-K or 1099-B unless they are owned by a U.S. payor (including a controlled foreign corporation) or the customer or transaction has certain connections to the United States. Some transactions, such as transferring virtual currency directly to a merchant in exchange for goods, generally create no obligation to file any information returns. Other virtual currency transactions, such as sales of virtual currency for cash through virtual currency exchanges, may be subject to third-party reporting requirements. However, those requirements are not entirely clear, and people have interpreted them differently. Tax practitioners we spoke with generally stated that it is not clear whether current regulations require virtual currency exchanges to report customers’ trading activity on Forms 1099-K or 1099-B. According to IRS officials, virtual currency exchanges may be subject to the 1099-K reporting requirement if they fall into the legal category of “third party settlement organizations.” Exchanges are subject to the 1099-B requirement only if they are brokers or barter exchanges. IRS does not have an official position on whether virtual currency exchanges are required to report customers’ trading activity on Form 1099-B. There may also be ambiguity regarding when, if at all, reporting on virtual currency sales is required on Form 1099-MISC. 
Furthermore, even if exchanges are subject to the 1099-K, 1099-B, or 1099-MISC reporting requirements, these requirements do not cover all taxable transactions. Third-party settlement organizations are required to file Form 1099-K only for customers who make more than 200 transactions in a year that total more than $20,000. Taxable transactions below that threshold may not be reported. Separately, some transactions carried out by brokers do not need to be reported on Form 1099-B unless they involve cash. For example, taxpayers must report trades between different virtual currencies on tax returns, but brokers may not be required to report such trades on Form 1099-B. According to IRS, a virtual currency exchange would be required to file Form 1099-MISC if it has sufficient information, such as the recipient’s basis in the virtual currency, to determine whether a payment made to a recipient in exchange for virtual currency gives rise to income for that recipient. In addition, Forms 1099-K, 1099-B, and 1099-MISC do not always contain all the information that taxpayers need to file accurate tax returns or that IRS needs to monitor compliance. Form 1099-K provides information on the number and gross amount of payments made to the recipient, but does not provide information about individual transactions. Some tax practitioners we interviewed stated that taxpayers who receive Form 1099-K for virtual currency transactions may find the form unhelpful or confusing. Because the form does not identify specific transactions, it may be difficult to match the aggregate amounts reported on the form with taxpayers’ own records of virtual currency transactions. Form 1099-B does provide information about individual transactions, but does not always include or require cost basis information. According to IRS, a Form 1099-MISC that reports a payee’s gain does not provide information about that payee’s gross proceeds and basis. 
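The 1099-K de minimis threshold described above is a conjunctive test: a customer's yearly activity must exceed both the transaction count and the dollar total before reporting is triggered. A minimal sketch of that check (our own illustration for exposition, not a legal rule):

```python
def meets_1099k_threshold(payments):
    """Return True if yearly activity triggers Form 1099-K reporting
    under the threshold described in this report: more than 200
    transactions AND gross payments totaling more than $20,000.

    payments: list of gross payment amounts (one per transaction) for
    the year. Illustrative sketch only, not legal guidance.
    """
    return len(payments) > 200 and sum(payments) > 20000

# 250 small payments totaling $12,500 fail the dollar prong, so no
# Form 1099-K would be required even though the count prong is met.
print(meets_1099k_threshold([50.0] * 250))   # → False
print(meets_1099k_threshold([150.0] * 250))  # 250 txns, $37,500 → True
```

Because both prongs must be exceeded, many customers with substantial but fragmented taxable activity, such as high dollar totals spread over few transactions, can fall outside the reporting requirement entirely.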
Some stakeholders we interviewed mentioned challenges that could make it difficult to implement information reporting at the individual transaction level. For example, it could be difficult to distinguish between taxable dispositions of virtual currency—such as the sale of virtual currency for U.S. dollars—and nontaxable events such as the transfer of virtual currency from a taxpayer’s account on an exchange to a personal wallet controlled directly by the same taxpayer. These stakeholders also told us that if exchanges were required to report cost basis information, additional challenges could include tracking the cost basis of virtual currency transferred between exchanges. However, as we have previously reported, cost basis reporting can be particularly valuable for tax compliance. IRS officials told us that they are studying the issue of third-party information reporting, and it is included in IRS’s priority guidance plan as of October 2019. We have reported that, in general, the extent to which taxpayers accurately report their income is closely aligned with the amount of income that third parties report to them and to IRS. For example, according to IRS data for tax years 2011-2013, taxpayers misreported more than half of their income for types of income subject to little or no third-party information reporting (see figure 4). Taxpayers misreported a much lower percentage of their income for types of income subject to at least some information reporting. Information returns that include details about individual transactions can assist taxpayers by providing information about how to report virtual currency income correctly. For example, in addition to providing transaction details, Form 1099-B instructs recipients where to report transactions on Form 8949 or Schedule D, which are forms used to report capital gains. By contrast, Form 1099-K does not include similar instructions. 
One of IRS’s strategic goals is to protect the integrity of the tax system by encouraging compliance through administering and enforcing the tax code. This goal includes identifying and planning for compliance risks proactively, including risks associated with the increasing complexity of the tax base. Further, internal control standards state that management should use quality information to achieve the entity’s objectives. Using quality information requires identifying information requirements and obtaining relevant data from reliable sources. As discussed above, IRS does not have quality information on many potentially taxable transactions involving virtual currency, in part because information reporting requirements for virtual currency exchanges are unclear, and in part because some information reporting does not include detailed information about specific transactions. As a result, some taxpayers may not be reporting virtual currency transactions properly on their tax returns or paying the full amount of tax owed on those transactions, contributing to the tax gap. IRS and FinCEN Have Not Clarified Whether Foreign Account Reporting Requirements Apply to Virtual Currency As previously discussed, two overlapping reporting requirements apply to taxpayers who have foreign financial assets. These two requirements are the Report of Foreign Bank and Financial Accounts (FBAR) filings required under the Bank Secrecy Act and the separate reports required by the statutory provisions commonly known as the Foreign Account Tax Compliance Act (FATCA). The federal agencies that administer these requirements have not clarified how taxpayers who hold virtual currency should interpret them. FATCA Requirements Under FATCA, taxpayers have an obligation to report certain foreign financial accounts and other assets on IRS Form 8938, Statement of Specified Foreign Financial Assets, if the value of those assets exceeds a certain amount. 
FATCA was enacted in 2010 to reduce offshore tax evasion, and it also requires foreign financial institutions to report detailed information to IRS about their U.S. customers. Tax practitioners we interviewed told us that there is no generally accepted view about whether FATCA filing requirements apply to virtual currency holdings, and IRS has not publicly stated a position on how, if at all, FATCA requirements apply to virtual currency holdings for either taxpayers or institutions. Some practitioners stated that in the absence of guidance or information from IRS specifically addressing virtual currency and FATCA, some of their clients report foreign virtual currency accounts because the potential penalties for failing to report, if deemed to be required, are high. Additionally, several public comments on IRS Notice 2014-21 requested clarification from IRS about whether virtual currency holdings must be reported under FATCA. The FATCA filing requirements can be difficult for individual taxpayers to interpret, in part because FATCA was enacted before the use of virtual currency became more widespread, and it was not designed to cover nontraditional assets such as virtual currencies. For example, under FATCA, taxpayers must report accounts at foreign financial institutions. A taxpayer who holds virtual currency with an exchange based outside the United States may not know whether the exchange counts as a foreign financial institution under FATCA because this determination involves applying legal criteria to specific facts about how the exchange operates. Taxpayers must also report foreign nonaccount assets held for investment (as opposed to held for use in a trade or business), such as foreign stock and securities, foreign financial instruments, contracts with non-U.S. persons, and interests in foreign entities. 
IRS officials told us that in some situations, virtual currencies could be foreign nonaccount assets, depending on specific facts about how an individual taxpayer holds the virtual currency. However, a taxpayer holding virtual currency may not know whether the virtual currency is considered a specified foreign financial asset because this determination involves applying legal criteria to specific facts such as whether the virtual currency has a foreign issuer, which the taxpayer may not have sufficient information to determine. According to IRS officials, they have not issued guidance about virtual currency and FATCA because the instructions for Form 8938 clearly explain how taxpayers are to interpret FATCA requirements. However, those instructions do not mention virtual currency and do not provide information needed to determine whether virtual currency holdings must be reported. For example, the instructions state that a financial account is any depository or custodial account maintained by a foreign financial institution, but do not explain under what circumstances, if any, an account that holds virtual currency could be considered a depository or custodial account. IRS’s Taxpayer Bill of Rights states that taxpayers are entitled to clear explanations of the laws and IRS procedures in all tax forms, instructions, publications, notices, and correspondence. Furthermore, one of IRS’s strategic goals is to empower taxpayers by making it easier for them to understand and meet their filing, reporting, and payment obligations. Without information about how to interpret and apply FATCA requirements to situations involving virtual currency, taxpayers will not know whether they are required to report virtual currency held outside the United States. As a result, they may be underreporting, depriving IRS of data needed to address offshore tax evasion, or overreporting by filing forms that are not required. 
As we have previously reported, such overreporting creates unnecessary burdens, including financial costs, for taxpayers. FBAR Requirement Separate from the requirement to file Form 8938 under FATCA, regulations implementing the Bank Secrecy Act require reporting of financial accounts maintained with financial institutions located outside the United States on the FBAR form. FinCEN’s FBAR regulations predate the widespread use of virtual currency and do not specifically mention virtual currency. Consequently, tax practitioners have raised questions about whether taxpayers are required to include virtual currency holdings in FBAR filings. In correspondence and interviews, FinCEN officials have stated that, based on their understanding of the regulations, virtual currency does not need to be reported on the FBAR. For example, FinCEN officials told us that FinCEN provides a standard response when members of the public ask FinCEN’s Resource Center about reporting virtual currency on the FBAR. The response states, in part, “as of right now, reporting [virtual currency exchange accounts] on the FBAR is not required.” Likewise, in March 2019, FinCEN responded in writing to a question from the American Institute of Certified Public Accountants by stating that the FBAR regulations do not define virtual currency held in an offshore account as a type of reportable account. While FinCEN has provided responses to direct questions, it has not made information about whether foreign virtual currency accounts are subject to the FBAR requirement readily available, such as by posting this information on its website. FinCEN officials stated that FinCEN and IRS had issued a statement on IRS’s website in 2014 informing the public that virtual currencies did not need to be reported on the FBAR. However, the officials noted that the statement was no longer available on the website, and they did not say when or why it may have been removed. 
Neither IRS’s FBAR Reference Guide nor FinCEN’s instructions for filing the FBAR mention virtual currencies. Internal control standards state that management should externally communicate the necessary quality information to achieve the entity’s objectives. As part of this standard, management should communicate information that allows external parties, including the general public, to assist the entity in achieving its objectives. In the absence of a readily available official statement from FinCEN that virtual currencies are not reportable on the FBAR, users of virtual currency may be filing reports that are not legally required. According to some tax practitioners we interviewed, some individuals may report foreign virtual currency accounts on the FBAR even if they believe it is unlikely that they are required to report, because of the high penalties for failing to file required FBARs. Such filings can create financial costs and unnecessary recordkeeping and other burdens for these individuals. Conclusions Virtual currencies can present challenges for enforcement of tax laws, both because they can be circulated without a central authority and because complying with current tax requirements can be confusing and burdensome. IRS has taken important steps to address these challenges, including issuing multiple sets of guidance to clarify how virtual currencies would be treated for tax purposes and carrying out a range of enforcement activities to address noncompliance. Although IRS’s 2019 virtual currency guidance addressed some issues left unresolved by its 2014 guidance, it did not address others, and it has also prompted new concerns among virtual currency stakeholders. Additionally, including information that the 2019 FAQs are not legally binding would enhance taxpayer understanding and could ultimately help enhance taxpayers’ confidence in IRS and the tax system. Currently, much trading activity in virtual currency goes unreported on information returns. 
In part, this lack of reporting may be because third parties are unclear about whether they are required to report. Limitations in the information that these returns provide about virtual currency transactions also constrain the utility of the reported information. In general, information reporting is associated with high levels of compliance. Additionally, the rules for foreign asset reporting—specifically, the FBARs required by the Bank Secrecy Act and the separate reports required by FATCA—do not clearly address virtual currency, and tax professionals have raised questions about the applicability of these requirements to virtual currency. Clarifying the FATCA requirements and making a statement about the FBAR requirements readily available to the public would help reduce uncertainty about these rules and may result in reduced burden for some taxpayers who may be filing reports that are not required. Recommendations for Executive Action We are making a total of four recommendations, including three to IRS and one to FinCEN. Specifically: The Commissioner of Internal Revenue should update the FAQs issued in 2019 to include a statement that the FAQs may serve as a source of general information but cannot be relied upon by taxpayers as authoritative since they are not binding on IRS. (Recommendation 1) The Commissioner of Internal Revenue should take steps to increase third-party reporting on taxable transactions involving virtual currency, which could include clarifying IRS’s interpretation of existing third-party reporting requirements under the Internal Revenue Code and Treasury Regulations, or pursuing statutory or regulatory changes. (Recommendation 2) The Commissioner of Internal Revenue should clarify the application of reporting requirements under FATCA to virtual currency. 
(Recommendation 3)

The Director of FinCEN, in coordination with IRS as appropriate, should make a statement about the application of foreign account reporting requirements under the Bank Secrecy Act to virtual currency readily available to the public. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to IRS, FinCEN, Treasury, SEC, and CFTC for review and comment. In its written comments, which are summarized below and reproduced in appendix II, IRS agreed with one and disagreed with two of the recommendations directed to it. In its written comments, which are summarized below and reproduced in appendix III, FinCEN agreed with the recommendation directed to it. IRS, Treasury, SEC, and CFTC provided technical comments, which we incorporated as appropriate.

IRS agreed with the recommendation to take steps to increase third-party reporting on taxable transactions involving virtual currency (recommendation 2). IRS stated that it is working with Treasury to develop guidance on third-party reporting under section 6045 of the Internal Revenue Code for certain taxable transactions involving virtual currency. Such guidance, if it aims to increase third-party reporting, would address the intent of the recommendation.

IRS disagreed with the recommendation to add a statement to the 2019 FAQs on virtual currency informing taxpayers that the FAQs provide general information but are not binding on IRS (recommendation 1). IRS stated that the FAQs are illustrative of how longstanding tax principles apply to property transactions. IRS also stated that it does not take positions contrary to public FAQs. We continue to believe that including such a statement would provide more transparency and help taxpayers understand the nature of the information provided in the FAQs. As we state earlier in this report, IRS has included disclaimer statements in other informal FAQs posted on its website.
IRS could include a similar statement in the virtual currency FAQs at minimal cost. Alternatively, if IRS intends to be bound by the positions it takes in the current version of the virtual currency FAQs, as the response to this recommendation suggests, it could publish the FAQs in the Internal Revenue Bulletin. Doing so would render a disclaimer statement unnecessary and would satisfy the intent of the recommendation.

IRS disagreed with the recommendation to clarify the application of FATCA reporting requirements to virtual currency (recommendation 3). IRS stated that U.S. exchanges and other U.S. businesses play a significant role in virtual currency transactions carried out by U.S. taxpayers, and therefore it is appropriate for IRS to focus on developing guidance for third-party reporting under section 6045, as discussed above. IRS also stated that guidance on FATCA may be appropriate in the future when the workings of foreign virtual currency exchanges become more transparent. We believe that, given the widespread uncertainty about the FATCA requirements among virtual currency stakeholders, it would benefit taxpayers for IRS to clarify these requirements to the extent possible with the information currently available. It may be appropriate to wait for future developments in the foreign virtual currency exchange industry before issuing detailed, thorough guidance on this issue. However, IRS could address the uncertainty about the FATCA requirements by clarifying in general terms how it believes they should be interpreted in situations involving virtual currency.

In its comments, FinCEN agreed with the recommendation to make a public statement about whether virtual currency must be reported on the FBAR (recommendation 4). FinCEN confirmed in its letter that as of January 2020, its regulations do not require virtual currency held in an offshore account to be reported on the FBAR.
Additionally, FinCEN stated that it will coordinate with IRS to determine the best approach to provide clarity to the public regarding the FBAR requirement.

We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of the Treasury, the Commissioner of Internal Revenue, the Director of the Financial Crimes Enforcement Network, the Chairman of the Securities and Exchange Commission, the Chairman of the Commodity Futures Trading Commission, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) describe what is known about virtual currency tax compliance; (2) describe the steps the Internal Revenue Service (IRS) has taken to address virtual currency tax compliance risks; (3) evaluate the extent to which IRS’s virtual currency guidance meets taxpayer needs; and (4) evaluate whether additional information reporting could assist IRS in ensuring compliance.

To describe what is known about virtual currency tax compliance and the steps IRS has taken to address virtual currency tax compliance risks, we reviewed IRS documentation on the agency’s virtual currency tax enforcement efforts, including information about the legal summons IRS issued to Coinbase and the Large Business and International (LB&I) division’s virtual currency compliance campaign.
We interviewed IRS officials in the Small Business/Self-Employed (SB/SE) and LB&I operating divisions, as well as the Research, Applied Analytics, and Statistics division about any data the agency had on virtual currency tax compliance, challenges in collecting such data, and plans for data analyses. We also reviewed IRS forms that taxpayers may use to report virtual currency use. We interviewed officials from the Financial Crimes Enforcement Network, Commodity Futures Trading Commission, and Securities and Exchange Commission about coordination efforts that have been made across agencies regulating virtual currencies. We also interviewed tax practitioners, tax attorneys, virtual currency industry advocates, and virtual currency exchange executives about virtual currency tax compliance issues. We took a snowball sampling approach to identify the outside stakeholders we interviewed, which involved asking stakeholders we interviewed for recommendations of others we should contact to gain additional insight into virtual currency tax compliance, and we assessed their qualifications and independence. In total, we interviewed five individual stakeholders in addition to representatives of 10 entities with expertise in tax issues related to virtual currency. Although results from these interviews are not generalizable, they provide insight into what is known about tax compliance and the steps IRS has taken to address virtual currency tax compliance risks.

To evaluate the extent to which IRS’s virtual currency guidance meets taxpayer needs, we identified and analyzed all of the guidance and statements IRS has published about tax compliance for virtual currencies. To identify these documents, we searched IRS’s website and interviewed IRS officials. According to IRS officials, Notice 2014-21, issued in March 2014, and Revenue Ruling 2019-24 and Frequently Asked Questions (FAQs), issued in October 2019, are the only IRS guidance specific to virtual currencies.
We also reviewed and analyzed all of the public comments IRS had received on Notice 2014-21 as of August 19, 2019, to determine the concerns raised about virtual currency tax compliance. IRS sent us 229 public comments. We identified 25 of the comments as not applicable because they were not related to Notice 2014-21, were duplicate comments, or were otherwise not relevant. Two reviewers coded the content of the 204 applicable public comments and grouped them into 13 different thematic categories. We developed these categories based on the topics or issues that commenters identified. We assigned each separate issue raised by a comment to an existing category unless it did not relate to any of the existing categories, in which case we created a new category. We also recorded the date the comment was submitted and the occupation of the commenter, if specified in the comment. To assess the reliability of these data, we reviewed relevant documentation and consulted knowledgeable IRS officials. Specifically, we requested information from IRS’s Office of Chief Counsel to identify the quality controls in place to help ensure all comments are processed. We determined that the data were sufficiently reliable for our purposes. The information we obtained from these comments may not be representative of the viewpoints of the entire U.S. public. In addition, we interviewed the stakeholders mentioned above before IRS released new guidance in October 2019 to identify any taxpayer concerns, any compliance challenges with virtual currency tax obligations, and the extent to which the guidance provided in IRS’s Notice 2014-21 was meeting taxpayer needs. We reached out to these same stakeholders in October 2019, after IRS issued a new set of FAQs and Revenue Ruling 2019-24, to determine how these new guidance documents addressed taxpayers’ concerns. 
Of the five individuals and 10 groups we initially interviewed, we received responses regarding the new IRS guidance from four individuals and six groups. The information we obtained from these practitioners and exchanges is not generalizable to all practitioners and exchanges because we took a snowball sampling approach, but the information provides insight into the extent to which IRS’s virtual currency guidance is meeting the needs of taxpayers. To evaluate whether additional information reporting could assist IRS in ensuring compliance, we reviewed IRS’s requirements for information reporting for virtual currency transactions, including the laws and regulations for foreign asset reporting. We interviewed IRS officials in the SB/SE and LB&I operating divisions about how IRS’s third-party and taxpayer information reporting processes and current forms assist in IRS’s work to detect noncompliance for virtual currencies. We reviewed the websites of a judgmental selection of nine virtual currency exchanges for policies or statements about tax reporting, including whether the exchanges file Forms 1099-B or 1099-K. For the website review, we selected virtual currency exchanges that were based in the United States and that were likely, because of their size or public profile within the virtual currency industry, to have established policies regarding information reporting. For each exchange, we identified and categorized any statements on the exchange’s website regarding tax or information reporting, such as a statement that the exchange does not provide any tax forms to customers or a statement that the exchange provides information on a specific form to customers and IRS. We also interviewed the stakeholders mentioned above to determine what information is being reported to IRS and whether additional information reporting would help IRS and taxpayers with ensuring tax compliance. 
We interviewed executives from two exchanges to determine what burden, if any, information reporting does or could impose on exchanges and virtual currency users. We attempted to contact four additional exchanges but did not receive a response. Because we used a snowball sampling approach, the information we obtained from these virtual currency industry participants is not generalizable to all virtual currency industry participants.

We conducted this performance audit from October 2018 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Internal Revenue Service

Appendix III: Comments from the Financial Crimes Enforcement Network

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact:

Staff Acknowledgments

In addition to the contact named above, Jeff Arkin (Assistant Director), Danielle Novak (Analyst-in-Charge), Theodore Alexander, Michael Bechetti, David Blanding, Jacqueline Chapin, Ed Nannenhorn, Bruna Oliveira, Kayla Robinson, and Andrew J. Stephens made key contributions to this report.
Why GAO Did This Study

Virtual currencies, such as bitcoin, have grown in popularity in recent years. Individuals and businesses use virtual currencies as investments and to pay for goods and services. GAO was asked to review IRS's efforts to ensure compliance with tax obligations for virtual currencies. This report examines (1) what is known about virtual currency tax compliance; (2) what IRS has done to address virtual currency tax compliance risks; (3) the extent to which IRS's virtual currency guidance meets taxpayer needs; and (4) whether additional information reporting on virtual currency income could assist IRS in ensuring compliance. GAO reviewed IRS forms and guidance and interviewed officials at IRS, FinCEN, and other federal agencies, as well as tax and virtual currency stakeholders.

What GAO Found

Taxpayers are required to report and pay taxes on income from virtual currency use, but the Internal Revenue Service (IRS) has limited data on tax compliance for virtual currencies. Tax forms, including the information returns filed by third parties such as financial institutions, generally do not require filers to indicate whether the income or transactions they report involved virtual currency. IRS also has taken some steps to address virtual currency compliance risks, including launching a virtual currency compliance campaign in 2018 and working with other agencies on criminal investigations. In July 2019, IRS began sending out more than 10,000 letters to taxpayers with virtual currency activity informing them about their potential tax obligations. IRS's virtual currency guidance, issued in 2014 and 2019, addresses some questions taxpayers and practitioners have raised. For example, it states that virtual currency is treated as property for tax purposes and that using virtual currency can produce taxable capital gains. However, part of the 2019 guidance is not authoritative because it was not published in the Internal Revenue Bulletin (IRB).
IRS has stated that only guidance published in the IRB is IRS's authoritative interpretation of the law. IRS did not make clear to taxpayers that this part of the guidance is not authoritative and is subject to change.

Information reporting by third parties, such as financial institutions, on virtual currency is limited, making it difficult for taxpayers to comply and for IRS to address tax compliance risks. Many virtual currency transactions likely go unreported to IRS on information returns, due in part to unclear requirements and reporting thresholds that limit the number of virtual currency users subject to third-party reporting. Taking steps to increase reporting could help IRS provide taxpayers useful information for completing tax returns and give IRS an additional tool to address noncompliance. Further, IRS and the Financial Crimes Enforcement Network (FinCEN) have not clearly and publicly explained when, if at all, requirements for reporting financial assets held in foreign countries apply to virtual currencies. Clarifying and providing publicly available information about those requirements could improve the data available for tax enforcement and make it less likely that taxpayers will file reports that are not legally required.

What GAO Recommends

GAO is recommending that IRS clarify that part of the 2019 guidance is not authoritative and take steps to increase information reporting, and that FinCEN and IRS address how foreign asset reporting laws apply to virtual currency. IRS agreed with the recommendation on information reporting and disagreed with the other two, stating that a disclaimer statement is unnecessary and that it is premature to address virtual currency foreign reporting. GAO believes a disclaimer would increase transparency and that IRS can clarify foreign reporting without waiting for future developments in the industry. FinCEN agreed with GAO's recommendation.
Background

Airport Security Roles and Responsibilities

As the federal agency with primary responsibility for civil aviation security within the United States, TSA promulgates security requirements, primarily through regulations but also through security directives and other mechanisms, and conducts inspections to ensure that airport operators, air carriers, and other regulated entities are in compliance with these requirements. Additionally, TSA oversees security operations at airports through different types of testing and vulnerability assessments to analyze and improve security, among other activities. As of December 2019, there were approximately 430 commercial airports nationwide. Airport operators, air carriers, and other regulated entities are responsible for implementing security requirements, primarily in accordance with their TSA-approved security programs. These programs generally cover day-to-day operations, including measures that contribute to mitigating insider threats. For example:

For most commercial airports, airport operators must ensure there is an adequate law enforcement presence to support operations and prevent unauthorized access to security-restricted areas through, among other measures, employee vetting, the use of personnel identification media, and implementing access control systems.

For most air carrier operations, the air carriers must implement measures to ensure the security of aircraft and facilities, such as preventing unauthorized access to aircraft; searching aircraft prior to boarding passengers; randomly searching service personnel, such as caterers, and their property prior to boarding the aircraft; and training employees in security procedures.

In accordance with an airport operator’s security program, an air carrier may enter into an agreement with the airport operator to assume exclusive responsibility for specified security measures for all or portions of an airport’s security-restricted areas, including access points.
This is known as an exclusive area agreement. The security programs that airport operators and air carriers implement, in accordance with federal regulations, are generally consistent across similarly situated airports and air carriers. For example, all airports operating under complete security programs generally implement TSA-approved security programs that address the same requirements. However, the details of these programs and their implementation can differ widely based on the individual characteristics of the airport. For example, methods that airport operators use to control access into security-restricted areas vary because of differences in the design and layout of individual airports, but all access controls must meet minimum performance standards in accordance with TSA requirements.

Airport operators and air carriers may also choose to implement measures beyond what is required by TSA, but they may choose not to incorporate these additional measures into their security programs because, once incorporated, TSA could hold the regulated entities accountable for implementing them. By not incorporating the additional measures into their security programs, airport operators and air carriers retain the flexibility to alter such measures without TSA approval.

The security measures that airport operators and air carriers implement are generally carried out within, or to prevent access to, security-restricted areas of an airport or aircraft. These areas include:

Secured areas. Areas for which security measures, such as access controls, must be carried out to prevent and detect the unauthorized entry, presence, and movement of individuals and ground vehicles. This includes areas where domestic and foreign air carriers enplane and deplane passengers and sort and load baggage, and any adjacent areas not separated by adequate security measures.

Security identification display areas (SIDA).
Areas for which security measures, such as personnel identification systems, must be carried out to prevent the unauthorized presence and movement of individuals.

Air operations areas. Areas for which measures must be carried out to prevent and detect the unauthorized entry, presence, and movement of individuals and ground vehicles. This includes aircraft movement and parking areas, loading ramps, and safety areas for use by TSA-regulated aircraft, and any adjacent areas not separated by adequate security systems, measures, or procedures.

Sterile areas. Areas that, in general, provide passengers access to boarding aircraft and to which access is controlled through the screening of passengers and property.

Figure 1 illustrates the variety of security-restricted areas of a typical larger airport, such as a category X or I airport, and aviation stakeholders’ primary responsibilities for securing the area.

TSA’s Insider Threat Program and Insider Threat Incidents

TSA’s Insider Threat Program, which was established in 2013, consists of offices across TSA conducting different portions of the insider threat mission, with TSA’s Law Enforcement/Federal Air Marshal Service office serving as the program lead. The program’s mission is to deter, detect, and mitigate insider threats to the nation’s transportation sector personnel, operations, information, and critical infrastructure. Other TSA offices that have key responsibilities in the Insider Threat Program include TSA’s Security Operations; Enrollment Services and Vetting Programs; Inspection; Intelligence and Analysis; and Policy, Plans, and Engagement, among others. To support inter-office coordination, TSA established the Insider Threat Advisory Group in 2015, which is a multi-office team of experts who review and analyze the program’s activities, identify gaps, and develop mitigation strategies, among other activities.
The group is co-chaired by two TSA offices—Law Enforcement/Federal Air Marshal Service and Intelligence and Analysis. TSA’s Insider Threat Unit, which operates within the Law Enforcement/Federal Air Marshal Service office, serves as the focal point for all referrals of potential insider threat incidents. According to TSA, an insider threat includes direct risks to TSA’s security operations, as well as indirect risks that may compromise critical infrastructure or undermine the integrity of the aviation security system. Examples of insider threat events include compromises of airport security (e.g. using access and knowledge to smuggle contraband) and sabotage (e.g. intentionally damaging equipment meant to detect unauthorized access to security-restricted areas). TSA recognizes, however, that some insider threats may arise from complacency or ignorance rather than a malicious intent to cause harm, such as when workers assume a negligent approach to policies, procedures, and potential risks. The Insider Threat Unit receives referrals from a telephone tip line and email address; daily reports from the Transportation Security Operations Center detailing security policy violations, such as aviation workers attempting to bring prohibited items not necessary to their work duties into security-restricted areas of the airport; and internal and external intelligence reports and referrals. After a referral is made, the unit is to coordinate, disseminate, and retain all information when reviewing referrals and conducting investigations into potential insider threats. Specifically, the unit is to coordinate inquiries and investigations with the appropriate lead entities to include TSA offices; federal, state, and local law enforcement and intelligence agencies; and various airport and transit law enforcement authorities. 
According to one TSA official, many of these referrals do not require additional investigation because they were already appropriately mitigated at the local level. Referrals that meet the unit’s criteria are accepted for further investigation—called acceptances. Criteria include, for example, whether the incident involved a prohibited item, the perpetrator has multiple violations, the perpetrator attempted to circumvent security, or the perpetrator made threatening statements. According to Insider Threat Unit data from fiscal year 2017 through fiscal year 2019, there were an average of 138 referrals and 14 acceptances per month. The majority of referrals accepted for investigation during this time period occurred at category X and I airports (63 and 25 percent, respectively). Referrals in which air carrier employees or other aviation workers were the potential insider threat each accounted for approximately one-third of referrals accepted for investigation. Table 1 provides examples of insider threat incidents.

TSA, Airport Operators, and Air Carriers Help Mitigate Insider Threats through Various Efforts

TSA has ongoing activities that help mitigate insider threats, including long-standing historical efforts and more recent efforts initiated since 2017. For example, TSA initiated operations to randomly search aviation workers at high-risk airports through pat-down searches and explosives trace detection. TSA also has plans to enhance its current Insider Threat Program.

Airport operators are to implement security measures, primarily in accordance with their TSA-approved security programs, which detail the day-to-day operations of those entities and their responsibilities for controlling access to security-restricted areas, among other responsibilities. Based on our analysis of TSA’s representative sample, some airport operators choose to implement security measures beyond those required by TSA.
For example, some airport operators use sophisticated technologies such as fingerprint readers to control access to security-restricted areas, or offer or require training for aviation workers about topics such as insider threats. Similarly, air carriers are to implement security measures in accordance with TSA-approved security programs. For example, air carriers are required to perform regular searches of aircraft. Some air carriers we spoke to said they also choose to implement additional measures not required by TSA to enhance their security posture, such as conducting full employee screening at dedicated checkpoints. Figure 2 provides examples of the variety of security procedures and technologies used by TSA, airport operators, and air carriers at typical category X or I airports to control access to security-restricted areas of airports and help mitigate insider threats. These efforts vary by airport, local needs, and resources available, among other factors.

TSA Has Ongoing Efforts to Help Mitigate Insider Threats and Plans to Further Enhance Its Insider Threat Program

TSA’s Long-standing Efforts that Help Mitigate Insider Threats

TSA has long-standing, established activities that help mitigate insider threats by directly or indirectly regulating or facilitating security at commercial airports. Specifically, TSA has programs to increase awareness of insider threats in the aviation community, analyze and disseminate intelligence, vet aviation workers and TSA staff, inspect and assess security at airports, and share information with the aviation community. We have previously reported on these efforts in our work on aviation security and perimeter and access control security at airports.

Awareness and training.
TSA promotes awareness of insider threats to the aviation community and disseminates materials on how to identify and report insider threats to aviation stakeholders, which they may use on a voluntary basis.

Analyze and disseminate intelligence. TSA evaluates intelligence information related to both domestic and international adversaries (such as terrorists) who seek to leverage insiders and target the U.S. transportation system, among other things. TSA regularly disseminates this information to aviation stakeholders through TSA’s intelligence officers at its field offices, for example. There are approximately 80 field intelligence officers stationed throughout the U.S., Puerto Rico, and Guam, who provide information to airport officials and the aviation community on insider tactics and emerging threats, among other things.

Vetting aviation workers. TSA facilitates background checks of aviation workers (e.g. baggage handlers and concessionaire employees) applying for unescorted access to security-restricted areas of airports. The background check includes a Security Threat Assessment that is generally made up of three parts: (1) near real-time vetting against terrorism watch lists and other federal databases, (2) verification of the applicant’s lawful presence in the United States, and (3) a fingerprint-based criminal history records check. Additionally, TSA staff, such as transportation security officers, undergo a pre-employment screening, including all parts of the Security Threat Assessment and other security checks, and a background investigation to determine the applicant’s suitability for the position. Depending upon their job duties, TSA staff at airports may be issued credentials for unescorted access to security-restricted areas of an airport.

Inspections and assessments. Staff at TSA compliance hubs (field offices) inspect airports and air carriers and test security measures to ensure compliance with federal requirements.
To further enhance airport security, TSA also performs comprehensive, targeted, and supplemental inspections and other compliance activities, such as assessments, investigations, and tests.

Guidance, policies, and information sharing. TSA issues guidance and policies that, among other things, require airport operators and air carriers to implement or enhance access controls or other security measures, or share best practices on improving security and mitigating insider threats. TSA regularly communicates with aviation stakeholders to discuss security issues and policies.

TSA’s Recent Efforts to Mitigate Insider Threats

Since the beginning of fiscal year 2017, TSA has implemented a variety of activities to oversee and facilitate insider threat mitigation at commercial airports, either through new activities or by enhancing ongoing efforts. Among other things, TSA has taken steps to further augment vetting of aviation workers, enhance aviation worker screening, test airport security targeted toward identifying insider risks and vulnerabilities, and develop reference tools and guidance. See below for examples of TSA’s insider threat mitigation efforts initiated since the beginning of fiscal year 2017.

Social media analysis. TSA augmented the vetting process for aviation workers, described above, in 2018 to include an evaluation of publicly available social media information for individuals who match against a federal watch list and are applying for unescorted access to security-restricted areas of an airport. TSA uses information about the individual, including the social media information, to conduct the security threat assessment and determine whether to approve or deny the application.

Proposed requirement for Rap Back enrollment. The Federal Bureau of Investigation’s Rap Back Service provides participating entities with ongoing notification of subsequent criminal activity that occurs after an individual’s initial criminal history records check.
In 2019, TSA proposed requiring airport operators and air carriers to enroll in Rap Back and to subscribe covered aviation workers. As of December 2019, TSA has not yet imposed this requirement. Physical Screening of Aviation Workers Advanced Threat Local Allocation Strategy (ATLAS). TSA’s ATLAS tool generates a randomized schedule and location of procedures to physically screen aviation workers. The ATLAS tool randomly identifies the type of screening procedure by balancing on-person screenings, such as pat-down searches, and in-property screenings, such as testing for traces of explosives on workers’ property. Federal security directors may tailor the screenings and location based on local intelligence. TSA started using ATLAS in 2018 at high-risk airports to screen aviation workers entering or within security-restricted areas. Covert testing. TSA’s covert testing teams help identify security vulnerabilities in multiple aspects of aviation security (including airport access controls and vulnerabilities to insiders) and may recommend additional measures or procedures be implemented to mitigate these vulnerabilities. As described above, TSA increased the number of covert tests related to airport access controls and insider vulnerabilities in response to provisions of the Aviation Security Act of 2016. Further, in 2019, TSA began a covert test to assess vulnerabilities in TSA’s ATLAS program. Joint Vulnerability Assessment. Joint teams of TSA and Federal Bureau of Investigation officials assess vulnerabilities in multiple aspects of airport security and operations including fuel, cargo, catering, general aviation, terminal area, and law enforcement operations. The assessments are conducted at commercial airports identified as high-risk every three years and on a case-by-case basis at other airports. TSA revised the joint vulnerability assessment process in fiscal year 2017 to identify insider threat vulnerabilities and to suggest options to mitigate them.
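TSA has not published ATLAS internals, but the kind of randomized, balanced scheduling the tool is described above as performing (unpredictable times and locations, with a balance between on-person and in-property procedures) could be sketched as follows; the access point names, hours, and alternation scheme are hypothetical illustrations only:

```python
import random

# Illustrative sketch of randomized worker-screening scheduling: pick
# unpredictable times and locations, and balance on-person vs.
# in-property procedures so neither becomes predictable.
# Not TSA's actual algorithm; names and values are hypothetical.

ACCESS_POINTS = ["Door A12", "Cargo Gate 3", "Vehicle Gate 1"]
PROCEDURES = ["on-person (pat-down)", "in-property (explosives trace)"]

def daily_schedule(num_operations, rng=None):
    """Return (hour, access_point, procedure) tuples, with the two
    procedure types split roughly evenly across the day."""
    rng = rng or random.Random()
    schedule = []
    for i in range(num_operations):
        hour = rng.randint(4, 23)          # randomized start time
        point = rng.choice(ACCESS_POINTS)  # randomized location
        procedure = PROCEDURES[i % 2]      # alternate to balance types
        schedule.append((hour, point, procedure))
    rng.shuffle(schedule)                  # hide the alternation pattern
    return schedule

for operation in daily_schedule(4, random.Random(7)):
    print(operation)
```

A federal security director tailoring screenings to local intelligence could correspond to weighting particular access points or hours rather than choosing uniformly.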
Insider Threat Mitigation Activity. In addition to the regular airport inspection and assessment duties, starting in fiscal year 2017, TSA required its aviation transportation security inspectors to conduct unannounced tests related to mitigating insider threats every fiscal year. Guidance, Notice, and Information Sharing Fraudulent identification guidance. In fiscal year 2017, TSA developed guidance for airport operators and air carriers on detecting fraudulent identification documents, including methods for detecting fraudulent identification and appropriate responses when discovered. Security directives. TSA updated a security directive in 2018 to mitigate potential insider threats by, among other things, requiring airport operators to post signs at sterile area entry points accessible by credentialed aviation workers. These signs advise individuals that they may be subject to inspection, among other things. Additionally, airport operators are required to conduct random inspections of vehicles when entering secured areas. Information Circulars. TSA issued information circulars in 2018 and 2019 that (1) recommended that airport operators and air carriers with exclusive area agreements conduct a vulnerability assessment of insider risks and develop a risk mitigation plan, and included best practices for the mitigation plan, and (2) described measures to prevent unauthorized access to aircraft and the flight deck. Efforts to Enhance the Insider Threat Program TSA has implemented efforts aimed toward enhancing its Insider Threat Program. TSA established an Executive Steering Committee with members from the program’s key offices to provide executive support and oversight across the multiple offices that compose the program. Also, TSA’s Insider Threat Advisory Group collaborated with the Aviation Security Advisory Committee (ASAC) to review and develop recommendations that would address gaps, redundancies, and vulnerabilities in the program. 
TSA Insider Threat Executive Steering Committee. TSA established the Steering Committee in October 2018 to be the central oversight body for managing insider risks and coordinating the agency’s mitigation strategies. Its purpose is to facilitate collaboration and decision-making across the program’s multiple offices, advance an integrated agency-wide strategy, and establish consistent executive support for TSA and ASAC efforts, among other things. Its work to date includes reviewing the 2019 ASAC recommendations described above and approving the development of the Insider Threat Roadmap, which is to describe TSA’s strategic vision. TSA Administrator’s Intent initiatives. Several objectives and initiatives from the Administrator’s Intent, published in June 2018, relate to mitigating insider threats. It identifies specific priorities, strategic goals, and objectives that the Administrator plans to accomplish by 2020. For example, one objective is to modernize TSA’s Insider Threat Program by, among other initiatives, expanding the Insider Threat Unit with dedicated staff from several key TSA offices. ASAC Subcommittee on Insider Threats. In 2018, the ASAC established a permanent, joint industry-government Subcommittee with members from TSA and various aviation stakeholders. The purpose of the Subcommittee is to provide a holistic and sustained body to research and make recommendations on risks posed by aviation workers to harm the aviation system. Previously, ASAC convened an industry-only Working Group on Airport Access Control on an as-needed basis. ASAC recommendations. In May 2019, at the request of the TSA Administrator, the ASAC issued a report to help enhance and broaden TSA’s Insider Threat Program through 21 recommendations. The recommendations span six areas of the insider threat concept: 1. threat detection, assessment, and response; 2. aviation worker vetting and evaluation; 3. aviation worker screening and access control; 4. 5. 6. 
governance and internal controls. TSA concurred with all 21 of the recommendations. As of October 2019, TSA officials reported that the agency had implemented one of the recommendations and created a document that details implementation steps for the remaining 20, progress on those implementation steps, and estimated timeframes for completion. According to TSA officials, previous recommendations made by ASAC have significantly contributed to the establishment and development of the Insider Threat Program, and they anticipate the 2019 report’s recommendations will have a similar positive effect. Further, TSA officials said that the next iteration of the Administrator’s Intent will incorporate these ASAC recommendations to help ensure that their implementation is tracked at the enterprise level. Many Airport Operators Reported Screening Workers, Using Access Controls, and Providing Training that Exceed Regulatory Requirements and Help Mitigate Insider Threats Overall, many airport operators help ensure the security of their facilities, including mitigating insider threats, through their efforts to comply with TSA regulations. However, airport operators may also implement additional measures beyond those required by TSA to improve their security posture. Some examples of voluntary efforts airport operators have reported implementing to help mitigate insider threats include physical screening of aviation workers at access points to SIDAs or secured areas in addition to TSA’s random screening under the ATLAS program, using sophisticated access control technologies such as biometric fingerprint readers, and offering or requiring training for aviation workers on additional security awareness topics. Aviation Worker Screening Although TSA requires airport operators to perform random aviation worker screening at sterile area access points, it does not require them to physically screen all aviation workers at all access points to security-restricted areas, at all times.
However, some airport operators choose to voluntarily implement screening programs to physically search some or all workers or their property as they enter security-restricted areas. According to our analysis of TSA data collected in July through September 2019 from a representative sample of airports on their current insider threat mitigation measures, seven of 27 category X airports’ officials and 13 of 54 category I airports’ officials reported that when they screen aviation workers passing through an access point, they screen 100 percent of workers, their property, and their vehicles (if the screening operations take place at a vehicle access point). Airport officials from four of 44 sampled category II airports, 10 of 54 sampled category III airports, and one of 58 sampled category IV airports reported that they screen 100 percent of workers when screening operations are underway. At one category X airport we visited, airport officials said they implemented full worker screening, following the lead of one tenant air carrier. According to the officials, the airport has two worker screening checkpoints in the publicly accessible baggage claim area that are used by all workers entering the security-restricted areas. These checkpoints use X-ray machines, explosives trace detection, and walk-through metal detectors to screen aviation workers and their property and ensure they do not carry items that are otherwise prohibited (e.g. firearms and illicit substances) and not required to perform their work duties beyond the worker checkpoint. Airport officials said these checkpoints are staffed by a dedicated crew of screeners employed by the airport operator, and officials believe having a consistent crew over time makes it easier for screeners to detect if a worker is behaving in an uncharacteristic or suspicious way.
At one category I airport we visited, officials said that they established an insider threat program and implemented measures to mitigate insider threats in response to an illegal drug smuggling operation involving aviation workers that occurred at their airport. For example, they partner with TSA and local law enforcement to conduct full worker screening operations two to three times per week at randomly selected times and locations, which supplements TSA’s ATLAS operations. Officials said during these operations, all arriving workers are funneled to the screening locations, and they are directed to walk through screening equipment that is capable of identifying metallic threats (e.g. guns and knives) and non-metallic threats (e.g. suicide vests and other weapons) both on person and in property. If the machines are not used, airport officials coordinate with TSA to conduct full-body pat-downs of all employees. Airport officials may also use open-and-look bag searches. At the same time, local law enforcement patrols the screening area with canine units to search for drugs and explosives. Access Control Technology at Airports In general, category X, I, II, and III airports are required to implement measures to control access and prevent unauthorized entry to security-restricted areas of the airport. Airports choose their specific access control system and technology, such as cipher or keyed locks, proximity swipe cards, PIN readers, and biometric (e.g. fingerprint) authentication, provided such technology meets the standards of their TSA-approved security program. Category IV airports—which are typically the smallest commercial airports—are generally not required to identify security-restricted areas within their security programs and thus may not have mechanisms in place to control access to such areas.
However, like the larger commercial airports, security programs for category IV airports must provide for adequate law enforcement support, and airport operators at these airports may choose to establish security-restricted areas and implement access control technologies or other measures at their discretion. According to our analysis of TSA data collected in July through September 2019 from a representative sample of airports, officials from most category X, I, and II airports reported that they have systems that use more than one technology to control access to sterile and secured areas of the airport, as shown in figure 3. Among category III airports, officials from 27 of 54 also reported using multiple technologies. Among category IV airports, officials from 37 of 58 reported using some type of access control technology, the most common being locks and keys. Technology at two category X airports we visited is used specifically to prevent workers from “piggybacking,” or attempting to enter security-restricted areas by following close behind another worker without swiping a proximity card or entering a PIN for access. For example, one airport has sensor towers at high-traffic doors from unsecured to secured areas of the airport. The two towers—one on each side of the door—can detect if more than one person crosses the threshold after only a single proximity card swipe and PIN entry. According to airport officials, when this happens, the nearby security cameras will pan toward the door so that security officials who monitor the feeds can view the individuals at the door and respond appropriately. Figure 4, below, shows this technology, as well as the proximity card reader and PIN pad, a separate reader and pad for elevator access, and signs describing security rules. At a second category X airport we visited, locking turnstiles are used to prevent piggybacking.
Each worker who wishes to go through the access point must present their proximity badge and provide a fingerprint. Only then will the locked turnstiles unlock to allow that worker through. The turnstiles are on a timer, so if a worker does not go through within a set time, they will have to repeat the process from the beginning. Additionally, if a badge is presented more than one time within a specified time period, an alarm is triggered in the Airport’s Security Operations Center to alert airport security staff of a potential piggybacking incident. Figure 5 shows the card reader, fingerprint reader, and turnstile in use at one access point. Behind the turnstile, a TSA agent conducting ATLAS countermeasures waits for workers to come through. Training In general, according to TSA requirements, individuals with unescorted access to security-restricted areas of category X, I, II, and III airports must be trained on, among other things, escort procedures and the display and use of identification media. All airport operators across all airport categories must ensure that training for law enforcement personnel addresses the airport’s security program, among other security-related topics. For training offerings beyond what is required by TSA, our analysis of TSA data collected in July through September 2019 from a representative sample of airports showed the majority of airport operators at category X, I, II, and III airports reported that they offered or required training for aviation workers that specifically discusses insider threats, as shown in Table 2. Moreover, although they are not required to do so by TSA, many category IV airports reported they offer or require training on a variety of security-related topics, such as insider threats and reporting suspicious behavior and unusual activity.
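The duplicate-badge alarm described above (a badge presented more than once within a specified time period flags a potential piggybacking incident) is a simple anti-passback check. It can be sketched as follows; the 30-second window and badge identifiers are hypothetical, not the airport’s actual configuration:

```python
# Hypothetical anti-passback sketch: flag any badge presented more than
# once within a configurable window as a potential piggybacking attempt.
ANTI_PASSBACK_WINDOW = 30.0  # seconds; illustrative value only

class AntiPassbackMonitor:
    def __init__(self, window=ANTI_PASSBACK_WINDOW):
        self.window = window
        self.last_seen = {}  # badge_id -> timestamp of last presentation

    def present_badge(self, badge_id, timestamp):
        """Return True if this presentation should raise an alarm in the
        security operations center (same badge seen within the window)."""
        alarm = (badge_id in self.last_seen and
                 timestamp - self.last_seen[badge_id] < self.window)
        self.last_seen[badge_id] = timestamp
        return alarm
```

For example, a monitor seeing badge "W123" at t=0 and again at t=10 seconds would raise an alarm, while a second presentation well outside the window would not. The turnstile timer described above addresses the complementary case of a worker who badges in but never passes through.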
Air Carriers in Our Review Reported Mitigating Insider Threats by Complying with TSA Requirements, and Some Reported Supplementing Their Efforts The six air carriers we spoke with reported they mitigate insider threats via their efforts to comply with federal requirements through their TSA-approved security programs. In general, federal regulations require that air carriers employ a variety of procedures to mitigate security threats. Among others, these measures may include: Preventing unauthorized access to security-restricted areas over which they have primary responsibility, such as aircraft (e.g. by performing regular searches) and areas covered by an exclusive area agreement, as applicable; Submitting applicant biographic information for criminal history records checks prior to issuing air carrier identification media or recommending that airport operators issue access credentials that grant an individual unescorted access to security-restricted areas of the airport; Using personnel identification systems that track information such as identification media expiration dates and appropriate level of access; and Providing training for workers who perform security-related duties or otherwise require access to security-restricted areas. Air carriers may also choose to voluntarily implement additional efforts to improve their security posture. As described above, these may be incorporated into an individual air carrier’s security program, but not necessarily. Air carriers we spoke with have implemented a variety of security measures. For example: To prevent unauthorized access to secured areas included in their exclusive area agreement or within their operations area, all air carriers we spoke to said they secure their facilities by employing at least one form of access control technology.
The majority of air carriers (five of six) reported that they secure most access points with proximity card or fob readers, including one air carrier that reported it secures its access doors using additional measures beyond a proximity card swipe, requiring a PIN and a fingerprint as well. The sixth air carrier we spoke to said workers access security-restricted areas using keys or cipher combinations. Prospective air carrier employees may need access media credentials from the airport operator in addition to the air carrier. In some cases, the air carrier will accept the criminal history records check conducted by the airport operator to issue its own credentials, but officials from some air carriers we spoke to said they conduct more rigorous checks before issuing their air carrier credentials. For instance, one air carrier reported that it checks the applicant’s employment history in addition to their criminal history, and it uses an additional set of disqualifying criteria beyond the regulatory minimum to determine suitability for hire. Some air carriers choose to further enhance their insider threat mitigation efforts. For example, one air carrier has a dedicated insider threat program and, at 16 airports, it implemented a screening program of workers and their belongings at dedicated checkpoints. Another air carrier created a team to monitor the use of the Known Crewmember program, a screening program that provides flight and cabin crews with expedited screening that may include a dedicated screening lane. According to air carrier officials, at its largest hub airport, the team reports to TSA on workers from all air carriers who violate the program’s rules. Some examples of such violations include crewmembers using the dedicated lane for leisure international travel or carrying other individuals’ bags through the Known Crewmember portal or passenger screening checkpoint and into sterile areas of the airport.
TSA’s Insider Threat Program is Not Guided by a Strategic Plan with Goals and Objectives, nor Performance Goals to Assess Program Performance TSA’s Insider Threat Program Does Not Have a Strategic Plan with Goals and Objectives Although TSA has multiple ongoing efforts to mitigate insider threats at commercial airports carried out by a number of offices, it does not have a strategic plan in place to guide its Insider Threat Program. When the program began in 2013, TSA initially developed a 2014-2016 Insider Threat Action Plan, which described TSA’s vision of an integrated insider threat program at TSA, and it included strategic goals, each with a set of objectives. However, according to TSA officials, TSA did not fully implement this Action Plan, and TSA did not renew or revise the Action Plan after 2016 due to the departure of the key sponsoring senior leader. Further, TSA officials said that the Action Plan does not reflect all the existing activities that TSA’s Insider Threat Program currently encompasses because the program has changed since 2014. TSA is aware of the importance of strategic planning and took steps to strategically plan for other programmatic efforts at the agency. For example, in 2019, TSA revised its National Strategy for Airport Perimeter and Access Control Security. This strategy describes how TSA seeks to secure the perimeter and control access to security-restricted areas of U.S. commercial airports, which is one concern related to insider threats. In 2018, TSA published its Administrator’s Intent to outline how TSA planned to execute its agency-wide strategy in the short term. The Intent includes one strategic objective to modernize elements of TSA’s Insider Threat Program, such as vetting capabilities. Also in 2018, TSA published the Cybersecurity Roadmap 2018, which details the agency’s efforts to protect its information technology infrastructure from adversaries who might seek to cause harm. 
Each of these documents contains the critical elements of strategic plans that are laid out by the Office of Management and Budget, including strategic goals and objectives. These strategic planning documents contain elements related to insider threats and can be drawn upon to help develop a comprehensive strategic plan that encompasses the myriad activities across its many offices that compose TSA’s Insider Threat Program. In October 2018, TSA established the Insider Threat Executive Steering Committee in an effort to establish consistent executive-level engagement and support from the agency’s senior management. As described above, TSA’s Insider Threat Program is carried out by multiple, distinct offices at TSA, and TSA officials have indicated that the program could benefit from a more cohesive approach and oversight. During the course of our review, the Steering Committee approved the development of an Insider Risk Roadmap (Roadmap). According to TSA officials, the Roadmap is under development as of January 2020, and when completed, is to describe the future of insider risk mitigation for TSA. TSA officials were uncertain, however, of when the Roadmap would be completed and implemented. Given that TSA did not fully implement its 2014-2016 Insider Threat Action Plan, and it was never renewed or revised, it is important that TSA remain committed to developing and implementing the Roadmap and, as it moves forward in drafting the Roadmap, ensuring that it contains the critical elements of a strategic plan, including strategic goals and objectives. Federal internal control standards establish that management should define the entity’s objectives clearly and in alignment with the entity’s mission and strategic plan. Objectives should specifically identify what is to be achieved, how, by whom, and in what time frame, and should be defined in measurable terms so that performance toward achieving such objectives can be assessed consistently.
More specifically, the Office of Management and Budget clarifies that a strategic goal articulates clearly what the agency wants to achieve to advance its mission, while strategic objectives reflect the outcome or impact the agency is trying to achieve and should facilitate prioritization and assessment for planning, management, reporting, and evaluation. For example, mission-focused strategic objectives express specifically the path an agency plans to follow to achieve or make progress on a single strategic goal. Having a strategic plan for its Insider Threat Program would better position TSA to ensure it is effectively coordinating across its multiple offices and leveraging each office’s resources to mitigate insider threats, a threat that has consistently been identified as the second-highest enterprise-level risk. A strategic plan, such as those included in TSA’s other roadmaps, would help both to (1) link these individual efforts to the program’s strategic goals and (2) describe how they contribute to the achievement of those goals and the agency’s stated mission. TSA officials agreed that developing and implementing a strategic plan such as those associated with its other roadmaps would help ensure that (1) its efforts to develop the Insider Threat Roadmap would continue to progress and (2) executive-level support for strategic planning would remain a priority. TSA Does Not Have Performance Goals to Assess Its Insider Threat Program Individual TSA offices have made progress developing methods to assess their individual office’s efforts, but TSA does not have a comprehensive set of performance goals that can be used to assess progress toward achieving the Insider Threat Program’s stated mission.
The National Insider Threat Task Force, established under Executive Order 13587 of October 7, 2011, outlined the minimum standards and basic elements of an insider threat program as well as a Maturity Framework to help Executive Branch departments and agencies, such as TSA, increase the effectiveness of their insider threat programs, among other things. According to the Framework, program senior officials should use metrics to represent progress and better articulate the central role of its insider threat program in achieving the department or agency’s strategic objectives. The Office of Management and Budget specifies that performance goals are statements of the desired performance target to be accomplished within a certain timeframe, and a suite of performance goals should be used to assess progress toward achieving each strategic objective. Federal standards for internal control also state that entities should use performance goals to evaluate their performance in achieving their strategic objectives. Some TSA offices have developed indicators for measuring characteristics of their insider threat activities, but these do not exhibit the characteristics of performance goals as defined by the Office of Management and Budget. For example, TSA’s Security Operations office developed Key Performance Indicators for its ATLAS operations, which are operational indicators for the TSA staff carrying out the countermeasures. These include that teams must screen a percentage of workers who pass through the checkpoint and must meet their assigned screening time allotment. However, operational indicators such as these do not include baselines and timeframes for completion, which are characteristics of performance goals as described by the Office of Management and Budget. 
Moreover, the Insider Threat Program is without a strategic plan, and as a result, these operational indicators cannot link back to a strategic objective or show progress achieving such an objective, as called for by the Office of Management and Budget guidance. TSA identified the need to develop performance goals to assess its progress and effectiveness in its 2014-2016 Insider Threat Action Plan, which called for “a performance management system [that] monitors and measures effectiveness of insider threat program.” According to officials, such a performance management system was never developed because of the departure of the key senior leader, as described above. Further, in its May 2019 report to the Administrator, ASAC recommended that TSA develop measures that assess the performance of its insider threat efforts. For example, ASAC recommended that TSA commission a comprehensive federally funded research and development center to assist TSA in evaluating the performance of random or unpredictable aviation worker screening methods to mitigate insider threats. The report indicated that establishing measures of effectiveness and evaluating performance on such measures is “vital to proactive and effective insider threat management.” TSA officials said that the planned Insider Risk Roadmap may include performance goals for the Insider Threat Program, in addition to strategic goals and objectives. However, previous examples of Roadmaps for TSA efforts did not include references to specific, measurable performance goals that can be used to represent progress via targets and timeframes. Moreover, as described above, TSA officials are still drafting the Roadmap and are uncertain when it will be issued. Having documented and clearly defined performance goals that are linked to the program’s overarching strategic goals and objectives would better position TSA to understand the effectiveness of its insider threat efforts.
As a result, TSA would be able to reduce the likelihood of expending resources on efforts that are not meeting the program’s stated mission. Focusing on the intended results of TSA’s insider threat efforts can promote strategic and disciplined management decisions that are more likely to be effective because managers are better able to target areas most in need of improvement and to select appropriate levels of investment. TSA could determine the success of its strategies, adjust its approach when necessary, and remain focused on results. Further, agency accountability can be enhanced when both agency management and external stakeholders—such as Congress—can assess an agency’s progress toward meeting its strategic goals. By developing such performance goals, TSA will better position itself to determine the Insider Threat Program’s progress toward achieving its mission of deterring, detecting, and mitigating insider threats to the aviation sector. Conclusions TSA has consistently identified the insider threat among its highest enterprise-level risks and characterizes it as a significant and complex risk to aviation security. In the last ten years, TSA and aviation stakeholders have faced a consistent threat posed by insiders who used their access privileges and knowledge to commit criminal acts, such as drug smuggling, gun smuggling, theft, and attempted suicide bombing. Having an effective Insider Threat Program is critical to TSA’s ability to mitigate the risk of insiders causing harm to the civil aviation system. Since establishing its Insider Threat Program in 2013, TSA has taken steps to strengthen its efforts to combat the insider threat, such as by implementing a program to physically screen aviation workers at high-risk airports. However, responsibility for the Insider Threat Program is spread across multiple offices within TSA, which has made it challenging to synchronize and integrate activities across each office’s efforts.
As of January 2020, TSA officials said that the Insider Threat Program does not have a strategic plan. However, officials said they are developing a new strategic “roadmap” for the Insider Threat Program but are uncertain when it will be issued. Developing and implementing a strategic plan with strategic goals and objectives will help improve coordination across the program’s multiple offices and prioritize and focus TSA’s efforts to ensure that resources are targeted effectively. Additionally, TSA has not established performance goals to help assess its overall progress in achieving its Insider Threat mission. With specific performance goals tied to strategic objectives, TSA will have the necessary mechanism to assess the extent to which the program is achieving its objectives and overall mission. TSA has numerous efforts across the agency to address insider threats; with performance goals, the program could assess progress; identify successes, gaps, and redundancies; and prioritize and allocate resources effectively. When dealing with a program designed to keep the aviation system safe from criminal and terrorist acts, agency leaders and policy makers need to know how well the government is doing in implementing its objectives. Establishing performance goals will help the agency and Congress assess the progress of the overall insider threat effort, target areas most in need of improvement, and select appropriate levels of investment. Recommendations for Executive Action We are making the following two recommendations to TSA: The TSA Administrator should develop and implement a strategic plan for its Insider Threat Program that includes strategic goals and objectives. (Recommendation 1) The TSA Administrator should develop performance goals for its Insider Threat Program that assess progress achieving the strategic objectives in the insider threat strategic plan.
(Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Homeland Security (DHS) for comment. In written comments, which are included in appendix I, DHS concurred with our two recommendations and described steps it plans to take to implement them, including an estimated timeframe for completion. TSA also provided technical comments, which we incorporated as appropriate. In response to our recommendations, DHS’s letter notes that TSA is in the process of drafting the 2020 Insider Threat Roadmap, which will include strategic goals and objectives to guide TSA in its efforts to mitigate insider threats. The letter further explains that the Roadmap will include performance measures to assess TSA’s progress achieving those strategic objectives. If fully implemented, these actions should address the intent of the recommendations. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or McNeilT@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, William Russell (Director), Kevin Heinz (Assistant Director), Winchee Lin (Analyst in Charge), Sarah Williamson, Benjamin Crossley, Dominick Dale, Daniel Gaud, Thomas Lombardi, and Amanda Miller made key contributions to this report.
Why GAO Did This Study

The threat of aviation workers using their access privileges to exploit vulnerabilities and potentially cause harm at the nation's airports is known as an “insider threat.” TSA, airport operators, and air carriers share the responsibility to mitigate all insider threats at airports. In October 2019, TSA estimated there are about 1.8 million aviation workers at the nation's airports. GAO was asked to review TSA's and aviation stakeholders' efforts to mitigate insider threats at airports. This report (1) discusses the efforts that TSA, airport operators, and air carriers have taken to help mitigate insider threats at airports and (2) evaluates the extent to which TSA's Insider Threat Program is guided by a strategic plan and has performance goals. GAO reviewed TSA guidance; analyzed TSA data from a questionnaire sent to a representative sample of airport operators; and obtained information from TSA officials, officials from selected larger U.S.-based air carriers, and a nongeneralizable sample of seven airport operators, selected, in part, based on the number of aircraft take-offs and landings.

What GAO Found

The Transportation Security Administration (TSA), airport operators, and air carriers mitigate insider threats through a variety of efforts. TSA's Insider Threat Program comprises multiple TSA offices with ongoing insider threat mitigation activities, including long-standing requirements addressing access controls and background checks, and compliance inspections. TSA also initiated activities more recently, such as implementing TSA-led, randomized worker screenings in 2018. Airport and air carrier officials implement security measures in accordance with TSA-approved programs and may implement additional measures to further mitigate threats. For example, many airport operators reported using sophisticated access control technologies (e.g., fingerprint readers).
Additionally, some air carriers reported conducting more rigorous background checks prior to issuing identification credentials to employees. TSA’s Insider Threat Program is not guided by a strategic plan with strategic goals and objectives, nor does it have performance goals. TSA does not have an updated strategic plan that reflects the Program's current status. TSA officials said that the plan was not updated due to turnover of key senior leadership. As of January 2020, TSA officials said they were developing a roadmap that could serve as a new strategic plan for the Program. However, officials had not finalized the contents and were uncertain when it would be completed and implemented. Developing and implementing a strategic plan will help guide TSA's ongoing efforts and coordinate TSA's agency-wide approach. TSA has not defined performance goals with targets and time frames to assess progress achieving the Program's mission. Without a strategic plan and performance goals, it is difficult for TSA to determine if its approach is working and progress is being made toward deterring, detecting, and mitigating insider threats to the aviation sector.

What GAO Recommends

GAO recommends that TSA develop and implement a strategic plan that has strategic goals and objectives, and develop performance goals to assess progress achieving objectives in the strategic plan. TSA agreed with GAO's recommendations.
Background

Military Housing Privatization Authorities and Project Structures

DOD’s policy is to ensure that eligible personnel and their families have access to affordable, quality housing facilities and services consistent with grade and dependent status, and that the housing generally reflects contemporary community living standards. From the inception of MHPI, the military departments were provided with various authorities to obtain private-sector financing and management to repair, renovate, construct, and operate military housing in the United States and its territories. These authorities included the ability to make direct loans to and invest limited amounts of funds in projects for the construction and renovation of housing units for servicemembers and their families. The projects were generally financed through both private-sector financing, such as bank loans and bonds, and funds provided by the military departments. The Army and the Navy generally structured their privatized housing projects as limited liability companies in which the military departments formed partnerships with the developers and invested funds in the partnership. The Air Force generally provided direct loans to the developers. Because privatized housing projects involve budgetary commitments of the federal government, each project was scored at inception by the Office of Management and Budget to determine the amount of funds that needed to be budgeted for that particular project. The military departments have flexibility in how they structure their privatized housing projects, but typically the military departments lease land to developers for a 50-year term and convey existing housing located on the leased land to the developer for the duration of the lease. The developer then becomes responsible for renovating and constructing new housing and for the daily management of the housing units.
At the end of fiscal year 2017, 14 private partners were responsible for 79 privatized military family housing projects—34 for the Army, 32 for the Air Force, and 13 for the Navy and the Marine Corps. See appendix II for a list of all of these housing projects. Each privatized housing project is a separate and distinct entity governed by a series of legal agreements that are specific to that project, hereafter referred to as business agreements. These agreements include, among other things, an operating agreement, a property management agreement, and an agreement that describes the management of funds in the projects, including the order in which funds are allocated within the project. However, while each project is distinct, there are some common elements in how projects invest and use funds. Every project takes in revenue, which consists mostly of rent payments. Projects then pay for operating expenses, including administrative costs, day-to-day maintenance, and utilities, among other things. After that, projects generally allocate funds for taxes and insurance, followed by debt payments. Figure 1 shows a typical funding structure for a privatized housing project. In the typical privatized housing project depicted in figure 1, once debt payments are made, funds are allocated to accounts that fund scheduled maintenance, such as repair and replacement of items like roofs, heating and cooling systems, and infrastructure. After that, funds are allocated to a series of management incentive fees, such as the property management fee. Finally, the project divides these remaining funds according to a fixed percentage between accounts that (1) fund major renovations and rebuilds and (2) are provided to the developer. The percentages may vary across agreements, but according to military department documentation, typically, the majority of funds go toward the accounts funding major renovations and rebuilds. 
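The funding waterfall described above (revenue, then operating expenses, taxes and insurance, debt payments, scheduled maintenance, and incentive fees, with the residual split by a fixed percentage) can be sketched in a few lines of code. This is an illustrative sketch only: all dollar amounts and the 75-percent reinvestment share below are hypothetical, not drawn from any actual project’s business agreement.

```python
# Illustrative sketch of a typical privatized housing project funding
# waterfall. Figures are hypothetical placeholders, not project data.

def allocate_project_funds(revenue, operating_expenses, taxes_and_insurance,
                           debt_payment, scheduled_maintenance, incentive_fees,
                           reinvestment_share=0.75):
    """Apply allocations in waterfall order; split the residual funds
    between the reinvestment account and the developer."""
    remaining = revenue
    # Expenses are paid in the order described in the business agreements.
    for expense in (operating_expenses, taxes_and_insurance, debt_payment,
                    scheduled_maintenance, incentive_fees):
        remaining -= expense
    # Remaining funds are divided by a fixed percentage; typically the
    # majority goes to accounts funding major renovations and rebuilds.
    to_reinvestment = remaining * reinvestment_share
    to_developer = remaining - to_reinvestment
    return to_reinvestment, to_developer

reinvest, developer = allocate_project_funds(
    revenue=10_000_000, operating_expenses=4_000_000,
    taxes_and_insurance=500_000, debt_payment=3_000_000,
    scheduled_maintenance=1_000_000, incentive_fees=500_000)
# With these hypothetical inputs, 1,000,000 remains after the waterfall:
# 750,000 to reinvestment accounts and 250,000 to the developer.
```

The key design point the sketch captures is ordering: accounts lower in the waterfall, including the developer’s residual share, are funded only from what remains after every higher-priority allocation is made.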
Most of the projects’ business agreements also include the option for the private partners to receive performance incentive fees based on achieving the performance metrics established in each individual project’s business agreement. These fees are intended to incentivize private partner performance. The incentive fees can be paid to private partners on an annual or quarterly basis and can be withheld in part or in total if the private partner fails to meet the established metrics. The weight each performance metric and underlying indicator carries toward the incentive fee varies by project, so incentive fees for some projects may be heavily dependent on financial performance, while others may be more heavily weighted toward resident satisfaction.

DOD Goals, Roles, and Responsibilities in the Privatized Housing Program

The Deputy Assistant Secretary of Defense for Facilities Management, under the authority, direction, and control of the Assistant Secretary of Defense for Sustainment, is responsible for all matters related to MHPI and is the program manager for all DOD housing, whether DOD-owned, DOD-leased, or privatized. In this capacity, the Deputy Assistant Secretary is to provide both guidance and general procedures related to military housing privatization, as well as required annual reports to Congress on privatized military housing projects. However, it is the responsibility of the military departments to execute and manage the privatized housing projects, including conducting financial management and monitoring their portfolio of projects. Each military department has issued guidance that outlines its responsibilities for privatized housing, such as which offices are responsible for overseeing privatized housing projects. See figure 2 for details on each military department’s roles and responsibilities in the MHPI program.

Prior GAO Work

We have previously reported on DOD’s privatized housing program.
In 2002, we reported that although military installation officials were participating with developers in making improvement decisions for selected projects, DOD and military department headquarters oversight of those decisions appeared limited. We recommended, among other things, that DOD implement several changes to enhance government protections in the privatization program, such as requiring service headquarters and the Office of the Secretary of Defense (OSD) to review and approve privatization project reinvestment account expenditures over an established threshold. DOD generally agreed with our recommendations and took steps to implement them. Specifically, DOD revised guidance to establish new rules and thresholds for review and approval of project reinvestment expenditures, among other things. In addition, in 2006, we reported that although DOD and the individual military departments implemented program oversight policies and procedures to monitor the execution and performance of privatized housing projects, opportunities existed for improvement. Specifically, we reported that the value of DOD’s semiannual report to Congress was limited because it lacked a focus on key project performance metrics to help highlight any operational concerns. We also reported that data collected on servicemember satisfaction with housing, important for tracking satisfaction over time, were inconsistent and incomplete because DOD had not issued guidance for the standardized collection and reporting of such information. We recommended, among other things, that DOD streamline its report to Congress to focus on key project performance metrics and to provide guidance to the military departments to ensure the consistent collection and reporting of housing satisfaction information from all servicemembers. DOD generally agreed with our recommendations and took steps to implement them.
For example, DOD took steps to streamline its report to Congress and update its guidance directing the services to ensure consistent reporting using a numerical rating system to rank housing satisfaction information.

DOD Conducts Some Oversight of the Condition of Privatized Housing, but Efforts Are Limited in Key Areas

OSD and each of the military departments conduct a range of activities to oversee both the condition of privatized housing and performance of the private partners and have recently implemented initiatives to improve this oversight—such as increasing the frequency of the physical inspection of homes and issuing guidance to ensure consistency in the framework used to measure project performance. However, we found that these oversight efforts remain limited. Specifically, our review showed (1) the scope of oversight of the physical condition of privatized housing has been limited; (2) performance metrics focused on quality of maintenance and resident satisfaction may not accurately reflect private partner performance related to the condition of privatized housing; (3) there is a lack of reliable or consistent data on the condition of privatized housing; and (4) past DOD reports to Congress on resident satisfaction are unreliable due to the inconsistent handling and calculation of the data and therefore may be misleading.

Military Departments Conduct Some Oversight of the Physical Condition of Privatized Housing, but the Scope of These Efforts Is Limited

The military departments have taken steps to oversee the condition of their privatized military housing inventory and each has issued guidance that outlines their respective oversight roles and responsibilities, but the scope of these oversight efforts has been limited. Military department oversight activities generally fall into two categories—(1) daily oversight of management and operations and (2) periodic reviews of compliance with each project’s business agreements.
Daily oversight of management and operations. Each installation has a military housing office that is responsible for conducting daily oversight of a project’s management and operations. Military housing officials told us that activities to monitor the physical condition of housing units generally include reviewing sample work order requests, following up with a sample of residents to check on their experience with recently completed work, and inspecting homes during the change-of-occupancy process. However, the implementation and scope of these activities vary and can be limited. For example, during our site visits conducted from June through August 2019, we identified the following installation-specific practices: The rate of inspections of homes following change-of-occupancy maintenance at the installations we visited varied. For example, at the time of our site visits, military housing office officials at Tinker Air Force Base, Oklahoma, told us that they inspect 100 percent of homes that have completed change-of-occupancy maintenance, while officials from Langley Air Force Base, Virginia, stated that they inspect 10 to 20 percent of these homes. In November 2019, Air Force officials told us that they are moving to a 100-percent inspection policy. Similarly, the Army issued an order in March 2019 directing military housing office officials to inspect 100 percent of homes where change-of-occupancy maintenance has been completed. Officials from Army installations we visited noted that this was an increase from previous practices, and for one installation was a change in practice from conducting inspections only during the move-out process, which occurs prior to change-of-occupancy maintenance. According to Department of Navy officials, the Navy’s business agreements stipulate that Navy and Marine Corps installations have access to all work order information.
However, practices for following up on work order records varied among some of the Navy and Marine Corps installations we visited. For example, military housing office officials at Camp Pendleton, California, told us that for one of the two partners that own housing on the base, they had access to only 3 percent of completed work orders from the previous month. For the other partner that owns housing on the base, military housing office officials noted that the partner provided them with nine work orders of varying priority each month to review. One military housing office official added that these were the minimum requirements needed for monthly reporting and that they were working with the private partner to increase their access to work order records. Following a different practice, military housing office officials at Naval Station Norfolk, Virginia, told us that they had access to the private partner’s maintenance record system and would pull reports on homes that had made six or more maintenance calls in a 30-day period. Periodic reviews of compliance with each project’s business agreements. Periodic reviews of compliance with a project’s business agreements are a joint effort between the local military housing office, the private partners, military department installation commands, and other echelons of command. These reviews can include neighborhood tours to view project amenities such as community centers, playgrounds, and pools, all of which are owned, maintained, and operated by the private partner companies, as well as exteriors of housing units. However, similar to the daily oversight activities, these annual reviews have been narrow in the scope of their assessment of the physical condition of the housing units, as interior walk-throughs were, at times, focused on just a few homes at each installation. For example: The Air Force Civil Engineer Center is the primary oversight and governance body for the Air Force’s privatized housing projects. 
The Air Force oversight process includes periodic compliance reviews of all privatized housing projects. To accomplish this task, the Air Force is to use a compliance checklist to review the private partner’s compliance with a project’s business agreements. In addition to the compliance reviews, guidance states that Air Force Civil Engineer Center officials visit projects annually, and officials told us that they tour a sample of homes and interview private partner representatives, military housing office staff, and residents during these visits. However, according to selected annual site visit reports we reviewed and a discussion with an Air Force official, annual site visit reports typically include only an evaluation of three to four housing units on an installation and can be restricted to empty units or units that have completed change-of-occupancy maintenance, limiting the robustness of the assessment of the installation’s housing units’ physical condition. According to Department of the Navy officials, the Navy and the Marine Corps provide oversight of privatized housing projects through a tool called the monitoring matrix. Officials from the various organizational entities involved with privatized housing—to include the Commander, Naval Installation Command; the Naval Facilities and Engineering Command; and the military housing office—are to use this monitoring matrix to periodically review private partner compliance with a project’s business agreements. The matrix contains a condition assessment component, which includes a tour of privatized housing neighborhoods and a visual inspection of individual privatized housing units. However, similar to the Air Force, according to select assessments we reviewed and a discussion with a military housing office official, the visual inspections are typically focused on two to three homes in each neighborhood on an installation and to homes that have recently undergone change-of-occupancy maintenance. 
Army guidance calls for the U.S. Army Corps of Engineers to conduct an annual ground lease inspection to review private partner compliance with a project’s business agreements. The guidance also calls for the Army’s program manager to conduct an annual installation visit to each project to evaluate performance and ensure a project’s compliance with the business agreements. The visit is to include a recommended site tour, described in guidance as a brief visual inspection tour of community elements, and a walk-through visual inspection of at least four housing units—two renovated and two recently built—including one unit designated as an accessible home under federal guidelines. However, according to a May 2019 report by the Army Inspector General, these requirements were inconsistently met, inspection results lacked a follow-up process, and results were not communicated to senior commanders. Through the recent housing reviews that they have conducted, each military department’s internal oversight body has recognized that the departments’ oversight guidance has been limited in addressing the condition of privatized homes and provides little clarity to housing officials about their roles and responsibilities in assessing the physical condition of homes. For example, in May 2019, the Department of the Army Inspector General reported that senior commanders and garrison staffs expressed confusion concerning the roles, responsibilities, and authorities regarding privatized housing and that oversight, governance, and synchronization were insufficient to identify current housing challenges. Similarly, an April 2019 report from the Air Force Inspector General noted that ambiguous guidance had resulted in inconsistent action and uneven performance across Air Force housing projects.
In addition, a November 2019 report by the Naval Audit Service identified nine separate guidance documents for the oversight of privatized housing and found that personnel at installation and regional levels were unclear on the guidance and requirements for performing oversight of privatized housing. According to military department officials, each department has completed and is undertaking initiatives to revise guidance and standardize daily oversight activities in an effort to provide consistent oversight across projects and installations and to increase the focus on the physical condition of housing. In addition, the military departments have initiatives to increase staffing levels, improve training for military housing office officials, and ensure that military department housing officials have independent access to work order data to strengthen their oversight activities. Figure 3 outlines examples of completed and ongoing initiatives by military department to improve the oversight of privatized housing. However, each military department is working to implement service-specific initiatives with minimal guidance from OSD on the level of oversight expected as it relates to the condition of privatized housing. OSD guidance as it pertains to the condition of privatized housing is limited compared with the guidance OSD provides for monitoring the condition of military-owned housing. Specifically, OSD guidance is focused on the oversight of the implementation of projects, the construction of new housing units, and project financial monitoring. The guidance stipulates that after privatized housing projects are awarded, monitoring should include descriptions of deal structure and strategies for project monitoring. In contrast, OSD guidance for military-owned housing provides clearly defined objectives to the military departments for oversight, including the physical condition of the homes.
For example, the DOD manual for housing management directs the military departments to provide managerial oversight of DOD’s government-owned family housing to ensure that (1) the required inventory is being provided and maintained in good condition, (2) the program is being operated in an effective and cost-efficient manner, and (3) servicemembers and their families have adequate housing choices. Further, the manual provides specific objectives for the condition of DOD’s government-owned family housing, stating that for DOD family housing to be considered adequate overall, it must meet minimum standards for configuration, privacy, condition, health, and safety. It also states that military service condition assessments shall use private-sector housing industry and DOD standards or codes as a basis for assessing inventory adequacy. The manual adds that for DOD government-owned family housing to be considered in adequate condition, the construction cost for all needed repairs and improvements cannot exceed 20 percent of the replacement cost. According to DOD’s housing manual, program assumptions for privatized housing are that privatization allows the military departments to work with the private sector to generate housing built to market standards. While the military departments’ policies provide for some measurable oversight activities, such as requiring a certain number or type of home to be inspected, OSD has not provided guidance to the military departments clearly defining oversight objectives for monitoring the physical condition of privatized housing units. DOD’s housing manual further states that because privatization creates a long-term governmental interest in privatized housing, it is essential that projects be attentively monitored. The 50-year term for the ground leases creates a long-term interest in monitoring the privatized housing assets, to include the physical condition of the housing units.
However, unless DOD updates its guidance on the oversight of privatized housing with objectives for overseeing the physical condition of housing units, it cannot be assured that the military departments’ oversight activities will be sustained over time or be sufficiently consistent across projects, raising the risk that private partners may not provide adequate quality housing. Notably, the military departments have entered into privatized housing agreements with some of the same companies, and members of different military services may live at installations managed by military services other than their own. As such, it is important that oversight expectations generally be consistent across the military departments and the projects they manage. Moreover, all military departments have an interest in ensuring that residents feel confident that the private partners will be held to a consistent standard for maintaining the condition of their homes. Participants in 8 of our 15 focus groups stated that they will no longer live in privatized housing following their current experience, and participants in 6 of our 15 focus groups stated that their current experience with privatized housing will affect the future career decisions for their family. One participant stated that he plans to exit the service after 8 years, noting that his decision is largely based on his experience with privatized housing. In addition, in our online tool we asked residents if their experience with privatized housing would impact their future career and housing decisions. For those residents that responded to these questions, the majority said their experience will make them less likely to continue to live in privatized housing in the future.
For example, one respondent stated that while living in privatized housing is a benefit to being in the military, living in housing that is subpar and where nothing seems to be getting fixed or at least acknowledged makes the family hesitant to live in privatized housing again. Some residents also indicated that their experience would impact their future career decisions.

DOD Uses Several Metrics to Monitor Private Partner Performance, but the Indicators Underlying Those Metrics May Not Provide Meaningful Information on the Condition of Privatized Housing

The military departments each use a range of project-specific performance metrics to monitor private partner performance. However, the indicators underlying the metrics designed to focus on resident satisfaction and on the quality of the maintenance conducted on housing units may not provide meaningful information or reflect the actual condition of the housing units. For example, in April 2019 the Air Force Inspector General reported that the current incentive structure measures many things with precision, but does not measure the right things. Private partner performance is commonly measured through four key metrics—resident satisfaction, maintenance management, project safety, and financial management. To determine how well the private partners are performing under these metrics, military housing office officials told us that they rely on a range of indicators established in the project business agreements. Table 1 provides examples of the various indicators that make up the performance metrics. According to officials from each military department, the performance metrics and their underlying indicators are a key tool that each military department uses to hold private partners accountable for providing quality management of the privatized housing projects.
However, we found that the indicators themselves may not reflect how the private partner is performing in terms of providing servicemembers and their families with quality services and housing. For example: Maintenance management: One commonly used indicator of performance in maintenance management measures how often the property manager’s response time to work orders meets required time frames established in the project’s business agreements. While this indicator measures the timeliness of the private partner’s response, it does not measure or take into account the quality of the work that was conducted or whether the resident’s issue was fully addressed. As such, a property manager may fully meet the metric for maintenance management, even if a given repair has not been adequately completed. Residents in 13 of our 15 focus groups noted that they typically have had to submit multiple work order requests before an individual maintenance issue has been fully addressed. For example, a resident who participated in one of our focus groups provided us with a copy of work orders she had submitted related to a single maintenance issue in her home. The first work order was marked completed on time, yet the resident had to submit a work order for the same issue a week later. Further, an official at one Army installation told us that since the incentive fee for the project is awarded on a quarterly basis, judging property managers only on the basis of work orders completed on time for that quarter could mask persistent ongoing housing problems. This is because many smaller work orders get closed out each quarter, while work orders for more complicated issues might stay open over multiple quarters. Some projects include indicators that aim to more directly measure quality, such as the number of work orders placed during the first 5 business days of residency. 
This type of indicator may more clearly indicate the extent to which change-of-occupancy maintenance was complete on a given home. Resident satisfaction: One example of an indicator of resident satisfaction is whether a project has met target occupancy rates established in the project’s business agreements. An OSD official and private partner representatives told us they use occupancy as an indicator of satisfaction, based on the assumption that residents would move if they are dissatisfied with their home’s condition. However, according to the Army’s Portfolio and Asset Management Handbook, occupancy rates are not a recommended metric to monitor private partner performance because occupancy rates already impact project finances. Our focus groups and the responses we received to our online tool also indicate that this may not be a reliable assumption. Although most residents are not required to live in military housing, residents in each of our 15 focus groups and responses to our online tool indicated a variety of reasons for choosing to live in privatized housing, many of which do not have to do with their satisfaction with the quality or condition of their homes. For example, residents in our focus groups cited other factors influencing their decision to live in privatized housing, such as living in close proximity to military medical or educational services for children or other family members that are part of the military’s Exceptional Family Member Program, a lack of safe and affordable housing in the surrounding community, and access to quality schools. Volunteers that responded to our online tool also cited accessibility to base services, commute time, and safety as reasons for choosing to live in privatized housing. Another commonly used indicator of resident satisfaction is the results of various resident satisfaction surveys, such as maintenance surveys and leasing surveys, as well as the annual satisfaction survey. 
The military departments and the private partners use these survey tools to gauge resident satisfaction with the maintenance conducted on their homes, the service provided by property managers, and the amenities provided in their community, among other things. However, residents in 4 of our 15 focus groups indicated that the surveys they receive related to maintenance performed on their homes do not ask questions about the quality of the maintenance work. For example, residents told us that maintenance surveys, which they generally receive after maintenance work is completed on their homes, ask whether the maintenance worker was courteous, but not about the quality of the work performed on the home. We reviewed maintenance surveys from 3 of the 10 installations we visited and found that the surveys did not ask residents to provide feedback on the quality of the work, with no questions asking them to rate their satisfaction with the quality of the maintenance work completed. In addition, we reviewed a quarterly Army survey from one of the installations we visited and found that this survey asked residents about their satisfaction with the courteousness and professionalism of the maintenance team and the responsiveness and timeliness of maintenance work, but did not specifically ask about their satisfaction with the quality of the maintenance work completed. We also found that the information used to support the indicators can vary. For example, officials at one Army installation, Fort Huachuca, Arizona, use quarterly resident surveys, the Army’s annual survey, and action plans based on Army annual survey results as indicators of resident satisfaction. However, officials at another Army installation, Fort Knox, Kentucky, use residential community office relationship management and point-of-service surveys. Similarly, we found differences in the information used as indicators of the maintenance management metric.
For example, officials at both Hickam Air Force Base, Hawaii, and Davis-Monthan Air Force Base, Arizona, rely on the timeliness and quality of change-of-occupancy maintenance as an indicator of maintenance management. However, officials at Hickam Air Force Base also use work order response and completion times as indicators of the maintenance management metric, whereas officials at Davis-Monthan Air Force Base use only work order response times. Standards for Internal Control in the Federal Government state that management should evaluate performance and hold individuals accountable for their internal control responsibilities. If management establishes incentives, it should recognize that such actions can yield unintended consequences and should evaluate incentives so that they align with the entity’s standards of conduct. The standards further state that management should use quality information to achieve the entity’s objectives, including relevant data from reliable sources. In October 2019, OSD, in collaboration with the military departments and private partners, issued new guidance standardizing the performance incentive fee framework across the military departments. The new guidance sets minimum and maximum percentages of the fee that each metric can account for, allowing some flexibility in the weight each metric carries for an individual project. Specifically, maintenance management and resident satisfaction can account for between 60 and 90 percent of the fee, project safety for between 5 and 15 percent, and financial performance for between 5 and 15 percent.
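The percentage ranges in the new guidance lend themselves to a simple automated check on any project's proposed fee weights. The sketch below makes two assumptions that are not details from the guidance: that maintenance management and resident satisfaction share one combined 60-90 percent band, and that a project's weights must sum to 100 percent.

```python
# Sketch: checking a project's proposed incentive-fee weights against
# the percentage ranges in the October 2019 guidance described above.
# Assumptions (not from the guidance): maintenance management and
# resident satisfaction share one combined band, and weights sum to 100.
FEE_RANGES = {
    "maintenance_and_satisfaction": (60, 90),
    "project_safety": (5, 15),
    "financial_performance": (5, 15),
}

def validate_weights(weights):
    """Return a list of problems with a proposed weight allocation."""
    problems = []
    for metric, (low, high) in FEE_RANGES.items():
        pct = weights.get(metric, 0)
        if not low <= pct <= high:
            problems.append(f"{metric}: {pct}% is outside {low}-{high}%")
    total = sum(weights.values())
    if total != 100:
        problems.append(f"weights sum to {total}%, not 100%")
    return problems

# A hypothetical allocation that satisfies both constraints:
print(validate_weights({"maintenance_and_satisfaction": 80,
                        "project_safety": 10,
                        "financial_performance": 10}))  # []
```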
However, despite DOD’s efforts, through standardized metrics, to focus on the condition and quality of privatized housing and on resident satisfaction, the metrics may be misleading unless the specific underlying indicators used to determine whether a metric has been met are reevaluated on an ongoing basis to ensure that they accurately measure the private partners’ performance and reflect the condition and quality of privatized homes. OSD and military department officials have recognized that the current indicators for measuring performance do not consistently focus on or prioritize the private partners’ performance in maintaining housing units and ensuring resident satisfaction. For example, Army officials told us they no longer use occupancy rate as an indicator of resident satisfaction and have taken steps to standardize performance indicators across all Army projects, while still allowing installations the flexibility to modify the weight of indicators to reflect their specific needs. Limitations in the current indicators may hinder the military departments’ ability to accurately assess private partner performance. OSD and military department officials told us they have not yet reevaluated the specific indicators used to determine whether a private partner has met a given metric because doing so will require negotiation with each of the private partners for each project. However, without reviewing the specific indicators used to award performance incentives, OSD and the military departments cannot be assured that the information the military departments use to award these incentives reflects the actual condition of the housing.
DOD and Private Partners Collect Maintenance Data on Privatized Housing, but These Data Are Not Captured Reliably or Consistently for Use in Ongoing Monitoring of Housing Units

Maintenance data collected by the private partners are not captured consistently or reliably across projects for use in ongoing monitoring of the condition of privatized housing units over time. The privatized housing projects’ business agreements typically include a requirement for the private partner to maintain a records management system to record, among other things, maintenance work requested and conducted on each housing unit. According to private partner representatives from all 14 companies, each company uses commercial property management software platforms for activities such as initiating maintenance work orders and dispatching maintenance technicians. Some private partner representatives stated that while data from the work order tracking systems are primarily used to prioritize and triage maintenance work, the data were never intended to monitor the overall condition of privatized housing units. Military department officials told us that efforts are underway to monitor work order data from the private partners’ work order tracking systems in an effort to increase the military departments’ oversight and accountability of the private partners for providing quality housing to servicemembers. For example, the Army and the Navy are taking steps to create data dashboards to track installations’ work orders by priority, status, and category. However, while data from these work order tracking systems may be useful for point-in-time assessments of work order volume at a given installation, we found that these data are not captured reliably or consistently for use in ongoing monitoring of the condition of privatized housing units across projects and over time.
We received and reviewed data from each of the 14 private partners’ work order tracking systems covering each of the 79 privatized housing projects. Based on our review of these data and discussions with private partner representatives, we found two primary factors that limit the reliability or consistency of using these data for ongoing monitoring of the condition of privatized housing units over time: (1) inconsistent use of terminology in work order records and (2) differing practices for opening and closing work orders.

Inconsistent Use of Terminology in Work Order Records

Data in these work order tracking systems include information such as records of resident requests for service, history of work conducted on specific housing units, change-of-occupancy maintenance performed, and work completed on common areas. Residents may request service for a broad range of issues, such as lost keys, broken appliances, ceiling or wall damage, lack of hot water, or water leaks or floods. According to private partner representatives, work orders can be entered into the system by property management office staff, maintenance technicians, or call center representatives for those companies that use offsite call centers to process resident service request calls. At some installations, residents can also enter work orders into the work order tracking system through online portals or mobile applications. However, we noted cases where work orders were inconsistently entered into the work order tracking systems with respect to two primary factors: (1) how the request is described by the resident or interpreted by the individual entering the data, which can differ for each work order; and (2) the range of pre-established service category options in the private partner’s work order tracking system, which differs among the partners.
According to private partner representatives, the individual responsible for entering the work order into the system (property management office staff, maintenance technicians, call center representatives, or residents) makes a judgment on how to categorize the work order. These factors create challenges for analyzing the data across projects. Private partner representatives from one installation we met with stated that the quality of the work order data depends on what is input into the system. In some cases, the input can be inaccurate or imprecise, depending on how specifically a resident describes his or her maintenance issue or how a staff person enters the data into the system. A private partner representative from another installation we visited stated that reporting on data from the work order tracking system can be challenging because individuals inputting data across installations may interpret a resident’s reported issue differently. Private partner representatives from another installation noted that the work order tracking system they used could not be easily updated with a new category when needed, making it more difficult to identify systemic issues. For example, there is one category for all exterior repairs, but no way to break that category down by specific repair type, such as roofs. If there were an issue with several roofs in the same area, the private partner representative said, it would be hard to identify because the only option available is to look through the notes section. According to this individual, the regional maintenance technicians, not the work order tracking system, are the best resource for identifying trends or recurring issues.
This inconsistent entry of information into the work order tracking systems, which occurs both within and across installations, means that the military departments cannot readily use the data to reliably gauge the prevalence of a particular issue, such as mold, among the homes. For example, someone wanting to use work order data to track instances of mold would find them represented under a variety of service categories, such as mold or mildew, plumbing and bath, heating and cooling, or general. To isolate service requests related to mold, one may have to rely on the service comments for each request, which can vary in their level of detail. In addition, service requests for mold issues may be entered into the work order systems under different priority levels, such as routine, urgent, or emergency. As a result of the variation in the type and amount of information collected in the work order tracking systems, work order data alone cannot be used to determine the validity of a service request, the severity of the problem, or whether the work was completed to a quality standard. Figure 4 shows examples of differences in how a perceived mold issue can be captured in these systems, based on our review of the data provided by the private partners. Military department officials found similar limitations when analyzing the work order data. According to some officials, one challenge in using the work order data for oversight is that, while the individual records contain good data, people report and record things differently. Specifically, a Navy official working with these data told us they have to account for these differences and create unique algorithms to query data for each partner.
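The kind of cross-category query described above can be sketched as follows. The field names, category values, and keyword list are hypothetical, and, as the Navy official noted, a real query would have to be tailored to each partner's system.

```python
# Sketch: isolating possible mold-related service requests from work
# order records. Because mold issues may appear under several service
# categories, the free-text service comments must be searched as well.
# Field names, categories, and keywords are hypothetical.
MOLD_CATEGORIES = {"mold or mildew"}
MOLD_KEYWORDS = ("mold", "mildew", "fungus")

def possible_mold_orders(work_orders):
    """Return records whose category or comments suggest a mold issue."""
    hits = []
    for wo in work_orders:
        category = wo.get("service_category", "").lower()
        comments = wo.get("service_comments", "").lower()
        if category in MOLD_CATEGORIES or any(k in comments for k in MOLD_KEYWORDS):
            hits.append(wo)
    return hits

orders = [
    {"id": 1, "service_category": "Plumbing and Bath",
     "service_comments": "black mold around tub"},
    {"id": 2, "service_category": "General",
     "service_comments": "replace lost key"},
    {"id": 3, "service_category": "Mold or Mildew", "service_comments": ""},
]
print([wo["id"] for wo in possible_mold_orders(orders)])  # [1, 3]
```

Keyword matching of this kind can surface requests filed under unrelated categories, but it cannot establish the validity or severity of the underlying issue.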
Differing Practices for Opening and Closing Work Orders

At some installations we visited, private partners noted changes in practices for opening or closing work orders, limiting the usefulness of the data for monitoring the status of work orders over time and thus the condition of privatized housing. For example, according to private partner representatives at one installation we visited, a practice for tracking emergency work orders in the work order tracking system changed in 2013. Work that comes in under an emergency priority may take several steps to complete: a maintenance technician may first have to stop the emergency, then clean up any resulting damage, before repairing the root cause and completing any finishing work. Prior to 2013, maintenance technicians would open and close a new work order for each step in the process. Under the new practice, the original work order is kept open until completion. Representatives from a different private partner described a similar change in practices, noting that if a work order was closed or recategorized before the work was finished, it could be tracked incorrectly; for example, it could drop out of the system with the work left undone. A third partner noted the same practice, but added that an emergency work order can be downgraded to urgent or routine status while the work is taking place. As a result, work order data alone may not accurately identify the number of open work orders at any given time, the time it took to address a maintenance issue, or whether a maintenance request has been fully completed. Additionally, we identified anomalies in the work order data provided to us by each of the 14 partners, including instances of duplicate work orders, work orders with completion dates prior to the dates the residents submitted them, and work orders still listed as in progress after more than 18 months.
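Anomalies like these can be flagged with simple automated checks. The sketch below uses hypothetical record fields and approximates 18 months as 548 days.

```python
# Sketch: flagging the work order anomalies described above
# (duplicates, completion dates before open dates, long-open orders).
# Record fields are hypothetical; 18 months is approximated as 548 days.
from datetime import date, timedelta

def find_anomalies(work_orders, today):
    """Return (id, issue) pairs for records that warrant scrutiny."""
    anomalies = []
    seen = set()
    for wo in work_orders:
        key = (wo["unit"], wo["category"], wo["opened"])
        if key in seen:  # same unit, category, and open date
            anomalies.append((wo["id"], "possible duplicate"))
        seen.add(key)
        if wo["completed"] and wo["completed"] < wo["opened"]:
            anomalies.append((wo["id"], "completed before opened"))
        if not wo["completed"] and today - wo["opened"] > timedelta(days=548):
            anomalies.append((wo["id"], "open more than 18 months"))
    return anomalies

orders = [
    {"id": 1, "unit": "A-1", "category": "hvac",
     "opened": date(2019, 1, 5), "completed": date(2019, 1, 4)},
    {"id": 2, "unit": "A-1", "category": "hvac",
     "opened": date(2019, 1, 5), "completed": None},
    {"id": 3, "unit": "B-2", "category": "roof",
     "opened": date(2017, 6, 1), "completed": None},
]
print(find_anomalies(orders, today=date(2019, 12, 1)))
```

Each of the three sample records trips one of the checks; in practice, flagged records would need manual review rather than automatic correction.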
According to military department officials, they have increased their efforts to review data from the private partners’ work order tracking systems and have found similar anomalies. For example, a Navy official working with work order data found that a couple of homes had six or seven unique work order records in the system, each containing identical information in the various data fields. Officials from both the Navy and the Air Force have come across work order records marked as complete within minutes of being entered into the system, or marked as complete with a date prior to the work order being opened, which signaled the need for further scrutiny. Each military department has efforts underway to monitor private partner work order data to increase oversight of the quality of privatized housing. However, because neither OSD nor the military departments have identified minimum data requirements, established consistent terminology or practices for data collection, or developed processes for the military departments to validate the work order data collected by the private partners, data from these work order tracking systems are not reliable for use in the ongoing monitoring of the condition of privatized homes. Further, military department data monitoring efforts are department-specific, even though the departments have entered into privatized housing agreements with some of the same companies. Standards for Internal Control in the Federal Government state that management should use quality information to achieve the entity’s objectives and design information systems and related control activities to achieve objectives and respond to risks. Information, among other things, should be complete and accurate. The standards also state that management should define the identified information requirements at the relevant level and with the requisite level of specificity for appropriate personnel.
Without direction from OSD to establish minimum data requirements and consistent terminology or practices for data collection, as well as a requirement for the military departments to validate the data, the military departments’ ability to use data from the private partners’ work order tracking systems to monitor the condition of privatized homes over time is limited and may vary across projects.

DOD Provides Reports to Congress on Resident Satisfaction with Privatized Housing, but Data in These Reports Are Unreliable, Leading to Misleading Results

DOD has provided periodic reports to Congress on the privatized housing program; however, reported results on resident satisfaction have been unreliable and misleading due to (1) variances in the data the military departments collect and provide to OSD and (2) OSD’s calculation and presentation of the data. DOD is statutorily required to provide reports to Congress that include, among other things, information about military housing privatization projects’ financial health and performance and any backlog of maintenance and repairs. These reports have also included information on resident satisfaction based on the results of the annual military department satisfaction surveys. In May 2019, DOD issued its report for fiscal year 2017, which stated that overall resident satisfaction for calendar year 2017 was 87 percent. However, this number is misleading due to issues associated with the collection and calculation of the data DOD used. The military departments provide data on resident satisfaction to OSD for inclusion in DOD’s submission to Congress based on information from the annual resident satisfaction surveys. Specifically, OSD’s instructions to the military departments for the fiscal year 2017 report required the military departments to report the following:

The month and year of the most recently completed tenant satisfaction survey.
The number of residents surveyed and the total number of tenants who completed the survey during the reporting period.

Resident responses to the question "Would you recommend privatized housing?", indicating how many tenants responded "Yes," "No," or "Don’t Know."

However, instead of asking whether residents would recommend privatized housing, the military departments’ annual resident satisfaction survey asked residents: "How much do you agree or disagree with the following statement: ‘I would recommend this community to others.’" The difference in wording between the question asked of residents and the question reported to Congress is notable, as a resident’s satisfaction with his or her community and inclination to recommend it to others may not reflect satisfaction with either the privatized housing unit or privatized housing in general. We also found differences in how the military departments interpreted responses to the question they asked. When asked whether they would recommend their community to others, residents were given response categories on a scale of five to zero: (5) strongly agree, (4) agree, (3) neither agree nor disagree, (2) disagree, (1) strongly disagree, and (0) not applicable, no opinion, don’t know, or no answer. However, we found that the ways in which the military departments translated these responses into the "yes," "no," or "don’t know" categories differed across the military departments and, in the case of the Army, differed from year to year.
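To illustrate why these translation choices matter, the sketch below applies two different mappings to the same set of survey responses: one counting category 3 (neither agree nor disagree) as "don't know," and one counting it as "yes." The response counts are hypothetical.

```python
# Sketch: applying two different response mappings to the same survey
# responses on the 5-to-0 scale described above. Counts are hypothetical.
counts = {5: 40, 4: 30, 3: 20, 2: 5, 1: 3, 0: 2}  # hypothetical counts

def yes_rate(counts, yes_categories):
    """Percent of all responses falling in the 'yes' categories."""
    yes = sum(counts[c] for c in yes_categories)
    return 100 * yes / sum(counts.values())

# Mapping that counts category 3 as "don't know":
print(yes_rate(counts, {5, 4}))     # 70.0
# Mapping that counts category 3 as "yes":
print(yes_rate(counts, {5, 4, 3}))  # 90.0
```

With this hypothetical distribution, the choice of where to place category 3 alone shifts the reported satisfaction rate by 20 percentage points, consistent with the scale of the differences described for Lackland Air Force Base below.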
For the fiscal years 2015 through 2017 reports, Navy officials told us they counted responses in categories 5 (strongly agree) and 4 (agree) as "yes," responses in categories 2 (disagree) and 1 (strongly disagree) as "no," and responses in categories 0 (not applicable, no opinion, don’t know, or no answer) and 3 (neither agree nor disagree) as "don’t know."

For the same time period, Air Force officials told us they counted responses in categories 5 (strongly agree), 4 (agree), and 3 (neither agree nor disagree) as "yes," responses in categories 2 (disagree) and 1 (strongly disagree) as "no," and responses in category 0 (not applicable, no opinion, don’t know, or no answer) as "don’t know." If 3 had not been counted as "yes," the reported resident satisfaction rate would have been lower. For example, for Lackland Air Force Base, Texas, if officials had not counted responses in category 3 as "yes," the resident satisfaction rate for newly constructed units would have been more than 20 percent lower than what was reported.

The Army calculated responses differently for fiscal years 2015, 2016, and 2017.
Specifically:

For the fiscal year 2017 report, the Army counted responses in categories 5 (strongly agree) and 4 (agree) as "yes," responses in categories 2 (disagree) and 1 (strongly disagree) as "no," and responses in categories 0 (not applicable, no opinion, don’t know, or no answer) and 3 (neither agree nor disagree) as "don’t know."

For the fiscal year 2016 report, the Army counted responses in categories 5 (strongly agree) and 4 (agree) as "yes," responses in categories 2 (disagree), 1 (strongly disagree), and 0 (not applicable, no opinion, don’t know, or no answer) as "no," and responses in category 3 (neither agree nor disagree) as "don’t know."

For the fiscal year 2015 report, the Army counted responses in categories 5 (strongly agree), 4 (agree), and 3 (neither agree nor disagree) as "yes," responses in categories 2 (disagree) and 1 (strongly disagree) as "no," and responses in category 0 (not applicable, no opinion, don’t know, or no answer) as "don’t know."

In addition, we identified errors and inaccuracies in how OSD calculates these data and reports them to Congress. Specifically, we found missing data points and incorrect formulas, among other errors, in OSD’s calculation of the data submitted by the military departments. For example:

The formula used by OSD to calculate overall resident satisfaction for fiscal year 2017 did not include data for several projects, including four Army projects: Fort Bragg, North Carolina; Fort Knox, Kentucky; Joint Base Lewis-McChord, Washington; and Presidio of Monterey/Naval Postgraduate School, California. As of September 30, 2017, these four projects accounted for over 18 percent of the Army’s total housing inventory.
The formula used by OSD to calculate resident satisfaction by project double-counted resident satisfaction data for new and unrenovated homes at Vandenberg Air Force Base, California, by incorrectly using the Vandenberg Air Force Base data for both the Vandenberg project and the Fort Huachuca/Yuma Proving Ground project. As a result, incorrect data were reported for the Fort Huachuca/Yuma Proving Ground project for some categories of homes.

OSD did not include resident satisfaction data for New Orleans Naval Complex, Louisiana, in its fiscal year 2017 report to Congress, even though the Navy had included data for this project in its submission to OSD.

OSD also reported identical resident satisfaction data for Wright-Patterson Air Force Base, Ohio, in fiscal years 2015, 2016, and 2017, despite the fact that Air Force officials noted in their submissions to OSD that the data came from the annual resident satisfaction survey for Wright-Patterson Air Force Base conducted in December 2013.

Further, Army data provided to OSD had calculation errors that OSD did not reconcile. Specifically, the Army provided OSD the total number of surveys received for a project, as well as the number of surveys broken out by housing category. However, we found instances where the sum of the data broken out by housing category did not equal the reported total number of surveys received. For example, when we reviewed data for Fort Rucker, Alabama, the calculated sum of surveys broken out by housing category was 1,372, but the Army reported a total of 530 surveys received, a difference of 842 surveys. Further, the presentation of data in OSD’s report to Congress is misleading because OSD did not explain the methodology it used to calculate the overall resident satisfaction percentage or include caveats explaining limitations of the data.
Specifically, OSD did not include information on overall response rates to the annual satisfaction survey for each military department, nor did it include response rates by project. Low response rates can create the potential for bias in survey results. For example, in its fiscal year 2017 report, OSD reported that 25 percent of residents living in renovated homes at the MHPI project including Fort Detrick, Maryland/Walter Reed Army Medical Center, Washington, D.C., were satisfied with their housing. However, only four residents responded to this question, meaning that just one resident reported being satisfied. In addition, we found that OSD did not include an explanation in the report for why five projects were listed as not applicable. According to OSD officials, this omission was a quality control issue that they plan to address; however, the officials told us that no quality control measures are currently in development. The Fiscal Year 2020 NDAA includes a provision requiring each military installation to use the same satisfaction survey for tenants of military housing, including privatized military housing, the results of which are not to be shared with private partners until reviewed by DOD. The statute also states that DOD’s reports to Congress shall include additional information, such as the results of residence surveys, as well as assessments of maintenance response times, completion of maintenance requests, the dispute resolution process, overall customer service for tenants, and other factors related to the condition of privatized housing. OSD’s report to Congress states that, given DOD’s objective of improving the quality of life for its servicemembers, the degree of satisfaction military families experience in privatized housing is a critical indicator of overall program success, and that the military departments and private partners use tenant surveys to help assess the quality of privatized housing.
Additionally, Standards for Internal Control in the Federal Government state that management should obtain relevant data from reliable internal and external sources in a timely manner based on identified information requirements. Relevant data have a logical connection with, or bearing upon, the identified information requirements. Reliable internal and external sources provide data that are reasonably free from error and bias and faithfully represent what they purport to represent. Management should evaluate both internal and external sources of data for reliability, and obtain data on a timely basis so they can be used for effective monitoring. However, the errors we identified in OSD’s data calculations, as well as the differences in how the military departments translate data provided to OSD, indicate the need for better internal controls, including a process for collecting and calculating resident satisfaction data from the military departments, and explanation of the data collected and reported on resident satisfaction to ensure they are reasonably free from error and bias and represent what they purport to represent. According to an OSD official responsible for preparing the reports to Congress, her office inherited the MHPI report process from its predecessors and had to quickly catch up on reports because DOD was behind on its reporting requirement. However, she noted her office is working with the military departments to review the resident satisfaction survey questions and will be identifying and implementing measures to ensure an accurate and reliable process to compile, calculate, report, and compare MHPI residents’ satisfaction by military department and across DOD. Additionally, for future survey data reporting, OSD officials told us they plan to research the possibility of directly collecting resident survey data from the survey administrator to minimize data transcription errors. 
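Several of the errors described above, such as category sums that do not match reported totals or projects omitted from the overall calculation, could be caught by simple automated consistency checks as part of such a process. The sketch below uses hypothetical field names and a hypothetical category split; only the Fort Rucker totals (1,372 versus 530) come from the data discussed above.

```python
# Sketch: consistency checks on military department survey submissions.
# Field names and the category split are hypothetical; the Fort Rucker
# totals (1,372 vs. 530) are from the discrepancy described above.
def check_submission(project, reported_total, counts_by_category):
    """Flag submissions whose category counts do not sum to the total."""
    problems = []
    total = sum(counts_by_category.values())
    if total != reported_total:
        problems.append(f"{project}: categories sum to {total:,}, "
                        f"but {reported_total:,} total surveys were reported")
    return problems

def find_missing(all_projects, included_projects):
    """List projects absent from an overall calculation."""
    return sorted(set(all_projects) - set(included_projects))

# Hypothetical category split that sums to 1,372 against a reported 530:
print(check_submission("Fort Rucker", 530,
                       {"new": 700, "renovated": 400, "unrenovated": 272}))
print(find_missing(["Fort Bragg", "Fort Knox", "Fort Rucker"],
                   ["Fort Rucker"]))  # ['Fort Bragg', 'Fort Knox']
```

Checks of this kind would not fix the underlying data, but they would surface discrepancies for reconciliation before figures reach a report to Congress.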
Until OSD makes these changes to the data collection and calculation efforts that underlie the department’s report to Congress and provides explanations of the data in the reports, OSD will not be able to provide Congress with an accurate picture of resident satisfaction with privatized housing.

Military Housing Offices Have Not Effectively Communicated Their Role as a Resource for Servicemembers Experiencing Challenges with Privatized Housing

Military housing offices located at each installation are available to provide resources to servicemembers experiencing challenges with their privatized housing, among other services, but these offices have not always clearly and systematically communicated this role to residents of privatized housing. Military housing office officials noted that servicemembers living in privatized military housing primarily interact with their installation’s military housing office when they first receive orders to move to an installation. The military housing office provides new residents with information on their local housing options, including referral services. However, military department guidance calls for the military housing office to provide continued assistance to servicemembers and their families living in privatized housing. For example, each military department has guidance that establishes the role of its housing offices in the resident dispute resolution process when servicemembers are experiencing a dispute with the private partner:

Army policy states that each installation should have an official tasked with providing support to servicemembers regarding resident issues that cannot be resolved by the private property manager. This individual is also responsible for resolving resident complaints, and the military housing office, if required, can request mediation by the garrison commander.
Air Force policy directs installation commanders to establish regular meetings with the private partners to discuss resident disputes and develop resolutions for residents’ issues. In addition, the Air Force business agreements for each project are to establish Management Review Committees, in which the private project owner, Air Force housing office officials, and the Air Force Civil Engineer Center meet quarterly to review and facilitate the resolution of prevalent issues.

The Navy announced a standardized two-step resolution process in May 2019 for housing residents who have issues or concerns with their current homes. The first step is to report any issue to the local property manager. If the issue is not resolved in a timely manner or to quality standards, residents are asked to contact their local Navy housing service center, which reports directly to the installation commanding officer, or the servicemember’s chain of command. Before this process was standardized, Navy guidance established a general responsibility to assist residents in the dispute resolution process, and each project’s tenant lease included specific dispute resolution processes.

The Marine Corps has established a three-step dispute resolution process for residents to follow when they are experiencing a dispute with the private partner. Further, Marine Corps policy calls for each of the private partners to establish standard operating procedures that include complaint resolution procedures.

Despite this established military department guidance, we found that residents were sometimes confused about, and lacked awareness of, the availability of the military housing office to assist them with issues they were experiencing with privatized housing.
For example, residents who participated in our focus groups and responded to our online tool expressed the following concerns: At least one resident in each of our focus groups noted being sometimes confused about the military housing office’s roles and responsibilities with regard to the maintenance of their home. These residents indicated they did not know the military housing office existed or could serve as a resource. Further, some individuals who responded to our online tool indicated that they did not know they could reach out to military housing office officials or their chain of command with issues related to the condition of their home. Residents in at least three of our focus groups indicated they perceived that the military housing office was not working independently of the partner or in the residents’ best interest. For example, residents in at least three focus groups noted that they viewed the military housing office as an extension of the private partner. Other residents noted that they did not know what the military housing office was or what role the office plays in managing privatized housing. In addition, residents who responded to our online tool indicated that they felt they had no recourse in resolving issues and disagreements with private partners. For example, one individual who responded to our online tool stated that she was glad she moved off post because she now has legal recourse if the landlord does not meet maintenance requirements. The military department oversight agencies have found that the military departments have not clearly and systematically communicated their roles to residents, and that resident confusion and lack of awareness regarding the role of the military housing offices remain an issue.
In April 2019, the Air Force Inspector General reported that fewer than half of the residents interviewed used their military housing office to resolve complaints and that, at some installations officials visited, many residents did not know the military housing office had an oversight role. Similarly, in May 2019, the Army Inspector General reported to the Secretary of the Army that at 82 percent of Army installations with privatized housing, residents did not know how to escalate issues with either the private partner or the Army housing office. Additionally, the Army Inspector General reported that installation command teams and staff cited multiple circumstances where military housing office and tenant advocacy roles and responsibilities were unclear. Further, military housing office officials with whom we spoke during our site visits acknowledged the gap in resident awareness regarding the existence and purpose of the military housing office. Officials also noted that at times residents were unaware of the difference between the military housing office and the private partner office due, in part, to their physical co-location and unclear building signage. For example, a military housing office official at Fort Bragg, North Carolina, told us the military housing office was the best-kept secret on the installation. Moreover, residents who participated in our four focus groups at Fort Bragg expressed confusion in differentiating Army housing office officials from private partner representatives.
Similarly, officials at the military housing office at Tinker Air Force Base, Oklahoma, told us that many residents were confused by their office’s role because the private partner office goes by the name “Tinker Housing Office.” Further, we observed that both private partner representatives and some military housing office officials are located in the same building, and signage does not distinctly indicate that the office houses both military officials and private partner representatives. In contrast, the military housing office at Camp Pendleton, California, is intentionally branded as the “Camp Pendleton Joint Housing Office” and signage indicates the office houses officials from both the Marine Corps and the installation’s private partners. See figure 5 for examples of the varying level of detail in military housing office signage. Some military housing office officials told us they have taken steps to improve resident awareness, such as increasing advertising of the military housing office’s role and contact information, using town hall meetings to inform residents of their roles and responsibilities, and rebranding their military housing offices to differentiate them from the private partners. For example, the Army housing office at Fort Sill, Oklahoma, changed its name from the “Residential Communities Initiative Housing Office” to the “Garrison Housing Office” to more clearly denote that the military housing office is not associated with the private partner. In addition, a Marine Corps housing office official provided us with a flyer, which is distributed to residents by the private partner, informing residents of housing office contact information and the service’s three-step dispute resolution process. See figure 6 for a copy of the flyer. According to DOD officials, the military departments generally decreased their staffing and oversight of daily privatized housing operations after the MHPI was enacted, which led to less ongoing resident interaction. 
For example, Army officials we spoke with in January 2019 told us they typically filled 80 percent of available military housing office positions across their installations. Additionally, officials stated that housing offices were generally staffed with two or three officials responsible for assisting servicemembers with housing needs both on the installation and in the local community. Further, the officials told us that the team at Fort Bragg, North Carolina, was reduced from about 15 positions to 3. According to OSD officials, while housing offices should generally not require the number of personnel that were necessary prior to privatization, cuts following sequestration reduced housing staff below the level necessary to fully perform required privatized housing oversight as it was originally envisioned at the outset of the program. OSD has also recognized that the military departments’ communication with residents about the housing offices’ role as a resource has not been clear or systematic. In February 2019, the Assistant Secretary of Defense for Sustainment testified before Congress that a way forward in addressing resident concerns would require focus in three key areas: communication, engagement, and responsiveness. In support of this, OSD and the military departments are collaborating with each of the private partners on several initiatives aimed at improving residents’ experience with privatized housing and ensuring a consistent resident experience across installations.
These initiatives include: establishing a tenant bill of rights that will clearly define tenants’ rights; establishing a resident advocate position that is planned to provide advice, education, and support to the resident and advocate on the resident’s behalf in disputes with private partners; developing a common lease that provides a common framework and language in residential leases across all privatization projects; and developing a standardized formal dispute resolution process to ensure the prompt and fair resolution of disputes that arise between privatized housing landlords and residents. Despite the development of initiatives aimed at improving residents’ experience with privatized housing and various ad hoc efforts to better brand and advertise the roles and responsibilities of some military housing offices, the military departments have not systematically or clearly communicated these efforts to residents, and military officials we met with acknowledged that there still appears to be a gap in residents’ awareness of the military housing office and its role in the dispute resolution process. Standards for Internal Control in the Federal Government state that management should externally communicate the necessary quality information to achieve the entity’s objectives. Management communicates this externally through reporting lines so that external parties can help the entity achieve its objectives and address related risks. Moving forward, having plans in place to clearly and systematically communicate the difference between the military housing office and the private partners—including the military departments’ roles, responsibilities, and military housing office locations and contact information—will better position the military departments to achieve the intended objectives of the initiatives they are currently developing with OSD.
DOD and Private Partners Are Implementing Initiatives to Improve Privatized Housing, but May Face Challenges DOD and Private Partners Are Implementing Initiatives to Improve MHPI OSD, the military departments, and the private partners have identified and begun collaborating on a series of initiatives aimed at improving residents’ experience with privatized housing. According to an OSD official, a series of initiatives have been identified and are currently in various phases of development and implementation. Tri-service working groups, each chaired by a designated military department and including officials and legal counsel from each military department as well as private partner representatives, are leading efforts to develop and implement the initiatives. In addition, in the Fiscal Year 2020 NDAA, Congress established several requirements aimed at military housing privatization reform. Several of the statutory requirements provide specific provisions that DOD will need to incorporate into its development and implementation of existing MHPI initiatives, as well as additional requirements aimed at improving the oversight of privatized housing. Table 2 outlines key initiatives aimed at improving privatized housing, as well as additional selected requirements mandated by the Fiscal Year 2020 NDAA. In addition to the provisions noted in table 2, the Fiscal Year 2020 NDAA included requirements for increased oversight of the physical condition of privatized housing. Specifically, the legislation required the following: The Secretary of Defense is to designate a Chief Housing Officer to oversee housing units, including the creation and standardization of policies and processes regarding housing units. The Secretary of Defense is required to establish a uniform code of basic standards for privatized military housing and to develop plans to conduct inspections and assessments of the condition of privatized homes.
The military departments are required to create a council on privatized military housing for the purposes of maintaining adequate oversight of the military housing program and serving as a mechanism to identify and resolve problems regarding privatized military housing. The head of the installation military housing office is responsible for conducting a physical inspection and approving the habitability of a vacant housing unit for the installation before the landlord managing the housing unit is authorized to make the housing unit available for occupancy; conducting a physical inspection of the housing unit upon tenant move-out; and initiating contact with a tenant regarding the tenant’s satisfaction with his or her housing unit not later than 15 days after move-in, and again 60 days after move-in. Each installation is required to use the same satisfaction survey for tenants of military housing, including privatized military housing, and results are not to be shared with partners until reviewed by DOD. Initiatives to Improve MHPI May Face Implementation Challenges DOD and private partner representatives have cited several challenges that could affect their ability to implement initiatives aimed at improving MHPI. Specifically, key challenges include the timeliness with which they are able to implement initiatives, a lack of resources needed for implementation, and concerns that implementation could have unintended negative impacts on the financial viability of the privatized housing projects. Timeliness of implementation due to the need to collaborate with and obtain input and agreement from the large number of stakeholders involved in privatized housing. According to DOD officials and private partner representatives, many of the initiatives designed to improve privatized housing require not only agreement between DOD and the private housing partners, but also discussion with and, in some cases, approval by the project bond holders.
Because DOD does not have the ability to unilaterally make changes to existing business agreements, this need for stakeholder agreement limits DOD’s control over the implementation timeline of any initiative that requires changes to a project’s business agreement—such as the implementation of a standardized dispute resolution process. Additionally, the private partners noted that the bond holders may be reluctant to agree to changes to the business agreements that could result in higher project costs. The need for more military department staff with targeted expertise. As noted earlier, the military departments had reduced their involvement in daily privatized military housing operations as part of the overall privatization effort. This included reducing staffing levels at the installations, and military housing office officials at over half of the installations we visited stated that reduced staffing levels had limited their ability to carry out oversight duties, such as work order data analysis and housing inspections. Further, until recent concerns surfaced about the quality of privatized housing, the military departments had distanced themselves from involvement in daily military housing operations. For example, the Army issued a memorandum in 2013, which has since been rescinded, stating that garrison commanders were not to authorize, direct, or permit Army representatives to initiate health and welfare inspections of privatized housing. Each of the military departments has plans to increase military housing office staffing at each installation to allow for enhanced oversight. In particular, according to military department officials, these positions will focus on quality control and quality assurance of the maintenance of privatized homes.
However, improvements to the condition of privatized housing may not be fully realized until DOD establishes a uniform code of basic standards for privatized military housing, as required by the Fiscal Year 2020 NDAA, and these new personnel are trained in these standards. The potential for unintended negative financial impacts on the projects that could outweigh the intended benefits of the initiatives. OSD officials and private partner representatives have expressed concern that some proposed initiatives could result in unintended financial consequences for the housing projects. In particular, private partner representatives cited costs such as legal fees associated with the development of a common lease and the various addendums that would be required, as well as the unanticipated costs of outside third-party inspections. In particular, some of the private partners noted that the financial impact of unfunded requirements on projects that are already experiencing financial distress could result in even fewer funds available to reinvest in improvements to the current and future physical condition of the homes. Moreover, OSD officials told us they have concerns that some initiatives—such as increased frequency of change-of-occupancy inspections that may result in homes remaining vacant longer than planned and therefore not collecting rent—may unintentionally impact a project’s cash flow. Officials noted that some installations have large-scale housing turnover at the same time and inspections may not be completed within the required time frames. For example, OSD officials said that at Fort Leavenworth, Kansas, the vast majority of homes generally turn over during a 2-week time period. Officials said that in a location like this, new oversight requirements may have a negative impact on residents’ move-in timelines, which could subsequently impact occupancy rates and project cash flow as a result of delays in rent payments.
OSD officials also stated that residents’ ability to have their basic allowance for housing payments segregated and held in escrow may present financial challenges to both the resident and the project. These officials noted that they did not yet know how the withholding of these payments would be implemented. According to OSD officials, as of January 2020, there are many questions surrounding the implementation of the Fiscal Year 2020 NDAA provisions. Officials told us that they have not yet assessed the impact of increased oversight on the financial viability of the MHPI projects, but stated that as they develop processes to implement each new statutory provision, the financial impact is something that needs to be considered. DOD’s Military Housing Privatization Initiative Performance Evaluation Report for fiscal year 2017 stated that the government’s interests are not always aligned with those of the private sector, and that oversight and engagement are required and expected in a public-private partnership over the long term to ensure success. We have previously reported that the military departments have not defined their risk tolerance levels for privatized housing relative to the program’s objective of providing quality housing that reflects community living. Specifically, we recognized that the Office of Management and Budget guidance on the preparation, submission, and execution of the federal budget suggests that public-private partnerships, such as privatized military housing projects, contain some elements of risk to the government. Standards for Internal Control in the Federal Government state that management should identify, analyze, and respond to risks related to achieving defined program objectives.
While DOD is in the process of developing and implementing initiatives to improve privatized military housing, OSD and the military departments have not assessed the risk of the proposed initiatives on the financial viability of the privatized housing projects. According to an OSD official, the intention of privatization was to reduce the government’s role in the management of military housing and put more responsibility on the private partners. As described earlier in this report, the military departments have ramped up their oversight efforts in response to recent concerns about the condition of privatized housing by, for example, revising guidance and hiring additional staff. However, OSD has not assessed the impact of these activities on the financial viability of the MHPI projects. For example, OSD has not determined how increasing the frequency of housing office inspections and residents’ withholding of rent could impact the bottom line of some of its privatized projects. Without assessing risks to the financial viability of the MHPI projects associated with the implementation of these initiatives aimed at improving privatized housing, DOD’s efforts to improve the privatized housing program could be compromised. Further, DOD has a long-term interest in ensuring the financial health of the properties privatized under MHPI. As we have reported, typically the titles to the homes that were conveyed to the private partners and any improvements made to these homes during the duration of the ground leases will automatically revert to the military departments upon expiration or termination of the leases. Conclusions DOD’s oversight of privatized housing is critical to ensure that residents are being provided with affordable, quality housing that generally reflects contemporary community living standards. 
In light of recent concerns about the effect of inadequate and poor quality housing on servicemembers and their families, the military departments have recently implemented steps to increase the oversight of the condition of privatized housing. However, OSD has not provided the military departments with specific objectives for this monitoring. The newly established Chief Housing Officer position, intended to standardize guidance and processes for the oversight of privatized housing, provides DOD with an opportunity to ensure that revised guidance provided to the military departments includes objectives for increased oversight. In addition to oversight of the condition of homes, DOD has taken initial steps to standardize performance incentive metrics across the military departments. However, unless efforts are made to ensure that the indicators driving these metrics accurately reflect private partners’ performance in maintaining the condition and quality of privatized homes, DOD’s ability to hold private partners accountable will remain limited. Further, while the military departments continue to increase the access to and use of work order data to monitor and track the condition of privatized housing, without consistent terminology and practices for work order data collection and processes for validating data collected from the private housing partners, the use of these data may not result in reliable findings. Finally, DOD has frequently reported high resident satisfaction rates as a key indicator of the success of the privatization initiative. However, the process used to collect and calculate the data underlying these rates, and limitations in how the data are presented to Congress, raise questions about the reliability of DOD’s reports and their usefulness as an indicator of program success.
By improving oversight guidance, mechanisms for measuring private partner performance, the reliability of housing data, and reporting on resident satisfaction, DOD can better ensure that MHPI is providing servicemembers with quality housing. Despite a decreased role in the daily management of privatized housing, the military departments still maintain responsibility for providing servicemembers with resources for seeking resolution to any issues identified in their privatized homes. However, without plans in place to communicate military housing office roles, responsibilities, and locations to residents of privatized housing, these individuals may not receive the full benefits of the assistance that the military housing offices provide. In light of the increased focus by DOD and Congress on ensuring that residents are aware of their rights and responsibilities, improved communication with residents about the military housing offices’ roles and responsibilities can help ensure that residents are utilizing the full range of resources available to them if they have issues with privatized housing. As OSD, the military departments, and the private partners take steps to improve the resident experience with privatized military housing and increase the department’s focus on the condition of privatized homes, ensuring that their efforts do not inadvertently harm the financial viability of these projects is key. Without assessing and mitigating the potential risks that program improvements may pose to the financial viability of the MHPI projects, DOD cannot ensure that these initiatives and the implementation of new statutory requirements will ultimately result in improvements to the condition of privatized housing.
Recommendations for Executive Action We are making a total of 12 recommendations—six to the Office of the Secretary of Defense, two to the Secretary of the Army, two to the Secretary of the Air Force, and two to the Secretary of the Navy: The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in collaboration with the military departments, provide updated guidance for the oversight of privatized military housing, to include oversight objectives for each service to monitor the physical condition of privatized homes over the remaining duration of the ground leases. (Recommendation 1) The Secretary of the Army should take steps, in collaboration with the Army’s private housing partners, to review the indicators underlying the privatized housing project performance metrics to ensure they provide an accurate reflection of the condition and quality of the homes. (Recommendation 2) The Secretary of the Air Force should take steps, in collaboration with the Air Force’s private housing partners, to review the indicators underlying the privatized housing project performance metrics to ensure they provide an accurate reflection of the condition and quality of the homes. (Recommendation 3) The Secretary of the Navy should take steps, in collaboration with the Navy and Marine Corps’ private housing partners, to review the indicators underlying the privatized housing project performance metrics to ensure they provide an accurate reflection of the condition and quality of the homes. (Recommendation 4) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in collaboration with the military departments and private housing partners, establish minimum data requirements and consistent terminology and practices for work order data collection for comparability across installations and projects and to track trends over time. 
(Recommendation 5) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment require the military departments to establish a process to validate data collected by the private housing partners to better ensure the reliability and validity of work order data and to allow for more effective use of these data for monitoring and tracking purposes. (Recommendation 6) The Secretary of Defense should ensure the Assistant Secretary of Defense for Sustainment, in collaboration with the military departments, develop a process for collecting and calculating resident satisfaction data from the military departments to ensure that the data are compiled and calculated in a standardized and accurate way. (Recommendation 7) The Secretary of Defense should ensure the Assistant Secretary of Defense for Sustainment provides additional explanation of the data collected and reported in future reports to Congress, such as explaining the limitations of available survey data, how resident satisfaction was calculated, and reasons for any missing data, among other things. (Recommendation 8) The Secretary of the Army should develop and implement a plan to clearly and systematically communicate to residents the difference between the military housing office and the private partner. At a minimum, these plans should include the Army housing office’s roles, responsibilities, locations, and contact information and should ensure that all residents are aware that they can directly contact Army housing office officials. (Recommendation 9) The Secretary of the Air Force should develop and implement a plan to clearly and systematically communicate to residents the difference between the military housing office and the private partner. 
At a minimum, these plans should include the Air Force housing office’s roles, responsibilities, locations, and contact information and should ensure that all residents are aware that they can directly contact Air Force housing office officials. (Recommendation 10) The Secretary of the Navy should develop and implement a plan to clearly and systematically communicate to residents the difference between the military housing office and the private partner. At a minimum, these plans should include the Navy housing office’s roles, responsibilities, locations, and contact information and should ensure that all residents are aware that they can directly contact Navy housing office officials. (Recommendation 11) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in collaboration with the military departments, assess the risks that proposed initiatives aimed at improving the privatized military housing program pose to the financial viability of the projects. (Recommendation 12) Agency Comments We provided a draft of this report to DOD for review and comment. In written comments, reprinted in their entirety in appendix III, DOD concurred with 10 of our recommendations and partially concurred with 2, identifying actions it plans to take to address each of them. DOD also provided technical comments, which we incorporated as appropriate. DOD partially concurred with our recommendation that the Assistant Secretary of Defense for Sustainment, in collaboration with the military departments and private housing partners, establish minimum data requirements and consistent terminology and practices for work order collection. The department noted that neither the Assistant Secretary of Defense for Sustainment nor the military departments could mandate changes to existing privatized housing project ground leases or legal agreements.
DOD further noted that it cannot unilaterally make changes to the project ground leases and associated legal documents without concurrence from the private partners. However, the department noted that to the maximum extent practical, it would work to establish minimum data requirements and consistent terminology and practices for work order collection. DOD also partially concurred with our recommendation that the Under Secretary of Defense for Sustainment, in collaboration with the military departments, develop a process for collecting and calculating resident satisfaction data because there is no Under Secretary of Defense for Sustainment. Based on the department’s comments, we revised the addressee of this recommendation, directing action to the Assistant Secretary of Defense for Sustainment. The department noted that effective with the survey collection effort for Fiscal Year 2021, it would refine the process for collecting and calculating resident satisfaction data from the military departments to ensure that DOD compiles and calculates data in a standardized and accurate way. We are sending copies of this report to the appropriate congressional committees; Senator Catherine Cortez Masto; Senator Mark Warner; Representative Gus Bilirakis; the Secretary of Defense; and the Secretaries of the Departments of the Army, the Navy, and the Air Force. In addition, the report is available at no charge on our website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or FieldE1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology The Conference Report accompanying a bill for the Fiscal Year 2019 Department of Defense Appropriations Act included a provision for us to review ongoing issues within privatized military housing. This report examines the extent to which the Office of the Secretary of Defense (OSD) and the military departments (1) conduct oversight of privatized military housing for servicemembers and their families, (2) have communicated their roles and responsibilities to servicemembers and their families, and (3) have developed and implemented initiatives to improve privatized housing. We included all privatized housing projects in each military department. For each of our objectives, we reviewed OSD and military department policies and guidance for the implementation of the Military Housing Privatization Initiative (MHPI) program, including guidance on the authority, roles, and responsibilities for oversight and management of privatized housing. We evaluated the extent to which the evidence we collected aligned with OSD policy and stated goals for oversight and management of privatized housing, and whether the evidence adhered to the principles in Standards for Internal Control in the Federal Government. We conducted interviews with officials from the Office of the Assistant Secretary of Defense for Sustainment, Office of Facilities Management; the Office of the Deputy Assistant Secretary of the Army (Installations, Housing and Partnerships); the Army Installation Management Command; the Army Assistant Chief of Staff for Installation Management; the Assistant Secretary of the Air Force for Installations, Environment, and Energy; the Air Force Civil Engineer Center; the Commander, Navy Installations Command; the Commander, Naval Facilities Engineering Command; the Marine Corps Installation Command; and representatives from each of the 14 private partners that are currently responsible for privatized housing projects. 
We visited a non-generalizable sample of 10 installations selected to represent each of the military departments, six private partners—including the five largest, which own the majority of privatized military housing—and geographic and climate diversity. The selected sites in our non-generalizable sample were three Army installations—Fort Bragg, North Carolina; Fort Huachuca, Arizona; and Fort Sill, Oklahoma; two Navy installations—Naval Station Norfolk, Virginia, and Naval Base San Diego, California; two Marine Corps installations—Marine Corps Base Camp Lejeune, North Carolina, and Marine Corps Base Camp Pendleton, California; and three Air Force installations—Davis-Monthan Air Force Base, Arizona; Langley Air Force Base, Virginia; and Tinker Air Force Base, Oklahoma. We reviewed the ground leases and other MHPI project documents for housing projects at each of these locations, and at each installation we met with officials from the installation commander's office and conducted interviews with officials from both the installation military housing office and representatives from the private partners. To collect input from residents of privatized housing, we facilitated 15 focus groups with a self-selected group of current residents of privatized military housing. During the focus groups, a methodologist led participants through a structured questionnaire, which we pretested with 11 residents of privatized housing prior to the first focus group. To solicit participants for our focus groups, we requested that local military housing office officials email all current residents of privatized housing prior to our visit to inform them of our focus groups. Individuals interested in participating in our focus group sessions were instructed to contact us directly for further information. More than 70 residents participated in our focus groups. In addition to the 15 focus groups, we conducted five sessions in which fewer than three residents participated. 
We collected information from these residents, but we did not include their input in our focus group analysis. Comments from focus group participants are not generalizable to all residents of privatized military housing. We also developed and administered a publicly available online tool that provided an opportunity for any resident of privatized military housing to voluntarily submit information on their experiences. Participants had the option to remain anonymous and make multiple submissions in order to provide us information on their experience at more than one installation. We developed our tool in conjunction with a survey methodologist to ensure it met our requirements for publicly available anonymous data collection instruments, and conducted five pretests of the questions with residents of privatized housing. Our online tool was made available to the public from June 17, 2019, through August 31, 2019. We received a total of 658 responses. In analyzing information provided through the online tool, we took steps to identify responses that did not meet our criteria and removed 13 responses, including responses with duplicative usernames or Internet Protocol (IP) addresses that described the same experience or had been started but not fully completed; responses from DOD officials who informed us they had submitted responses to test our tool; and responses from residents living on installations outside of the United States. In reporting results from our online tool, we used the following qualifiers in presenting our results: most (80 percent or higher), majority (51 to 79 percent), and some (less than 50 percent). Findings from our focus groups and online tool are not generalizable to all privatized military housing residents. To determine the extent to which DOD conducts oversight of privatized military housing for servicemembers and their families, we conducted the following additional data analysis. 
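For illustration, the reporting qualifiers above can be expressed as a simple threshold function. This is a sketch for clarity, not code GAO used; the handling of values falling exactly between the stated bands (for example, 50 percent) is an assumption, since the report does not specify it.

```python
# Illustrative sketch only -- not GAO's actual tooling. Maps a response
# percentage to the qualifier used in reporting online-tool results.
# Thresholds (80+, 51-79, <50) come from the report; treating exactly
# 50 percent as "some" is an assumption.
def qualifier(percent: float) -> str:
    if percent >= 80:
        return "most"
    if percent >= 51:
        return "majority"
    return "some"

for p in (85, 60, 40):
    print(p, qualifier(p))
```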
Through the steps described in the following bullets, we determined these data to be reliable for the purposes of our findings: To determine the extent to which performance incentive fee metrics assessed the condition of privatized housing, we collected information on the structure of the incentive fees from private partners for 74 privatized housing projects and received confirmation that there are 5 projects that do not have incentive fee plans as part of their business agreements. We reviewed all of the incentive fee plans and identified commonly used metrics and indicators. We met with officials from the military housing offices, the military departments, and private partner representatives to discuss the administration and measurement of the incentive fee structures. To gain an understanding of how performance incentive fees are used, we reviewed documents and guidance from OSD and the military departments that explain the processes for developing and awarding performance incentive metrics and fees. In addition, we obtained information from residents through focus groups and our online tool and spoke with military housing office officials to obtain anecdotal information regarding the extent to which the metrics are adequately measuring the condition of the housing. To assess the extent to which private partner work order data could be used to monitor and track the condition of privatized homes, we collected and reviewed private partner work order data from October 2016 through April 2019 from each of the 79 MHPI projects and discussed these data with the private partners and military department officials. Given that we requested the work order data from the private partners in April and May 2019, we selected the October 2016 through April 2019 time frame to include complete data for fiscal years 2017 and 2018 and the most comprehensive data available at the time for fiscal year 2019. 
Prior to requesting these data, we contacted representatives from each of the 14 private partner companies to discuss our forthcoming data request and to better understand each company's data system and potential limitations for using the data. Subsequently, we requested that each partner provide us with data for all work orders across all data elements for each installation under their management. We received data on over 8 million work orders among the 14 private partners. We performed manual testing on initial data files received by each partner to identify issues that would impact the validity and reliability of using these data for ongoing monitoring and tracking of the condition of privatized housing units. In doing so, we identified instances of anomalies in work order data from each of the 14 partners. For 12 of the 14 partners, we found at least one of the following anomalies in the initial work order data files received for the time period requested: (1) duplicate work orders; (2) work orders with completion dates prior to the dates that a resident had submitted the work order; and (3) work orders still listed as in-progress for more than 18 months. We reviewed work order data from at least one installation for each private partner to check for instances of these anomalies. We also held follow-up discussions with 10 of the 14 private partners to discuss anomalies found in the data and potential factors contributing to the presence of these anomalies. In addition to the initial data collected on all of the work orders, we requested a second data run of work orders over the same time period—October 1, 2016, through April 30, 2019—for service requests related to lead-based paint, mold, and pest/rodent/vermin infestation. As part of this request, we asked that partners provide the criteria used for querying the data they provided us. 
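The three anomaly checks described above can be sketched in a few lines. This is an illustration only, not the analysis GAO performed; the field names ("id", "submitted", "completed", "status") and record layout are assumptions, not any private partner's actual data schema.

```python
# Illustrative sketch of the three work order anomaly checks described
# in the text: (1) duplicate work orders, (2) completion dates before
# submission dates, and (3) work orders in progress for more than
# 18 months. Field names are hypothetical.
from collections import Counter
from datetime import datetime, timedelta

def flag_anomalies(work_orders, as_of):
    id_counts = Counter(w["id"] for w in work_orders)
    flagged = []
    for w in work_orders:
        reasons = []
        if id_counts[w["id"]] > 1:
            reasons.append("duplicate work order")
        if w["completed"] is not None and w["completed"] < w["submitted"]:
            reasons.append("completed before submitted")
        # ~18 months, approximated as 548 days
        if w["status"] == "in-progress" and as_of - w["submitted"] > timedelta(days=548):
            reasons.append("in progress more than 18 months")
        if reasons:
            flagged.append((w["id"], reasons))
    return flagged

orders = [
    {"id": 1, "submitted": datetime(2018, 5, 1), "completed": datetime(2018, 4, 1), "status": "closed"},
    {"id": 2, "submitted": datetime(2017, 1, 1), "completed": None, "status": "in-progress"},
    {"id": 2, "submitted": datetime(2017, 1, 1), "completed": None, "status": "in-progress"},
]
flags = flag_anomalies(orders, as_of=datetime(2019, 4, 30))
```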
We reviewed these data to determine how requests related to specific hazards were categorized and prioritized in the various partners' work order tracking systems. To identify differences in terminology and business practices used by the private partners in their work order tracking systems, we requested and reviewed private partner documentation of data definitions and protocols for managing work order data. In addition, we conducted interviews with military department officials to discuss ongoing efforts by the military departments to collect and analyze work order data. To evaluate resident satisfaction data reported in OSD's reports to Congress on privatized housing, we reviewed the processes for collecting, calculating, and reporting these data for the three most recently issued reports for fiscal years 2015, 2016, and 2017. We reviewed the instructions OSD provided to the military departments outlining how the military departments are to submit resident satisfaction data to OSD. We also reviewed the question the military departments asked on their annual surveys to gauge resident satisfaction. We then requested the survey data the military departments submitted to OSD to be included in the annual report to Congress for fiscal years 2015, 2016, and 2017. We performed data quality checks on a random sample of data reported by OSD and identified inaccuracies. We reviewed how the military departments calculated overall resident satisfaction for each privatized housing project. Further, we discussed these data with OSD and the military departments to assess the validity and reliability of using these data for identifying overall tenant satisfaction with the condition of privatized housing. 
To determine the extent to which the military departments have communicated their respective military housing office roles and responsibilities to residents, we reviewed military department policies and guidance related to their roles and responsibilities for working with residents of privatized housing. During our site visits to 10 installations, we interviewed military department housing office officials and private partner representatives to discuss their specific roles and responsibilities. During our 15 focus groups, we asked questions soliciting information about residents' understanding of the roles and responsibilities of the military housing office and the dispute resolution process. We also solicited feedback through our online tool regarding residents' experience reporting maintenance issues and working with military housing offices and private partners to get maintenance issues resolved. To determine the extent to which DOD and private partners have developed and implemented initiatives to improve privatized housing, we interviewed OSD and military department officials to discuss ongoing initiatives developed over the course of our audit work aimed at improving MHPI and reviewed relevant guidance. We met with private partner representatives to discuss their involvement in developing these initiatives, as well as to gain an understanding of any challenges or concerns that may impact the implementation of these initiatives. Following the passage of the National Defense Authorization Act for Fiscal Year 2020, we reviewed provisions of the statute designed to improve the condition of privatized housing and evaluated the extent to which these provisions would impact ongoing or planned DOD initiatives or provide new oversight roles and responsibilities for OSD and the military departments. 
We discussed these provisions with OSD officials and private partner representatives to understand how, if at all, their implementation may impact the privatized housing projects, as well as any potential barriers to implementation in the current legal construct of the program. We conducted this performance audit from November 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: List of Privatized Military Housing Projects as of September 30, 2017 In table 3, we provide the complete listing of the Department of Defense’s 79 privatized military housing projects, as of September 30, 2017. This list reflects information that the Office of the Assistant Secretary of Defense for Sustainment provided in its annual report to Congress for the time period of October 1, 2016, through September 30, 2017. The report was provided to Congress in May 2019. The projects can consist of one or multiple installations. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Elizabeth A. Field, Director, (202) 512-2775 or FieldE1@gao.gov. Staff Acknowledgments In addition to the contact above, the following are key contributors to this report: Kristy Williams (Assistant Director), Tida Barakat Reveley (Analyst in Charge), Austin Barvin, Ronnie Bergman, Vincent Buquicchio, William Carpluk, Juliee Conde-Medina, Mae Jones, Jordan Tibbetts, Kelly Rubin, Monica Savoy, and John Van Schaik. Related GAO Products Military Housing: Preliminary Recommendations to Strengthen DOD's Oversight and Monitoring of Privatized Housing. GAO-20-471T. 
Washington, D.C.: March 3, 2020. Military Housing Privatization: Preliminary Observations on DOD’s Oversight of the Condition of Privatized Military Housing. GAO-20-280T. Washington, D.C.: December 3, 2019. Defense Infrastructure: Additional Actions Could Enhance DOD’s Efforts to Identify, Evaluate, and Preserve Historic Properties. GAO-19-335. Washington, D.C.: June 19, 2019. Military Housing Privatization: DOD Should Take Steps to Improve Monitoring, Reporting, and Risk Assessment. GAO-18-218. Washington, D.C.: March 13, 2018. Defense Infrastructure: Army Has a Process to Manage Litigation Costs for the Military Housing Privatization Initiative. GAO-14-327. Washington, D.C.: April 3, 2014. Military Housing: Information on the Privatization of Unaccompanied Personnel Housing. GAO-14-313. Washington, D.C.: March 18, 2014. Military Housing: Enhancements Needed to Housing Allowance Process and Information Sharing among Services. GAO-11-462. Washington, D.C.: May 16, 2011. Military Housing Privatization: DOD Faces New Challenges Due to Significant Growth at Some Installations and Recent Turmoil in the Financial Markets. GAO-09-352. Washington, D.C.: May 15, 2009. Military Housing: Management Issues Require Attention as the Privatization Program Matures. GAO-06-438. Washington, D.C.: April 28, 2006. Military Housing: Further Improvement Needed in Requirements Determination and Program Review. GAO-04-556. Washington, D.C.: May 19, 2004. Military Housing: Better Reporting Needed on the Status of the Privatization Program and the Costs of Its Consultants. GAO-04-111. Washington, D.C.: October 9, 2003. Military Housing: Opportunities That Should Be Explored to Improve Housing and Reduce Costs for Unmarried Junior Servicemembers. GAO-03-602. Washington, D.C.: June 10, 2003. Military Housing: Management Improvements Needed as the Pace of Privatization Quickens. GAO-02-624. Washington, D.C.: June 21, 2002. 
Military Housing: DOD Needs to Address Long-Standing Requirements Determination Problems. GAO-01-889. Washington, D.C.: August 3, 2001. Military Housing: Continued Concerns in Implementing the Privatization Initiative. GAO/NSIAD-00-71. Washington, D.C.: March 30, 2000. Military Housing: Privatization Off to a Slow Start and Continued Management Attention Needed. GAO/NSIAD-98-178. Washington, D.C.: July 17, 1998.
Why GAO Did This Study Congress enacted the Military Housing Privatization Initiative in 1996 to improve the quality of housing for servicemembers. DOD is responsible for general oversight of privatized housing projects. However, private-sector developers are responsible for the construction, renovation, maintenance, and repair of about 99 percent of military housing in the United States. Recent reports of hazards, such as mold and pest infestation, have raised questions about DOD's oversight of privatized military housing. Conference Report 115-952 included a provision for GAO to review ongoing issues within privatized housing. This report assesses, among other things, the extent to which OSD and the military departments (1) conduct oversight of privatized housing and (2) have developed and implemented initiatives to improve privatized housing. GAO reviewed policies and guidance; visited a non-generalizable sample of 10 installations; analyzed work order data; and interviewed DOD officials and private partner representatives. What GAO Found The Office of the Secretary of Defense (OSD) and the military departments conduct a range of oversight activities, but some of these activities have been more extensive than others. Specifically, GAO found that DOD provides periodic reports to Congress on the status of privatized housing, but reported results on resident satisfaction are unreliable, and thus potentially misleading, due to variances in the data provided to OSD by the military departments and in how OSD has calculated and reported these data. OSD has made progress in developing and implementing a series of initiatives aimed at improving privatized housing. In addition, Congress established several requirements addressing privatized housing reform. 
However, DOD officials and private partner representatives have identified challenges that could affect implementation of these various initiatives. These include concerns that implementation could have unintended negative impacts on the financial viability of the privatized housing projects. However, DOD has not assessed the risk of the initiatives on project finances. What GAO Recommends GAO is making 12 recommendations, including that DOD take steps to improve housing condition oversight, performance indicators, maintenance data, and resident satisfaction reporting as well as to assess the risk of the initiatives on project finances. DOD generally concurred with the recommendations and identified actions it plans to take to implement them.
Selected Agencies Collect Some Information from Commenters and Accept Anonymous Comments through Regulations.gov and Agency-Specific Websites Consistent with the discretion afforded by the APA, Regulations.gov and agency-specific comment websites use required and optional fields on comment forms to collect some identity information from commenters. In addition to the text of the comment, agencies may choose to collect identity information by requiring commenters to fill in other fields, such as name, address, and email address before they are able to submit a comment. Regardless of the fields required by the comment form, the selected agencies all accept anonymous comments in practice. Further, because the APA does not require agencies to authenticate submitted identity information, neither Regulations.gov nor the agency-specific comment websites contain mechanisms to check the validity of identity information that commenters submit through comment forms. Regulations.gov and agency-specific comment websites also collect some information about public users’ interaction with their websites through application event logs and proxy server logs, though the APA does not require agencies to collect or verify it as part of the rulemaking process. This information, which can include a public user’s Internet Protocol (IP) address, browser type and operating system, and the time and date of webpage visits, is collected separately from the comment submission process as part of routine information technology management of system security and performance, and cannot be reliably connected to specific comments. Most Selected Agencies Have Some Internal Guidance Related to Commenter Identity Seven of 10 selected agencies have documented some internal guidance associated with the identity of commenters during the three phases of the public comment process: intake, analysis, and response to comments. 
However, the focus and substance of this guidance varies by agency and phase of the comment process. As shown in table 1, for selected agencies that have guidance associated with the identity of commenters, the guidance most frequently relates to the comment intake or response to comments phases of the public comment process. The guidance for these phases addresses activities such as managing duplicate comments (those with identical or near-identical comment text but varied identity information) or referring to commenters in a final rule. Agencies are not required by the APA to develop internal guidance associated with the public comment process generally, or identity information specifically. Selected Agencies’ Treatment of Identity Information Collected during the Public Comment Process Varies Within the discretion afforded by the APA, the 10 selected agencies’ treatment of identity information during the comment intake, comment analysis, and response to comments phases of the public comment process varies. During the comment intake phase in particular, variation in how agencies identify duplicate comments and post comments results in identity information being inconsistently presented on Regulations.gov or the agency-specific websites. Generally, officials told us that their agencies either (1) maintain all comments within the comment system or (2) maintain some duplicate comment records outside of the comment system, for instance, in email file archives. For example, according to officials of one participating agency—the Wage and Hour Division (WHD)—all duplicate comments are stored in Regulations.gov. 
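As a minimal illustration of the duplicate-comment concept defined above (identical or near-identical text submitted under different names), the sketch below groups comments by normalized text. The data layout and normalization rule are assumptions for clarity, not any agency's actual comment system.

```python
# Illustrative sketch only -- not Regulations.gov or any agency system.
# Groups comments whose text is identical after collapsing case and
# whitespace, so duplicates submitted under different names fall into
# the same group.
from collections import defaultdict

def group_duplicates(comments):
    """comments: iterable of (commenter_name, comment_text) pairs."""
    groups = defaultdict(list)
    for name, text in comments:
        key = " ".join(text.lower().split())  # normalize case/whitespace
        groups[key].append(name)
    return dict(groups)

comments = [
    ("A. Smith", "I support this rule."),
    ("B. Jones", "I support  this rule. "),
    ("C. Lee", "I oppose this rule."),
]
groups = group_duplicates(comments)
```

A grouping like this makes the trade-off described in the text concrete: an agency posting one example per group would show one name for a comment actually submitted by many commenters.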
Our analysis of WHD comments did not suggest that any comments were missing from Regulations.gov. However, in one example, almost 18,000 duplicate comments were included in attachments under one individual’s name in the comment title. While all of the comments are included within 10 separate attachments, none of the identity information included with these comments can be easily found without manually opening and searching all 10 attachments, most of which contain approximately 2,000 individual comments. Selected agencies’ treatment of identity information during the comment analysis phase also varies. Specifically, program offices with the responsibility for analyzing comments place varied importance on identity information during the analysis phase. Finally, all agencies draft a response to comments with their final rule, but the extent to which the agencies identify commenters or commenter types in their response also varies across the selected agencies. Selected Agencies’ Practices Associated with Posting Identity Information Are Not Clearly Communicated to Public Users of Comment Websites Our analysis of Regulations.gov and agency-specific comment websites shows that the varied comment posting practices of the 10 selected agencies are not always documented or clearly communicated to public users of the websites. In part to facilitate effective public participation in the rulemaking process, the E-Government Act of 2002 requires that all public comments and other materials associated with a given rulemaking should be made “publicly available online to the extent practicable.” Additionally, key practices for transparently reporting open government data state that federal government websites—like those used to facilitate the public comment process—should fully describe the data that are made available to the public, including by disclosing data sources and limitations. 
We found that the selected agencies we reviewed do not effectively communicate the limitations and inconsistencies in how they post identity information associated with public comments. As a result, public users of the comment websites lack information related to data availability and limitations that could affect their ability to use and make informed decisions about the comment data and effectively participate in the rulemaking process themselves. Regulations.gov and Participating Agency Websites Public users of Regulations.gov seeking to submit a comment are provided with a blanket disclosure statement related to how their identity information may be disclosed, and are generally directed to individual agency websites for additional detail about submitting comments. While additional information is provided in the Privacy Notice, User Notice, and Privacy Impact Assessment for Regulations.gov, public users are not provided any further detail on Regulations.gov regarding what information, including identity information, they should expect to find in the comment data. Additionally, there is not enough information to help public users determine whether all of the individual comments and associated identity information are posted. Available resources on Regulations.gov direct public users to participating agencies’ websites for additional information about agency-specific review and posting policies. Seven of the eight participating agencies’ websites direct public users back to Regulations.gov and the Federal Register, either on webpages that are about the public comment process in general, or on pages containing information about specific NPRMs. 
Three of these participating agencies—the Environmental Protection Agency (EPA), Fish and Wildlife Service (FWS), and Food and Drug Administration (FDA)—do provide public users with information beyond directing them back to Regulations.gov or the Federal Register, but only FDA provides users with details about posting practices that are not also made available on Regulations.gov. The eighth participating agency—the Employee Benefits Security Administration (EBSA)—does not direct public users back to Regulations.gov, and instead re-creates all rulemaking materials for each NPRM on its own website, including individual links to each submitted comment. However, these links go directly to comment files, and do not link to Regulations.gov. While EBSA follows departmental guidance associated with posting duplicate comments, which allows some discretion in posting practices, the agency does not have a policy for how comments are posted to Regulations.gov or its own website. Further, in the examples we reviewed, the content of the NPRM-specific pages on EBSA’s website does not always match what is posted to Regulations.gov. Because participating agencies are not required to adhere to standardized posting practices, Regulations.gov directs public users to participating agency websites for additional information about posting practices and potential data limitations. However, these websites do not describe the limitations associated with the identity information contained in publicly posted comments. As allowed for under the APA, all of the participating agencies in our review vary in the way in which they post identity information associated with comments—particularly duplicate comments. However, the lack of accompanying disclosures may potentially lead users to assume, for example, that only one entity has weighed in on an issue when, actually, that comment represents 500 comments. 
Without better information about the posting process, the inconsistency in the way in which duplicate comments are presented to public users of Regulations.gov limits public users’ ability to explore and use the data and could lead users to draw inaccurate conclusions about the public comments that were submitted and how agencies considered them during the rulemaking process. Agency-Specific Comment Sites Both nonparticipating agencies use comment systems other than Regulations.gov and follow standardized posting processes associated with public comments submitted to their respective comment systems, but the Securities and Exchange Commission (SEC) has not clearly communicated these practices to the public. Although it appears to users of the SEC website that the agency follows a consistent process for posting duplicate comments, at the time of our June 2019 report, this practice had not been documented or communicated to public users of its website. In contrast, FCC identifies its policies for posting comments and their associated identity information in a number of places on the FCC.gov website, and on its Electronic Comment Filing System (ECFS) web page within the general website. Regarding comments submitted to rulemaking proceedings through ECFS, public users are informed that all information submitted with comments, including identity information, will be made public. Our review of ECFS comment data did not identify discrepancies with this practice. Although the public comment process allows interested parties to state their views about prospective rules, the lack of communication with the public about the way in which agencies treat identity information during the posting process, particularly for duplicate comments, may inhibit users’ meaningful participation in the rulemaking process. 
While the APA does not include requirements for commenters to provide identity information, or for agency officials to include commenters’ identity as part of their consideration of comments, key practices for transparently reporting open government data state that federal government websites—like those used to facilitate the public comment process—should fully describe the data that are made available to the public, including by disclosing data sources and limitations. In our June 2019 report, we made eight recommendations. Specifically, we recommended that five of the selected agencies establish a policy for posting comments, and that those five agencies plus three others take action to more clearly communicate their policies for posting comments, particularly with regard to identity information and duplicate comments. The eight agencies generally agreed with our recommendations and identified actions they planned to take in response, such as developing policies for posting duplicate comments and communicating those in various ways to public users. Since issuing our June 2019 report, SEC has taken action that is responsive to the recommendation we made to it. Specifically, in September 2019, SEC issued a memorandum that reflects SEC’s internal policies for posting duplicate comments and associated identity information. In addition, SEC has communicated these policies to public users on the SEC.gov website by adding a disclaimer on the main comment posting page that describes how the agency posts comments. Chairmen Portman and Lankford, Ranking Members Carper and Sinema, and Members of the Subcommittees, this concludes my prepared remarks. I would be happy to answer any questions you may have at this time. GAO Contact and Staff Acknowledgments For further information regarding this testimony, please contact Seto J. Bagdoyan, (202) 512-6722 or bagdoyans@gao.gov. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are David Bruno (Assistant Director), Elizabeth Kowalewski (Analyst in Charge), and Dahlia Darwiche. Other individuals who also contributed to the report on which this testimony is based include Enyinnaya David Aja, Gretel Clarke, Lauren Kirkpatrick, James Murphy, Alexandria Palmer, Carl Ramirez, Shana Wallace, and April Yeaney. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Federal agencies publish on average 3,700 proposed rules yearly and are generally required to provide interested persons (commenters) an opportunity to comment on these rules. In recent years, some high-profile rulemakings have received extremely large numbers of comments, raising questions about how agencies manage the identity information associated with comments. While the APA does not require the disclosure of identifying information from a commenter, agencies may choose to collect this information. This testimony summarizes GAO's June 2019 report on public comment posting practices (GAO-19-483). In that report, GAO examined (1) the identity information collected by comment websites; (2) the guidance agencies have related to the identity of commenters; (3) how selected agencies treat identity information; and (4) the extent to which selected agencies clearly communicate their practices associated with identity information. The agencies were selected on the basis of the volume of public comments they received on rulemakings. For this testimony, GAO obtained updates on the status of recommendations made to the selected agencies.

What GAO Found

The Administrative Procedure Act (APA) governs the process by which many federal agencies develop and issue regulations, which includes the public comment process (see figure). In June 2019, GAO found that Regulations.gov and agency-specific comment websites collect some identity information—such as name, email, or address—from commenters who choose to provide it during the public comment process. The APA does not require commenters to disclose identity information when submitting comments. In addition, agencies have no obligation under the APA to verify the identity of such parties during the rulemaking process. GAO found in the June 2019 report that seven of 10 selected agencies have some internal guidance associated with the identity of commenters, but the substance varies.
This reflects the differences in the way that the selected agencies handle commenter identity information internally. GAO also found that the selected agencies' practices for posting public comments to comment websites vary considerably, particularly for duplicate comments (identical or near-identical comment text but varied identity information). For example, one agency posts a single example of duplicate comments and indicates the total number of comments received, but only the example is available to public users of Regulations.gov. In contrast, other agencies post all comments individually. As a result, identity information submitted with comments is inconsistently presented on public websites. The APA allows agencies discretion in how they post comments, but GAO found that selected agencies do not clearly communicate their practices for how comments and identity information are posted. GAO's key practices for transparently reporting government data state that federal government websites should disclose data sources and limitations to help public users make informed decisions about how to use the data. Without such disclosures, public users of the comment websites could reach inaccurate conclusions about who submitted a particular comment, or how many individuals commented on an issue.

What GAO Recommends

In June 2019, GAO made recommendations to eight of the selected agencies regarding implementing and communicating public comment posting policies. The agencies generally agreed with the recommendations and identified actions they planned to take in response. Since the June 2019 report, one agency has implemented GAO's recommendation.
gao_GAO-19-444T
SBA’s STEP Grants Management Process Does Not Provide Reasonable Assurance of Compliance with Some Requirements of Applicable Law

Our report found that SBA’s STEP grants management process does not provide reasonable assurance that STEP grant recipients meet two of the three TFTEA requirements we reviewed before the grant is closed out. TFTEA contains specific requirements for STEP, including:

Proportional distribution requirement. SBA must distribute grant funds in a way that caps the amount of grant funds distributed to the 10 states with the largest numbers of eligible small businesses at 40 percent of the total amount awarded each year. This requirement ensures that states with fewer eligible small businesses receive funding, and is known as the “proportion of amounts” clause in the law.

Total match requirement. States must provide either a 25 percent or 35 percent nonfederal total match to the federal grant amount.

Cash match requirement. A state’s match cannot be less than 50 percent cash.

SBA’s Process Provides Reasonable Assurance of Compliance with TFTEA’s Proportional Distribution Requirement

First, we found that OIT has established a process for ensuring compliance with the TFTEA requirement outlined in the “proportion of amounts” clause of the statute. OIT officials told us they review data from the Department of Commerce’s Census Bureau that show the number of exporting small and medium-sized businesses in each state, and then use these data to determine the top 10 states. According to OIT officials, they use the most recent data available, with an approximately 2- to 3-year lag. OIT officials told us that they planned to use available 2016 Census data to determine the top 10 states for the fiscal year 2018 award cycle and then, after receiving applications, determine award amounts that would comply with this requirement.
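As described above, the proportional distribution test reduces to comparing the top 10 states’ combined awards against 40 percent of the total funds awarded that year. The sketch below illustrates that comparison; the state names, dollar amounts, and function name are hypothetical illustrations, not SBA’s actual allocation logic.

```python
def top10_share_ok(awards_by_state, top10_states, cap=0.40):
    """Check the 'proportion of amounts' requirement: the 10 states with
    the most eligible small businesses may receive no more than 40 percent
    of the total grant funds awarded that year."""
    total = sum(awards_by_state.values())
    top10_total = sum(awards_by_state[s] for s in top10_states)
    return top10_total <= cap * total

# Hypothetical award amounts in dollars, for illustration only.
awards = {"CA": 300_000, "TX": 250_000, "WY": 400_000, "VT": 350_000, "MT": 300_000}
top10 = ["CA", "TX"]  # assume these are the Census-derived top-10 states present
print(top10_share_ok(awards, top10))  # the $550,000 top-10 share is under the $640,000 cap
```

In practice, OIT makes this determination after receiving applications, so award amounts can be adjusted to stay under the cap rather than simply checked after the fact.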
SBA’s Review Process Did Not Document that States Met TFTEA’s Total Match Requirement before Grant Closeout

Second, we found SBA’s process did not document that states met TFTEA’s total match requirement before grant closeout. TFTEA requires that states provide matching funds, and the total match is typically 25 percent of the combined state-federal amount. At least half of the total match must be cash. Matching share requirements are often intended to ensure local financial participation, and may serve to hold down federal costs. If SBA determines that a state is not providing sufficient matching funds, it can withhold reimbursement for expenses incurred under the grant. Figure 1 illustrates the STEP funding proportions described above.

In our report, we identified four instances where, according to OIT’s documentation, states reported insufficient total matches—one in fiscal year 2015 and three in fiscal year 2016. OIT’s documentation showed that these four states fell short of the required total matching funds by about $76,000 combined over these 2 years of the program. SBA officials told us they nevertheless had closed these grants. OIT officials provided several explanations for their actions. First, OIT officials told us that of these four states, two submitted additional information after the grant had closed, indicating that the states had met the matching requirement. OIT officials stated that they did not verify the accuracy of the total match information before grant closure because of OIT staff error. With respect to the other two states, OIT initially stated that it was working with OGM to verify that the total match requirement had not been met, and how best to recover the funds. Subsequently, OIT reported OGM’s determination that one state had in fact met the match requirement, but that the other had not. In the case of the state that did not meet the requirement, OGM determined that SBA had overpaid federal funds to that state by about $19,600.
However, after contacting the state and looking into the matter further, OGM conducted a review of quarterly reporting documentation for this state, and determined that the state had in fact exceeded its required match by about $3,800. Though all four of the states initially identified were eventually determined to have met the total match requirement, SBA did not have an adequate process in place to ensure documentation of a full match before grant closeout. Federal internal control standards state that management should design control activities. By designing and executing appropriate control activities, management helps fulfill its responsibilities and address identified risks in the internal control system. Without a process for effectively documenting that the total match requirement has been met and reviewing this documentation before grant closeout, SBA does not have reasonable assurance that states have complied with TFTEA’s total match requirement, and risks overpayment of federal funds.

SBA Does Not Monitor States’ Compliance with TFTEA’s Cash Match Requirement

Third, we found that OIT’s process does not provide reasonable assurance that states have complied with the TFTEA cash match requirement. As previously noted, TFTEA requires that states provide at least half of their total match in the form of cash. TFTEA allows for the remaining half to be any mixture of cash, in-kind contributions, and indirect costs. OIT collects information about the types of expended matching funds, including the proportion provided in cash; however, OIT does not have a process in place to use this information to monitor states’ compliance with this requirement. OIT documents show that while proposed cash match amounts are recorded, OIT does not track or analyze states’ expended cash matching funds during or at the close of the grant cycle.
OIT officials told us that this information is included in the states’ quarterly detailed expenditure worksheets, and therefore can be reviewed for compliance on a case-by-case basis. However, OIT program officials told us that they do not regularly analyze this information to determine what proportion of the total match the cash portion constitutes. The program’s authorizing legislation does not define “cash,” and neither does the Uniform Guidance. OIT considers the salaries of state trade office staff who work on administering the grant to be a form of cash and, according to OIT officials, most states use state staff salaries as their total match, including the required cash portion. In addition, we found that OIT does not have a process for ensuring that states reporting staff salaries as their required cash match are not also using grant funds from STEP to pay for portions of these same salaries. As such, SBA cannot consistently determine whether states are meeting the TFTEA cash match requirement, and risks closing out grants for which states have not met the cash match requirement. Using part of the grant to cover the cost of the state’s matching requirement in this way could have the effect of reducing the match below the thresholds mandated by TFTEA. In our discussions with officials from 12 low-use states that received STEP grants in fiscal year 2015, 2 states reported using the grant to offset state staff salaries. When we asked OIT officials what process they had in place to determine whether states were using staff salaries paid for with STEP funds as part of their match amount, OIT officials told us that they were not aware that STEP grantees had engaged in this practice, and therefore did not monitor for it. SBA’s grants management standard operating procedure states that the agency should monitor grantees for compliance with the terms and conditions of the awards, which includes compliance with applicable federal law.
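The total match and cash match tests described above reduce to simple arithmetic: the state’s match must reach the required share of the combined state-federal amount (typically 25 percent), and at least half of that match must be cash. The sketch below, using hypothetical dollar figures and a hypothetical function name, shows one way such a check could be expressed; it is an illustration, not SBA’s or OGM’s actual review procedure.

```python
def match_compliant(federal_amount, state_match, cash_match, required_share=0.25):
    """Check STEP's two matching-fund tests on hypothetical figures:
    - total match: the state's match must be at least `required_share`
      of the combined state-federal amount;
    - cash match: at least half of the state's total match must be cash."""
    combined = federal_amount + state_match
    total_ok = state_match >= required_share * combined
    cash_ok = cash_match >= 0.5 * state_match
    return total_ok and cash_ok

# A state receives $300,000 in federal funds and provides a $100,000 match,
# $60,000 of it in cash (all figures hypothetical).
print(match_compliant(300_000, 100_000, 60_000))  # meets both tests
```

A check of this kind only works if expended (not just proposed) match amounts are recorded and reviewed before closeout, which is the gap the report identifies.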
Further, according to federal standards for internal control, management should design and execute control activities, and use quality information to achieve the entity’s objectives. Management should process reliable data into quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. Without processes to review whether states are meeting the cash match requirement, OIT is not implementing its responsibilities under SBA’s standard operating procedure because it cannot consistently determine whether states are meeting this requirement. Without making such a determination, SBA does not have reasonable assurance that states are contributing to the program as required by STEP’s authorizing statute. In our report, we recommended that the SBA Administrator establish a process that ensures documentation of states’ compliance with the total match requirement before grant closeout, and develop a process to determine states’ compliance with the cash match requirement. SBA agreed with these recommendations.

Some States Report Challenges to Using Grant Funds and SBA Has Not Adequately Assessed Risk to Program from Low Grant Use

Next, we looked at STEP’s grant use rate. In our report, we found that nearly 20 percent of grant funds go unused each year, despite OIT officials stating that they seek 100 percent use of grant funds. Specifically:

2015. Across all 40 recipient states, combined grant use was 81 percent, leaving 19 percent, or nearly $3.4 million, unused. This included one state that left 77 percent, or over $432,000, of its funds unused that year.

2016. Across 41 of the 43 recipient states, combined grant use was 82 percent, leaving 18 percent, or nearly $3.2 million, unused. This included one state that left nearly 95 percent, or nearly $184,000, of its funds unused that year.
We found that OIT made some changes to the program that could improve states’ ability to use all their grant funds. Changes included:

(1) Extending funds usage period to 2 years. This change allows an additional 4 quarters to conduct program activities, which, in turn, may help enable states to use the full amount of their grant funding and achieve performance targets.

(2) Eliminating travel preauthorization requirement. This change may reduce the administrative burden on state trade office staff and allow greater flexibility to use grant funds when opportunities that require travel arise with limited notice.

(3) Reducing the length of the technical proposal. This change may help to streamline the program’s application paperwork.

Some States Cited Challenges with the Program

We interviewed officials from low-use states to identify the continuing challenges they faced. We grouped the most commonly reported challenges into the following categories:

(1) Timing of the application and award processes. State officials discussed the variable and short application timeframes, and said that the award announcement happening close to the start of the grant period can make it difficult to use funds during the 1st quarter of the period.

(2) Administrative burden. State officials described challenges due to inflexible application requirements, a difficult process for repurposing funds, and burdensome and changing reporting requirements.

(3) Communication. State officials told us this was a challenge because of delays and inconsistent communication of requirements from OIT.

SBA Has Not Adequately Assessed Risk to Achieving Program Goals or Effectively Shared Best Practices

In our report, we found that OIT had not assessed and fully addressed the risk posed by some states’ low use of funds.
OIT officials told us that while they informally collect feedback from states, there is no systematic process to collect states’ perspectives on challenges with the program, including obstacles to their ability to use funds. Officials said that they seek 100 percent use for each state that receives an award, as well as for the program as a whole. Federal internal control standards specify that agency leadership should define program objectives clearly to enable the identification of risks and define risk tolerances in order to meet the goals of the program’s authorizing legislation. In addition, OIT has no systematic process to share best practices with sufficient detail that states struggling to use their STEP funds might apply those practices to improve their own programs. TFTEA requires SBA to publish an annual report regarding STEP, including the best practices of those states that achieve the highest returns on investment and significant progress in helping eligible small businesses. While 12 states used 75 percent or less of their grant funds in the fiscal year 2015 cycle, 19 states used all or almost all of their funds. SBA publishes high-level information on what it deems to be notable state activities in its annual report to Congress. OIT officials told us that, when possible, they share best practices with states that may have difficulty accessing external markets. However, OIT officials told us that they do not formally facilitate the sharing of best practices among the states, saying that best practices for promoting exports in one state might not be transferable to another state because each state is unique. According to the Uniform Guidance, grant recipients’ performance should be measured in a way that helps the federal awarding agency and other nonfederal entities improve program outcomes, share lessons learned, and spread the adoption of promising practices. 
We have also previously reported on the importance of collecting and sharing best practices, as well as the processes for doing so. By sharing detailed information with all participating states about the approaches that some grant recipients are using to successfully achieve STEP’s goals, SBA could encourage all grant recipients to improve the effectiveness of their state STEP programs, including increasing fund use rates in pursuit of OIT’s stated aim of 100 percent grant fund use. In our report, we recommended that the SBA Administrator assess the risk to achieving program goals posed by some states’ low grant fund use rates, and that assessing this risk could include examining the challenges that states reported related to the program’s application and award processes, administrative burden, and communication. We also recommended that SBA enhance collection and sharing of best practices among states that receive STEP grant funds. SBA agreed with these recommendations. Chairwoman Finkenauer, Ranking Member Joyce, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contacts for our Offices of Congressional Relations and Public Affairs are on the last page of this testimony. GAO staff who made key contributions to this statement are Adam Cowles (Assistant Director), Cristina Ruggiero (Analyst in Charge), Martin de Alteriis, Mark Dowling, Jesse Elrod, John Hussey, and Christopher Keblitis.
What GAO Found

The Small Business Administration's (SBA) management of the State Trade Expansion Program (STEP) does not provide reasonable assurance of compliance with some legal requirements. Specifically, the Trade Facilitation and Trade Enforcement Act of 2015 (TFTEA) requirements for STEP include:

Proportional distribution requirement. SBA's Office of International Trade (OIT) must distribute grant funds so that the total amount awarded to the 10 states with the highest percentage of eligible small businesses does not exceed 40 percent of the program's appropriation that year.

Total match requirement. States must provide a 25 or 35 percent nonfederal match to the federal grant amount.

Cash match requirement. A state's match cannot be less than 50 percent cash.

GAO found that, while OIT has a process to meet the distribution requirement, it does not have a process for documenting that states have met the total match requirement before grant closeout, and does not have a process to determine whether states are meeting the cash match requirement. Without such processes, SBA cannot be reasonably assured that states are contributing per the law's requirements. GAO found that, while OIT has made changes to STEP in response to states' feedback, officials from states with low grant use described ongoing challenges with the program that affect their ability to fully use funds. These challenges include compressed application and award timelines, administrative burden, and poor communication. SBA has not adequately assessed risks to the program, including the risk to achieving program goals posed by some states' low grant fund use rates. Without such an assessment, OIT's ability to support U.S. exporters may be diminished. Further, SBA has not effectively facilitated sharing best practices among states. By doing so, SBA could help states make full use of funds to achieve the program's goals.
gao_GAO-20-125
Background

FAMS’s Mission and Organization

The organization that is now FAMS was created in 1961 to counter hijackers. The Aviation and Transportation Security Act, enacted in November 2001, established TSA as the agency responsible for civil aviation security and transferred FAMS along with other aviation security-related responsibilities from the Federal Aviation Administration to TSA. Among other things, the Act expanded FAMS’s mission and workforce in response to the September 11, 2001, terrorist attacks. Specifically, the Act authorizes TSA to deploy air marshals on every passenger flight of a U.S. air carrier and requires TSA to deploy air marshals on every such flight determined by the TSA Administrator to present high security risks—with nonstop, long-distance flights, such as those targeted on September 11, 2001, considered a priority. As of August 2019, FAMS had thousands of employees and 20 field offices across the United States. FAMS’s Field Operations Division consists, in part, of these field offices, which are divided into regions overseen by Regional Directors. A Supervisory Air Marshal in Charge (SAC) manages each field office, assisted by a Deputy Supervisory Air Marshal in Charge or Assistant Supervisory Air Marshals in Charge, depending on the size of the field office. Supervisory Federal Air Marshals (SFAM) typically oversee squads of air marshals in the field offices. FAMS’s Flight Operations Division consists of the Systems Operation Control Section, among other groups. The Systems Operation Control Section is responsible for planning and preparing air marshals’ schedules, which are based on 28-day cycles known as roster periods. It is also responsible for monitoring all FAMS missions. For example, its Mission Operations Center is responsible for providing real-time support to air marshals performing missions by resolving mission-related issues, including last-minute scheduling changes. The senior leader of FAMS is the Executive Assistant Administrator / Director of FAMS.
FAMS’s Concept of Operations

Given that there are many more U.S. air carrier flights each day than can be covered by air marshals, FAMS uses a concept of operations to set forth its methodology for deploying air marshals. FAMS’s concept of operations prioritizes flights that it considers higher risk, such as those for which a known or suspected terrorist is ticketed. FAMS refers to these flights as Special Mission Coverage (SMC) and, according to FAMS Flight Operations Division officials, FAMS typically learns of them no more than 72 hours in advance of flight departure and sometimes less than an hour before departure time. According to Flight Operations Division officials, in March 2018 FAMS adopted a new concept of operations that expanded the number of SMCs. To cover SMCs, FAMS uses air marshals scheduled to standby status, who report to their home airport and fly upon notification. If no air marshals in standby status are available, FAMS may reassign air marshals from regularly scheduled missions or air marshals who were not scheduled to fly at that time.

FAMS Scheduling Guidelines

FAMS has established scheduling guidelines intended to balance mission needs with air marshals’ quality of life. Specifically, Systems Operation Control Section officials maintain guidelines detailing parameters for shift length and rest periods when scheduling air marshals to fly missions. Exceptions to these guidelines are permitted to meet mission needs, and the Mission Operations Center is not restricted by the guidelines when addressing mission scheduling issues, such as flight delays. For an overview of FAMS’s scheduling guidelines for shift length and rest, see figure 1. Air marshals are expected to be available to work as needed, 24 hours a day.
To compensate air marshals for the demands of their position, air marshals receive law enforcement availability pay, which provides eligible TSA law enforcement officers, including air marshals, a 25 percent increase in their base pay for working or being available to work an annual average of 2 hours or more of unscheduled overtime per regular workday. In addition to law enforcement availability pay, certain air marshals are eligible to receive overtime pay after working more than 85.5 hours in a single 14-day pay period.

Demographics of FAMS’s Workforce

Based on FAMS’s 2019 human capital data, approximately 85 percent of FAMS employees are law enforcement officers (e.g., air marshals). FAMS’s law enforcement workforce is largely White, male, and 40 years of age or older. As of August 2019, 68 percent of FAMS law enforcement employees identified as White, followed by 14 percent Hispanic or Latino, 12 percent Black or African American, 3 percent Asian, 1 percent American Indian or Alaskan Native, and 1 percent identified as Other or more than one race. Also as of August 2019, approximately 94 percent of FAMS law enforcement employees were male, approximately 76 percent were aged 40 or older, and approximately 51 percent have been with the agency since 2002. See figure 2.

FAMS Has Assessed Individuals’ Health, but Has Not Comprehensively Assessed Overall Workforce Health

Air Marshals Continue to Express Long-Standing Health Concerns

Air marshals report being concerned about their health. Air marshals in all six offices we visited stated that health issues are a key quality of life concern. The most common health issues air marshals raised in discussion sessions with us were extreme fatigue, mental health issues, difficulty maintaining a healthy diet, and increased frequency of illness.
In addition, the Office of Personnel Management’s (OPM) Federal Employee Viewpoint Survey (FEVS) asked FAMS employees whether they “believe they are protected from health and safety hazards.” DHS estimates that in fiscal year 2018—the most recent year for which complete FEVS results are available—less than half (44 percent) of FAMS employees believed they were protected from health and safety hazards. Moreover, during the 6-year period from fiscal year 2013 through 2018—a period during which the number of FAMS employees decreased by 17 percent—the number of workers’ compensation claims filed by FAMS employees nearly quadrupled, from 71 claims to 269 claims. From fiscal year 2013 through 2019, thirteen air marshals died while employed with FAMS, one of whom died while on duty covering a flight. According to FAMS officials, five of the thirteen deaths were caused by suicide; FAMS officials did not know the cause of death for the other eight. Concerns about air marshals’ health are long-standing. For example:

In 2008, a FAMS Medical Issues Working Group reported that air marshals had experienced various types of health issues—poor physical fitness as well as musculoskeletal injuries and upper respiratory infections. The Working Group also noted that air marshals’ disrupted sleep patterns often resulted in fatigue and long hours and made it difficult for air marshals to work out and maintain healthy eating habits.

In 2012, the FAMS-commissioned Harvard sleep and fatigue study—which included a literature review, an analysis of air marshals’ work schedules, and a survey of air marshals—reported that shift work schedules, like air marshals’ flight schedules, can cause significant acute and chronic sleep deprivation, which in turn can adversely affect their personal health, such as increasing the risk of heart disease. The study also reported that sleep deprivation degrades air marshals’ ability to think quickly, make good decisions, and recognize when fatigue impairs performance and safety.
In 2013, a FAMS review of air marshals’ fitness noted that air marshals were experiencing high injury rates when taking their physical fitness assessments and declining overall health and wellness. FAMS officials attributed air marshals’ declining overall health and wellness in part to the increasing age of air marshals.

FAMS Assesses Air Marshals’ Individual Health, But Maintains Limited Health Information in a Data System

FAMS has had initiatives in place to assess air marshals’ health. For example, since 2004 FAMS has required that individual air marshals obtain a medical examination at least every 2 years. In addition, FAMS has operated a Health, Fitness, and Wellness Program since 2015 and a Hearing Conservation Program since 2017. However, FAMS maintains limited health information in a data system.

Medical Examinations

Since 2004, FAMS has gathered information on individual air marshals’ health to help ensure employees meet its medical standards. Specifically, FAMS has required that air marshals obtain a medical examination from private, FAMS-approved clinics at least every 2 years. According to FAMS policy, these exams are to assess air marshals’ cognitive, physical, psychomotor, and psychological abilities and include certain cardiac, pulmonary, audiometric, and visual tests. FAMS’s Medical Programs Section—an office staffed with one part-time physician, five nurses, and three administrative staff—is responsible for helping ensure that air marshals obtain their required medical examinations. The office also follows up if an exam indicates an air marshal may have a health issue that could affect their ability to perform their duties, such as a sleep disorder or high blood pressure. Clinicians who conduct the periodic medical examinations provide the Medical Programs Section a medical report, which the section uses to determine if an air marshal is medically qualified to perform the essential functions of the position in a safe and efficient manner.
Air marshals deemed unqualified to perform one or more essential functions of the position, with or without reasonable accommodation, are subject to administrative actions, such as being placed on light or limited duty status and possibly non-disciplinary removal based on medical inability to perform the essential functions of the position. FAMS officials report, however, that they have not entered air marshals’ medical information, including their medical qualification status, into a data system because medical information is protected by law and their existing data system, the Federal Air Marshal Information System (FAMIS), is not suitable to maintain medical information. Instead, the Medical Programs Section maintains the results of air marshals’ medical exams—including their qualification status—in paper files. Medical Programs Section officials explained that because medical information about air marshals is not in a data system, reviewing and compiling information to obtain a comprehensive assessment—such as the number of air marshals who are medically qualified—would be resource-intensive. Medical Programs Section officials noted that it would be helpful to be able to analyze air marshals’ health records to identify any trends across the workforce. FAMS officials report that by the end of September 2020 the Medical Programs Section plans to review and evaluate software platforms that would be suitable for medical data. However, these same officials reported that, as of September 2019, the work on this initiative had been verbal and informal, so they were not able to provide documentation of this effort. OPM’s 2018 report on human capital management highlights the importance of using data to conduct workforce analyses to help identify and properly address human capital challenges.
Without information about the number and proportion of the FAMS workforce who are medically qualified, FAMS management has a limited understanding of its workforce’s ability to fly missions and fulfill their duties. Further, FAMS management cannot readily identify trends among its workforce and therefore is also limited in its ability to identify any problems and make better-informed workforce planning decisions.

Health, Fitness, and Wellness Program

In May 2015, FAMS initiated a Health, Fitness, and Wellness Program intended to address concerns with air marshals’ fitness and injury rates and improve air marshals’ overall health and wellness. According to FAMS policy, the program is intended to provide the opportunity, resources, and education necessary to enhance mission readiness and promote workplace wellness. For example, FAMS requires air marshals to participate in a health and fitness assessment twice a year to measure their fitness, including cardio-respiratory endurance, muscular strength, muscular endurance, and flexibility. FAMS physical fitness instructors administer the assessment and record the results in FAMIS, such as the number of pushups an air marshal can complete in one minute. Since February 2016, FAMS has used these data to track air marshals’ mandatory participation in the assessments and to identify individual air marshals who do not maintain their fitness levels or show improvement. However, it has not used these data to analyze trends in the fitness of the workforce as a whole. FAMS officials noted that analyzing these data could provide some indication of the state of the workforce, but they have not done so because these data provide a limited snapshot and other information would need to be considered to provide a full understanding of the workforce’s well-being. Two other aspects of the program are the establishment of Health, Fitness, and Wellness Coordinators and an optional Health Risk Assessment.
FAMS Health, Fitness, and Wellness Program coordinators are responsible for engaging with air marshals to promote a culture of wellness, build an inclusive fitness community at each location, and provide health, fitness, and wellness recommendations. The national coordinator of the Health, Fitness, and Wellness Program is also responsible for providing oversight of the program, ensuring program effectiveness, and providing FAMS leadership with program reports and assessments when requested. According to FAMS documents, the optional Health Risk Assessment is intended to help air marshals identify modifiable health risk factors. The assessments are completed by air marshals and reviewed by a certified occupational health nurse. Air marshals then meet with FAMS Medical Programs Section staff to discuss their health and recommendations to promote health and wellness and prevent disease. FAMS officials report that in 2015 they completed eight Health Risk Assessments; however, since then no additional air marshals have requested this assessment. Medical Programs Section officials stated that few air marshals took advantage of this option because air marshals prefer to obtain health services outside of the agency (i.e., with private providers) to maintain their privacy.

Hearing Conservation Program

In August 2017, FAMS established a Hearing Conservation Program to provide a coordinated approach to preventing hearing loss due to noise exposure in the work environment and to comply with federal regulations. According to FAMS documentation, air marshals are regularly or intermittently exposed to gunshot noise, such as during training activities. Through this program, FAMS has provided training about the adverse effects of noise and administered baseline audiograms and annual testing of air marshals. FAMS physicians are to evaluate data from the hearing screenings and conduct follow-up with individual air marshals when there is a change in the test results.
FAMS officials report that they maintain these test records in the Medical Programs Section’s paper files for individual air marshals. As of July 2019, FAMS estimated that about two-thirds of air marshals had obtained baseline audiograms. FAMS officials report that they do not have plans to analyze air marshals’ audiogram results in the aggregate. Instead, FAMS officials plan to review the program at least annually to identify any enhancements that could improve program efficiency and effectiveness.

FAMS Has Reviewed Some Workforce-Wide Data, But Has Not Comprehensively Assessed the Health of Its Workforce

In response to management concerns that arose in 2016 about the rising costs associated with workers’ compensation claims, FAMS began more closely monitoring certain workforce-wide data, specifically the number and costs of workers’ compensation claims. In February 2019, FAMS hired a safety specialist to begin analyzing available information on air marshals’ on-the-job injuries in an effort to identify ways to prevent them from occurring, according to FAMS officials. Although FAMS monitors certain information on workers’ compensation claims and has plans to further monitor workplace injuries, it has not used or planned to use other information it collects to assess the health of its workforce in a comprehensive manner that would enable it to look for broader health trends and risks. As previously discussed, FAMS collects and reviews in-depth health information on each air marshal at least every 2 years. However, it has not analyzed this information to distill trends across the workforce because, according to FAMS officials, doing so would be difficult given that FAMS maintains individual air marshals’ medical information in paper files.
Similarly, FAMS routinely collects data from air marshals’ health and fitness assessments but has not used these data to identify any workforce-wide trends because, as discussed above, FAMS officials state that these data would provide a limited snapshot of air marshals’ fitness. Further, although FAMS began collecting data from hearing screenings in 2018, officials indicated that they do not have any plans to analyze these data for the workforce as a whole. Furthermore, since 2015, the National Coordinator for the Health, Fitness, and Wellness Program has been responsible for providing program assessments when requested, but, as of July 2019, FAMS leadership had not requested any such reports. There is evidence of interest within FAMS in information about the overall health of the workforce. In 2017, the FAMS Advisory Council asked the Medical Programs Section to report on the health and wellness of the workforce. According to documents we reviewed, in March 2017, Medical Programs Section officials reported to the advisory council that air marshals’ most common medical restrictions were due to mental health and cardiac conditions and that the most common work-related medical issues were orthopedic issues resulting from training-related injuries. However, Medical Programs Section officials told us their assessment was not derived from an analysis of air marshals’ medical data but rather relied on anecdotal information gathered from on-call nurses fielding calls from sick air marshals and providing routine occupational health case management. OPM’s 2018 report on human capital management highlights the importance of using data to conduct workforce analyses to help identify and properly address human capital challenges.
The FAMS Medical Programs Section and other offices regularly collect information about individual air marshals’ illnesses and injuries as well as health and fitness information, but FAMS management is not analyzing it to inform decisions and address any potential health risks. If FAMS management analyzed this information in a manner consistent with relevant policies and requirements, it would be better positioned to identify medical, health, and fitness issues among the entire workforce, make informed workforce planning decisions, and take steps it deems warranted, such as providing targeted education or revising policies. Further, in February 2018, OPM identified “enhancing productivity through a focus on employee health” as a key priority within human capital management for the federal workforce. Four months later, in June 2018, TSA identified “care for our people” as a leadership principle and directed leaders to prioritize employee welfare. In November 2019, FAMS management officials provided us with a statement that said, in part, that “understanding the overall health and wellness of our air marshals is paramount.” They further stated that they now plan to create a working group to identify options to monitor the health of the workforce as a whole. They did not provide any timeframes or documentation of this effort. However, if implemented, this could be a good first step toward assessment of the overall health of the FAMS workforce. Without information on the overall health and fitness of the FAMS workforce, FAMS management is not well positioned to prioritize employee health and welfare or ensure that it deploys a workforce capable of fulfilling its national security mission.
FAMS Has Taken Steps to Address Schedule Unpredictability, but Has Not Monitored Work Hours against Guidelines or Made Them Available to Employees

FAMS Has Taken Steps to Reduce Schedule Unpredictability Resulting from Its New Concept of Operations

Air marshals in each of the six field offices we visited stated that schedule unpredictability—short-notice changes to their start times, missions, and at-home days—was a key quality of life issue. Air marshals explained that they have experienced changes to their scheduled mission days and non-mission days—such as in-office training and scheduled days off—so they could cover mission needs that came up on short notice. In addition, air marshals in four of the six field offices we visited explained that they have been taken off their scheduled missions on short notice so they could cover higher-risk missions. Air marshals in all six field offices stated that schedule unpredictability has made it difficult to manage their personal commitments. For example, air marshals described challenges planning and attending family events, maintaining personal relationships, obtaining childcare, and scheduling doctor’s visits for themselves and their children. Air marshals in one office also described anxiety about the possibility of missing a phone call asking them to report for a mission and about their ability to arrive to work on time when given short notice. Air marshals, supervisors, and FAMS management we met with explained that changes to FAMS’s deployment strategy in March 2018 that increased the number of Special Mission Coverage (SMC) assignments have increased schedule unpredictability. According to Flight Operations Division officials, FAMS typically does not learn of these missions more than 72 hours in advance.
Our analysis of FAMS data shows that the average number of SMCs per roster period more than tripled after FAMS implemented its new concept of operations in March 2018, and air marshals’ SMC-related schedule changes more than doubled during the same period. FAMS has taken some steps to mitigate the impacts of SMCs on air marshals’ schedules, as follows:

Implemented a standby shift and increased the number of air marshals on standby. FAMS Flight Operations Division officials report that they implemented a standby shift to staff SMCs in June 2018. According to Flight Operations Division officials, prior to the implementation of the standby shift, FAMS typically staffed SMCs using air marshals scheduled to domestic and international missions, recovery shifts, or ground-based duties. Flight Operations Division officials also report that they increased the number of scheduled standby shifts in an effort to curtail schedule unpredictability. Based on our review of FAMS data, the number of scheduled standby shifts more than tripled from June 2018 to December 2018. According to these officials, scheduling air marshals on standby shifts is intended to improve schedule predictability by reducing the frequency with which air marshals have their planned work schedules adjusted so they can cover SMCs.

Expanded to multiple standby shifts with staggered start times and modified standby shift start times. According to Flight Operations Division officials, field office SACs reported that FAMS frequently adjusted air marshals’ scheduled start times for the single standby shift in response to SMC requests. To reduce this schedule unpredictability, Flight Operations Division officials reported that in November 2018 they began scheduling air marshals to multiple standby shifts per day with staggered start times, rather than just one shift per day. These officials stated that they received positive feedback regarding this change during management’s subsequent field office visits.
During discussion sessions, we asked air marshals in four of the six field offices we visited for their perspectives on the effectiveness of this change and received mixed feedback. Air marshals in two field offices stated that they thought this change had improved SMC scheduling by reducing the number of changes to standby shift start times. However, air marshals in each of these four field offices stated that Mission Operations Center personnel do not always observe air marshals’ scheduled standby shift hours. Systems Operation Control Section officials noted that the magnitude of adjustments to air marshals’ standby shift start times is not always significant. To further reduce schedule unpredictability, FAMS also began modifying standby shift start times for some of its field offices in December 2018. Flight Operations Division officials stated that they modify standby shift start times for individual field offices based on field office-specific SMC timing trends. According to Flight Operations officials, they analyzed air marshals’ scheduled standby shift start times and actual start times both before and after these changes and concluded that the changes were reducing start time variance. For example, they found that between October 28, 2018, and November 24, 2018—a period during which they report using one standby shift—approximately 46 percent of actual standby shift start times deviated from scheduled start times by 4 or more hours. Between June 9, 2019, and July 6, 2019, after FAMS Flight Operations Division officials reported having expanded to multiple standby shifts and adjusted start times for individual offices, FAMS officials found that approximately 33 percent of actual standby shift start times deviated from scheduled start times by 4 or more hours. Flight Operations Division officials stated that these changes have reduced the frequency of SMCs covered by air marshals not in standby status.
Our analysis of FAMS data on SMC-related schedule changes shows that FAMS reduced the need to make changes to the schedules of air marshals who were not on recovery or standby shifts in order to staff SMCs. Additionally, Flight Operations Division officials stated that they continue to monitor data on SMC start times to identify the optimal standby shift start times to reduce scheduling unpredictability.

Improved coordination with field offices. In April 2019, FAMS management issued guidance aimed at improving coordination between the Mission Operations Center and field offices to reduce schedule unpredictability. First, the guidance requires that the Mission Operations Center obtain field office approval prior to adjusting an air marshal’s standby shift start time by more than 2 hours in order to staff an SMC. Second, in situations where FAMS receives an SMC request with more than 24 hours’ notice and there are no available air marshals scheduled to standby, Mission Operations Center and field office personnel are to use air marshals scheduled to recovery shifts (if they are available and at the field office’s discretion) before pulling air marshals from non-SMC missions to cover the request. According to FAMS management, this latter change is intended to reduce the number of non-SMC missions dropped to cover SMCs.

FAMS Monitors Some Schedule Information, But Does Not Monitor Whether Air Marshals’ Work Hours Are Consistent with Scheduling Guidelines

FAMS Monitors Some Schedule Information and Air Marshals’ Shifts Were Generally Consistent with Scheduling Guidelines

FAMS management and Flight Operations Division personnel monitor some information about air marshals’ planned and actual schedules. According to Flight Operations Division officials, they routinely monitor average scheduled shift length, average actual shift length, and average scheduled rest for domestic and international missions through monthly field office-specific reports.
These officials stated that field office SACs and other FAMS management officials use the reports to understand characteristics like the mission tempo in each field office. Our analysis of air marshals’ work hours as recorded on their time sheets demonstrated that air marshals’ shift lengths were generally consistent with scheduling guidelines for selected roster periods, but in each period a few shifts were not. Additionally, our analysis of air marshals’ regular days off showed that air marshals generally received 8 days off per roster period—consistent with FAMS scheduling guidelines—for the periods we analyzed. The details of that analysis are presented in appendix II.

Domestic missions. Generally, FAMS schedules air marshals to shifts that range between 6.5 and 10 hours on days that they fly domestic missions, but the Mission Operations Center has the authority to extend shift lengths to 12 hours. During the four roster periods we reviewed, air marshals’ domestic mission shifts were generally shorter than 10 hours. Specifically, during the 28-day roster periods we examined in fiscal year 2019, we estimate that air marshals exclusively worked shifts lasting 10 hours or less approximately 87 percent of the time. Air marshals worked one or more shifts that extended beyond the scheduling guideline of 10 hours about 13 percent of the time. For example, during the 28-day roster periods we examined in fiscal year 2019, we estimate that air marshals worked at least one shift between 10 hours and 12 hours about 10 percent of the time and worked at least one shift that was greater than 12 hours approximately 3 percent of the time. See figure 3 for the results of our analysis of domestic mission shifts.

International missions. Scheduling guidelines for international missions vary based on factors like mission destination, and some missions are not subject to a maximum duration.
Given the guideline variation for international missions, we examined actual international missions against the highest international mission shift length specified by the guidelines—18 hours—as well as guidance that requires the Mission Operations Center to consider scheduling alternatives when a delay causes an international mission shift to last beyond 20 hours. Air marshals generally worked in accordance with guidelines for international missions. Specifically, we found that air marshals generally worked shifts that lasted fewer than 18 hours during the four roster periods we analyzed. During the 28-day roster periods we examined in fiscal year 2019, we estimate that air marshals exclusively worked shifts lasting 18 hours or less approximately 71 percent of the time. Air marshals worked one or more shifts lasting more than 18 hours about 29 percent of the time. For example, during the 28-day roster periods we examined in fiscal year 2019, we estimate that air marshals worked at least one shift between 18 and 20 hours approximately 24 percent of the time and worked at least one shift greater than 20 hours about 11 percent of the time. See figure 4 for the results of our analysis of international mission shifts.

FAMS Does Not Monitor Whether Air Marshals’ Work Hours Are Consistent with Guidelines

FAMS management’s monthly reports on average shift lengths do not provide insight into the extent to which air marshals are working hours consistent with scheduling guidelines. For example, FAMS management reports for the roster periods we analyzed for fiscal years 2018 and 2019 showed that the average domestic mission shift lasted between about 6.5 and 7.5 hours. While these average times are below the 10-hour guideline for domestic mission shifts, these data are not granular enough to determine whether any air marshals worked shifts that exceeded scheduling guidelines.
With regard to international missions, because FAMS’s guidelines vary more widely depending on the specifics of the mission, a single average of all international mission durations is even less useful in determining the extent to which air marshals’ work hours were consistent with applicable guidelines. For example, one FAMS management report stated that the average international mission shift length between October 29, 2017, and November 25, 2017—the first period we examined in fiscal year 2018—was 12 hours and 55 minutes. Although this average exceeds the scheduling guideline of 12 hours for international mission shifts to North and Central American destinations that do not include an overnight layover, it is less than the guideline of 15 hours for international mission shifts to North and Central American destinations that do include an overnight layover. As a result, the average shift length would not have made clear how often guidelines were being observed. FAMS’s scheduling guidelines allow for exceptions to accommodate operational needs, but more information on actual work hours could improve FAMS management’s insight into how air marshals’ quality of life is being balanced against mission needs. For example, FAMS management’s reports could include other statistics that would provide more insight into air marshals’ domestic mission shifts, such as minimum or maximum actual shift lengths or the extent of variation across actual shift lengths. Flight Operations Division officials explained that they do not monitor other statistics that could provide more insight into actual work hours because they had not identified a need to do so, but they stated that they could monitor such statistics and that more information could be helpful.
Standards for Internal Control in the Federal Government requires that management use quality information to achieve the entity’s objectives by, for example, processing its data into quality information that management uses to make informed decisions. Without monitoring the extent to which air marshals’ shifts and rest periods are consistent with scheduling guidelines, FAMS management is not well positioned to determine if scheduling guidelines are serving their purpose of balancing air marshals’ quality of life with FAMS’s operational needs to execute its mission, nor can it determine the extent to which air marshals are working beyond the guidelines. As a result, the agency may not be able to successfully manage risks of potentially decreased alertness and focus when air marshals perform their duties.

FAMS’s Scheduling Protocols Are Unclear to Supervisors and Staff

Air Marshals Do Not Have Access to Scheduling Guidelines

FAMS has not made its scheduling guidelines available to all air marshals. During our visits to a non-generalizable sample of field offices, many FAMS personnel—including field office management, SFAMs, and air marshals—stated that they did not have access to scheduling guidelines. Rather, several air marshals stated that they learned of the scheduling guidelines through discussions with immediate supervisors and interactions with the Mission Operations Center. Air marshals in two field offices we visited stated that they had asked for a copy of the guidelines but were never provided one. Air marshals told us it would be helpful to have access to the guidelines so that they can understand how FAMS schedules its shifts. When we asked why the guidelines were not available to employees, Systems Operation Control Section officials reported that they were previously unaware that the field office SACs did not have access to the guidelines.
In response, in June 2019, they provided Field Operations Division leadership with a document outlining the guidelines for distribution to field office SACs. However, according to Systems Operation Control Section officials, they did not explicitly direct the field office SACs to further disseminate the guidelines to air marshals in their respective field offices. As of July 2019, Systems Operation Control Section officials were not aware to what extent the document was disseminated beyond the field office SACs, if at all. FAMS scheduling guidelines are intended to balance mission needs with air marshals’ quality of life. As discussed above, these guidelines include specific parameters for shift length and rest periods when air marshals fly missions. Further, exceptions to these guidelines are permitted to meet operational needs. Standards for Internal Control in the Federal Government provides that management should implement control activities, such as FAMS scheduling guidelines, and that it is helpful for management to communicate them to personnel so they can implement them for their assigned responsibilities. Furthermore, the FAMS-commissioned Harvard sleep and fatigue study states that policies concerning work hours and scheduling need to be well communicated. Without access to the scheduling guidelines, air marshals and their supervisors may not be aware of management’s intended balance between mission needs and air marshals’ quality of life. Further, they may not feel empowered to request schedule changes that may be needed to ensure air marshals are sufficiently rested to carry out their mission.

Some Supervisors Are Unaware of Their Authority to Adjust Air Marshals’ Schedules

Some field office SFAMs we spoke to in our discussion sessions were not clear about protocols that require Mission Operations Center personnel to obtain their approval before making certain adjustments to air marshals’ schedules.
FAMS protocols state that the Mission Operations Center can extend an air marshal’s domestic mission shift to 12 hours or reduce rest following a domestic shift to 10 hours. However, the Mission Operations Center must first obtain the approval of a field office SFAM before extending an air marshal’s domestic mission shift beyond 12 hours or reducing rest below 10 hours. SFAMs we discussed this issue with during our six site visits had varying levels of knowledge about their authority or involvement in approving such changes. For example, individual SFAMs in two field offices we visited told us they were aware of the requirements, but in two other field offices, SFAMs stated that they did not have any say in adjustments to air marshals’ schedules, regardless of the circumstances. Some SFAMs were also unaware of field offices’ authority to remove air marshals from missions on short notice. FAMS protocols authorize, and Systems Operation Control Section officials confirmed, that field office SFAMs can remove air marshals from a mission the day of or the day before the mission. However, SFAMs in each of the four field offices where we discussed the topic were unaware of this authority. Some SFAMs had the understanding that management officials—either field office SACs or other management officials outside of field offices—or Mission Operations Center personnel must make these decisions. Systems Operation Control Section officials explained that field office SFAMs do not have access to the Standard Operating Procedure that sets forth these protocols, nor has the Section provided written guidance on the protocols. Systems Operation Control Section officials stated that they have not given supervisors access to these protocols or written guidance on them because they chose to communicate the protocols through verbal briefings.
Systems Operation Control Section officials explained that they follow the protocols and had not previously seen a need to share them more widely, but acknowledged that doing so would increase transparency. It is important that SFAMs have access to protocols outlining their role and authority so that they can carry out their jobs. Standards for Internal Control in the Federal Government provides that management should implement control activities through policies by, for example, communicating policies and procedures to personnel so that they can implement the control activities for their assigned responsibilities. Furthermore, the FAMS-commissioned Harvard sleep and fatigue study states that policies concerning work hours and scheduling need to be well communicated. Providing SFAMs with written information on these protocols that details their involvement and authorities in making decisions that affect air marshals’ quality of life would provide clarity for SFAMs, whom we found to be uncertain about their authorities in this regard.

Some FAMS Employees Filed Discrimination Complaints and TSA and FAMS Have Taken Some But Not All Planned Steps to Prevent Discrimination

FAMS Employees Filed 230 EEO Complaints Over Three Years

From fiscal years 2016 through 2018, FAMS employees filed 230 EEO complaints with TSA’s Civil Rights Division (CRD), though employees may have reported additional discrimination complaints through other means. CRD is responsible for receiving and handling FAMS employees’ EEO complaints. During this 3-year period, the number of EEO complaints CRD handled regarding FAMS employees was proportional to the number of complaints handled for employees across all of TSA, relative to the size of each workforce. Specifically, in 2018 the ratio of total complaints to total number of employees was 2.8 percent for FAMS and 2.1 percent for TSA.
Although reporting to CRD is the only means for FAMS employees to file an EEO complaint, they may choose to report discrimination to their manager or to other entities, including the DHS OIG or TSA’s Anti-Harassment Program, which is overseen by the National Resolution Center. The Anti-Harassment Program can take immediate action intended to stop the discriminatory behavior by, for example, separating the employees involved in the complaint. FAMS employees may also choose to report to CRD as well as to one or more of the other available means. Once an employee files a complaint with any of these entities, agency officials are to follow processes to investigate the allegation to determine whether the complaint is substantiated. See appendix III for a description of the four venues through which FAMS employees can raise discrimination complaints, including what is known about the number and nature of complaints received through each venue in fiscal years 2016 through 2018. We found that some FAMS employees may choose not to report an allegation of discrimination to any of these venues. For example, air marshals in five of the six field offices we visited indicated that they may not file a discrimination complaint because they were concerned about retaliation. Additionally, air marshals in three discussion sessions indicated that some FAMS employees may prefer to handle an allegation of discrimination themselves by speaking directly with the person involved. Further, representatives of a FAMS employee group and the professional association representing federal law enforcement officers we met with stated that some FAMS employees may choose not to report an allegation of discrimination to any of these venues. As such, the 230 EEO complaints may underestimate the total number of incidents of alleged discrimination within FAMS.
TSA and FAMS Have Taken Some Steps to Prevent Discrimination, But FAMS Has Not Fully Implemented Various Efforts Planned in 2012

DHS, TSA, and FAMS Have Provided Training and Created Venues for Discussion to Prevent Discrimination

FAMS’s 2012 action plan identified a number of existing TSA and FAMS efforts already in place at that time—such as providing certain training—and stated FAMS’s commitment to continuing and improving these existing efforts with a goal to enhance organizational and cultural initiatives regarding diversity and equal employment opportunities. Consistent with FAMS’s 2012 plan, DHS, TSA, and FAMS have provided EEO and diversity training to FAMS employees and offered several forums for air marshals to raise concerns about discrimination.

Training. Since 2003, DHS and TSA have required all employees—including air marshals—to complete training intended to, among other things, prevent discrimination. These include mandatory annual DHS training, TSA new-hire training, and some optional TSA training. For example, since 2003, TSA has required new employees to complete a course called Introduction to Civil Rights, which provides an overview of civil rights, EEO laws, and TSA’s related complaint process. In addition, as of December 2006, DHS has required all employees to complete annual No FEAR Act training to inform employees of their rights and responsibilities with regard to discrimination in the workplace. FAMS management officials told us that educating the workforce about discrimination is important because education promotes and opens communication avenues within FAMS that were previously underutilized. TSA has also provided training beyond these required courses. For example, CRD officials told us that at the start of each fiscal year they work with FAMS management to identify FAMS field offices where concerns about discriminatory behavior have been raised.
CRD officials stated that they have then provided in-person, tailored trainings based on the field offices' needs. Additionally, in August 2019, TSA's Anti-Harassment Program provided FAMS leadership with an overview of the program, including definitions of harassment and manager and employee responsibilities. According to CRD and FAMS officials, they are developing additional courses that could help prevent discrimination, including civility courses, coaching through conflict, and crucial conversations training. Venues. FAMS has venues for air marshals to raise issues, such as concerns about discrimination. Specifically, in 2002 FAMS created "Field Office Focus Groups"; in 2006 FAMS established an Ombudsman position; and in 2011 FAMS created EEO points of contact in FAMS field offices. FAMS Field Office Focus Groups. During the early ramp-up of FAMS after September 11, 2001, FAMS established an internal initiative called "Field Office Focus Groups" to provide a venue for employees to raise issues, such as concerns about discrimination, to field office management through group discussions. We reviewed Field Office Focus Group meeting minutes from all 20 field offices from October 2016 through December 2018. During these meetings, discrimination-related issues were discussed in two field offices. For example, in one focus group air marshals inquired about their recourse when they believe management has retaliated against them. FAMS Ombudsman. FAMS established a FAMS-specific Ombudsman position in 2006. The FAMS Ombudsman is responsible for answering inquiries about agency policies and helping employees identify options to resolve workplace concerns, such as concerns about discrimination. The FAMS Ombudsman we met with told us they have fielded inquiries about discrimination but do not keep records on the number of inquiries.
The Ombudsman estimated that between May 2018, when they assumed the position, and July 2019, the office received, on average, eight calls per month from air marshals on various topics, some of which involved inquiries about discrimination. In these cases, the Ombudsman explained, they informed individuals of the resources available to them as well as the 45-day time frame to file an EEO complaint with CRD if they chose to do so. Air marshals in five of the six field offices we visited reported being aware of the Ombudsman position. EEO Points of Contact in all FAMS field offices. According to FAMS officials, in 2011, FAMS began to establish EEO points of contact in FAMS's 20 field offices. FAMS officials reported that these points of contact are intended to provide ready, onsite referrals to CRD staff and to facilitate access to information about EEO and diversity training opportunities. As of August 2019, FAMS officials told us that all FAMS field offices have at least one EEO point of contact and several have more than one.

FAMS Planned Additional Steps to Prevent Discrimination, But Has Not Fully Implemented Them

The FAMS 2012 action plan highlighted additional efforts to prevent discrimination, but FAMS has not fully implemented or maintained these efforts. According to FAMS leadership, they have not fully implemented or continued the efforts set forth in the 2012 action plan because changeover in FAMS leadership since 2012 resulted in a loss of focus on implementing the plan. For example, the plan called for each FAMS field office to develop an EEO/diversity action plan to strengthen the workplace environment. Each plan was to emphasize four principles: leadership commitment, recruitment and resourcing, career development and enhancement, and employee engagement/workplace culture. As of July 2019, none of the field offices had a diversity action plan in place.
In addition, the 2012 action plan called for FAMS to continue to convene diversity focus groups. In 2010 and 2011, FAMS conducted 10 diversity focus groups to solicit input from the workforce related to recruitment, retention, discrimination, harassment, and retaliation, according to FAMS officials. However, FAMS has not held these diversity focus groups since 2011. Further, in 2007, TSA established what is now the Diversity and Inclusion Change Agents Council, which serves as a venue where TSA employees, including air marshals, can promote diversity. In the 2012 action plan, FAMS planned to have all levels of FAMS employees, including senior leadership such as SACs and Assistant Supervisory Air Marshals in Charge, represented on the council. However, as of 2019, two air marshals were the only FAMS representatives on this council. Concerns about discrimination persist among air marshals. For example, FAMS employees' fiscal year 2018 FEVS responses related to issues of discrimination were consistently less positive than those of DHS and TSA employees overall, although the proportion of EEO complaints among FAMS's workforce is similar to that of TSA's as a whole. Specifically, DHS estimates that less than half (44 percent) of FAMS employees feel they can disclose a suspected violation without fear of reprisal, a smaller estimated percentage of positive responses than among TSA and DHS employees. Similarly, a smaller estimated percentage of FAMS employees believe that prohibited personnel practices are not tolerated (FAMS 54 percent, TSA 60 percent, and DHS 62 percent). Further, as described earlier, air marshals in five of the six field offices we visited raised concerns about potential retaliation for reporting discrimination. For example, one air marshal expressed concern that they might be given undesirable travel schedules as retaliation if they filed a complaint.
Finally, according to employee exit surveys conducted by TSA in fiscal years 2012 through 2018, 26 of the 342 FAMS respondents who completed a survey (about 8 percent) cited diversity or inclusion barriers in the workplace as a reason for leaving. Given these indications of concerns about discrimination in the FAMS work environment, it is important that FAMS management reaffirm and strengthen its efforts to prevent discrimination. The Equal Employment Opportunity Commission's Management Directive 715 requires agencies to take appropriate steps to establish a model EEO program and identifies six essential elements of such a program, including demonstrated commitment from agency leadership and proactive prevention of unlawful discrimination. Further, it is DHS's stated objective to develop and maintain a high-performing workforce, in part by promoting a culture of transparency, fairness, and equal employment opportunity throughout the DHS workforce. By renewing its commitment to the goals and initiatives in its 2012 action plan, such as updating and following through on that plan, FAMS management can demonstrate leadership commitment to the prevention of discrimination. Doing so could better ensure that FAMS proactively addresses and reduces concerns of discrimination among its workforce.

Conclusions

Federal air marshals are deployed worldwide to protect civil aviation against the risk of terrorist violence. Although FAMS has taken some steps to address air marshals' quality of life issues, FAMS management does not have information about the number and proportion of the workforce who are medically qualified, which limits its understanding of the workforce's ability to fulfill its duties.
Further, FAMS has not assessed the overall health of its workforce by analyzing available data. Doing so would allow it to identify health and fitness trends or risks among its workforce, take steps to mitigate those risks, make informed workforce planning decisions, and prioritize employee welfare to ensure that it deploys a workforce capable of fulfilling its national security mission. FAMS does not monitor the extent to which air marshals' actual work hours are consistent with scheduling guidelines, limiting its ability to determine whether air marshals' quality of life is being balanced with the agency's operational needs. FAMS also has not shared these scheduling guidelines with air marshals or provided field offices with guidance outlining authorities and procedures for changing air marshals' schedules. Sharing these guidelines would improve the ability of air marshals and their supervisors to address quality of life issues related to long shifts and inadequate rest. Finally, although FAMS has taken steps to prevent discrimination, FAMS employees have continued to file discrimination complaints, indicating that at least the perception of discrimination persists. By taking steps to reaffirm and strengthen its efforts to prevent discrimination, such as updating and following through on its 2012 action plan, FAMS management could better ensure it proactively addresses and reduces concerns of discrimination, consistent with DHS's objective of developing and maintaining a high-performing workforce through fairness and equal employment opportunity.

Recommendations for Executive Action

We are making the following six recommendations to FAMS:

The Executive Assistant Administrator / Director of FAMS should identify and utilize a suitable system that provides information about air marshals' medical qualification status.
(Recommendation 1)

The Executive Assistant Administrator / Director of FAMS should develop and implement a plan to assess the health and fitness of the FAMS workforce as a whole, including trends over time. (Recommendation 2)

The Executive Assistant Administrator / Director of FAMS should identify and implement a means to monitor the extent to which air marshals' actual shifts and rest hours are consistent with scheduling guidelines. (Recommendation 3)

The Executive Assistant Administrator / Director of FAMS should provide all air marshals access to scheduling guidelines, including workday length and rest periods. (Recommendation 4)

The Executive Assistant Administrator / Director of FAMS should disseminate or otherwise provide supervisory air marshals access to guidance that outlines authorities and procedures for changing an air marshal's work schedule. (Recommendation 5)

The Executive Assistant Administrator / Director of FAMS should take steps to reaffirm and strengthen efforts to prevent discrimination by, for example, updating and following through on its 2012 action plan and renewing leadership commitment to the plan's goals. (Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of our report to DHS for comment. In written comments, which are included in appendix IV, DHS concurred with our six recommendations and described steps it plans to take to address them, including estimated time frames for completion. With regard to our first recommendation that FAMS identify and utilize a suitable system that provides information about air marshals' medical qualification status, DHS officials stated that FAMS is evaluating case management software to track this information and plans to pursue funding for this effort in fiscal year 2021. This action, if fully implemented, should address the intent of this recommendation.
With regard to our second recommendation that FAMS develop and implement a plan to assess the health and fitness of the FAMS workforce as a whole, DHS officials stated that FAMS recently established a team to develop a plan for assessing workforce health and wellness issues. Adopting and implementing a plan that assesses the health and fitness of the FAMS workforce as a whole should address the intent of this recommendation. With regard to our third recommendation that FAMS identify and implement a means to monitor the extent to which air marshals' actual shifts and rest hours are consistent with scheduling guidelines, DHS officials stated that FAMS will begin tracking air marshals' actual hours and examine the extent to which air marshals' actual and scheduled hours vary. This information could be helpful, for example, in assessing air marshals' schedule predictability. However, to address the intent of this recommendation, FAMS would need to monitor the extent to which air marshals' actual work and rest hours are consistent with FAMS's scheduling guidelines. With regard to our fourth recommendation to provide all air marshals access to scheduling guidelines, DHS officials stated that FAMS will provide air marshals ongoing access to the guidelines. Similarly, with regard to our fifth recommendation to provide supervisory air marshals access to guidance that outlines authorities and procedures for changing an air marshal's work schedule, DHS officials stated that FAMS will provide supervisors ongoing access to scheduling authorities and procedures. These actions, if fully implemented, should address the intent of these recommendations. With regard to our sixth recommendation that FAMS reaffirm and strengthen efforts to prevent discrimination, DHS officials stated that FAMS plans to review the goals of its 2012 action plan and develop steps to strengthen efforts to prevent discrimination.
If fully implemented, these actions should address the intent of this recommendation. We are sending copies of this report to the appropriate congressional committees and to the Acting Secretary of Homeland Security, the Administrator of TSA, the Executive Assistant Administrator / Director of FAMS, and other interested parties. In addition, this report is available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or russellw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report are to (1) assess the extent to which the Federal Air Marshal Service (FAMS) has taken steps to address air marshals' health concerns; (2) assess the extent to which FAMS has taken steps to address air marshals' concerns about their work schedules; and (3) describe what is known about the number of discrimination complaints FAMS employees have reported to the Transportation Security Administration (TSA) and FAMS, and assess the extent to which TSA and FAMS have taken steps to prevent discrimination in the workplace. To address all three objectives, we visited a non-generalizable sample of six FAMS field offices: Atlanta, Georgia; Dallas, Texas; Los Angeles, California; Newark, New Jersey; New York, New York; and Seattle, Washington.
We chose these field offices to capture variation in the following factors: the number of special mission coverage trips (SMCs) in fiscal year 2018; the rate of schedule changes by field office in fiscal year 2018; the rate of equal employment opportunity complaints by field office for fiscal years 2015 through 2018; the number of employees in each field office as of September 2018; field office location; and results from the Office of Personnel Management's (OPM) 2018 Federal Employee Viewpoint Survey (FEVS). To obtain a range of perspectives on quality of life issues, work schedules, and discrimination within FAMS, we conducted discussion sessions with air marshals, as well as separate discussion sessions with supervisory federal air marshals (SFAMs), in each field office. We conducted a total of 10 discussion sessions with air marshals. We initially conducted one discussion session with approximately 15 air marshals in the Seattle field office and one with approximately 30 air marshals in the Dallas field office. Following these sessions, we developed a standardized list of questions used to facilitate two discussion sessions, with approximately 10 air marshals each, in each of the remaining four field offices (Atlanta, Los Angeles, Newark, and New York). We also conducted a total of six discussion sessions exclusively with SFAMs—one session in each field office that we visited. The discussion session in the Seattle field office consisted of two SFAMs, while all others consisted of approximately 10 SFAMs. Following the discussion sessions with SFAMs in the Seattle and Dallas field offices, the team developed a standardized list of questions that a moderator used in the sessions with SFAMs in the remaining four field offices.
For discussion sessions with air marshals and SFAMs, we requested that each field office make available a diverse group of participants, including women and minorities. These were semi-structured discussions, led by a moderator who followed a standardized list of questions and allowed for unstructured follow-up questions. The results from these group discussions are not generalizable to air marshals or SFAMs who did not participate in them, but they provided a range of perspectives from about 125 air marshals and about 50 SFAMs across the six FAMS field offices we visited. In each field office we visited, we also interviewed field office management officials about these same topics. Finally, we interviewed field office operations staff in four of the six field offices about their role in scheduling air marshals. To obtain additional perspectives on these topics, we interviewed a TSA employee group (Women Executives at FAMS); a professional association representing federal law enforcement officers, including air marshals (the Federal Law Enforcement Officers Association); and the FAMS Ombudsman. To address the first objective about air marshals' health concerns, we reviewed prior research on FAMS workforce issues, including our past reports on challenges associated with FAMS's workforce; a 2012 FAMS-commissioned Harvard Medical School study on air marshal sleep and fatigue; and reports from FAMS working groups that examined medical issues and physical fitness. To identify air marshals' current concerns about health issues, we asked air marshals about any quality of life issues they face during discussion sessions. We then performed a content analysis of the results and identified key health-related issues raised during the discussion sessions. One of our analysts conducted this analysis, tallying the number of discussion sessions in which certain health issues were discussed by air marshals.
A different analyst then checked the information for accuracy, and any initial disagreements were discussed and reconciled by the analysts. We also analyzed results of OPM's FEVS for FAMS, TSA, and DHS employees in 2018—the most recent data available at the time of our review. We analyzed FEVS question number 35, which asks survey participants whether "Employees are protected from health and safety hazards on the job." We assessed the reliability of the FEVS data by reviewing OPM's 2018 FEVS Technical Report and reviewing confidence intervals for the data points we included in this report. We determined that the data we used were sufficiently reliable for use in the analysis presented in this report. We also analyzed FAMS's workers' compensation claim data for FAMS employees for fiscal years 2013 (when FAMS reviewed air marshals' physical fitness) through 2018 (the most recent full fiscal year of data available). We assessed the reliability of the claim data by interviewing cognizant FAMS officials, obtaining information about the data systems that maintain these data, and conducting checks for missing and out-of-range values. We determined that the data we used were sufficiently reliable for use in the analysis presented in this report. To identify steps FAMS has taken to address air marshals' health concerns, we asked FAMS management, SFAMs, and air marshals we met with in headquarters and field offices to identify efforts to assess and promote air marshals' health—such as programs, policies, and practices. We reviewed documentation related to these efforts, including FAMS's policies outlining medical standards for air marshals and its Health, Fitness, and Wellness program, as well as FAMS analyses of health issues among air marshals, workers' compensation claims, and on-the-job injuries.
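The checks for missing and out-of-range values described above can be sketched in a few lines of code. This is an illustrative example only, not FAMS's or GAO's actual tooling; the record fields, claim IDs, and fiscal-year window are hypothetical, chosen to mirror the fiscal year 2013 through 2018 claim data discussed in the text.

```python
from datetime import date

# Hypothetical records mimicking workers' compensation claim data;
# the field names and values are illustrative, not an actual schema.
claims = [
    {"claim_id": "C001", "filed": date(2014, 3, 2)},   # within the study window
    {"claim_id": "C002", "filed": None},               # missing filing date
    {"claim_id": "C003", "filed": date(2022, 1, 5)},   # outside the study window
]

# Fiscal years 2013 through 2018 run from Oct. 1, 2012, to Sept. 30, 2018.
FY_START, FY_END = date(2012, 10, 1), date(2018, 9, 30)

def screen(records):
    """Partition records into usable rows and flagged rows (missing or
    out-of-range filing dates), the kind of reliability check described
    in the methodology."""
    usable, flagged = [], []
    for r in records:
        if r["filed"] is None:
            flagged.append((r["claim_id"], "missing date"))
        elif not (FY_START <= r["filed"] <= FY_END):
            flagged.append((r["claim_id"], "date out of range"))
        else:
            usable.append(r)
    return usable, flagged

usable, flagged = screen(claims)
```

In practice, flagged records would then be excluded or sent back to the data owner for correction, as the report describes for the time sheet analysis.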
For example, we examined (a) minutes from two FAMS meetings at which FAMS Medical Programs Section officials reported on medical and health issues among air marshals; (b) summary information from TSA's Occupational Safety, Health, and Environment Division describing air marshals' workers' compensation claims from fiscal years 2015 through 2018; and (c) an analysis of injuries and illnesses reported by air marshals from calendar years 2016 through 2018. We also reviewed information about FAMS practices for maintaining medical and health information about air marshals. We compared FAMS's efforts to address air marshals' health concerns to OPM strategies for human capital management and a TSA strategic planning document from June 2018. To address the second objective, examining the extent to which FAMS has taken steps to address air marshals' concerns about their work schedules, we reviewed FAMS documents outlining scheduling guidelines for shift length and rest periods, protocols for adjusting air marshals' schedules, and FAMS management reports with statistics on air marshals' planned and actual schedules. We analyzed data from FAMS's Aircrews data system on the number of SMC missions and the number of changes made to air marshals' schedules to cover SMCs between November 2016 and June 2019. We also analyzed data from FAMS's Aircrews data system on the number of scheduled standby shifts between June 2018—when FAMS began scheduling air marshals to standby shifts to staff SMCs—and August 2019. We assessed the reliability of these data by reviewing documentation regarding the source of the data and by obtaining information from knowledgeable agency officials about their accuracy and completeness. We found these data to be sufficiently reliable for use in our analysis. To identify the lengths of air marshals' shifts when they flew missions, we analyzed 808 air marshal time sheets.
We first selected four separate 28-day periods, known as roster periods, during which air marshals flew missions. Our analysis included air marshals scheduled to fly or on recovery shifts on 11 or more days during the selected roster periods. This resulted in a total of 7,981 roster periods worked by air marshals as our population of interest. To help ensure the sample included air marshals from field offices that had high rates of SMCs for each roster period, we stratified our population into eight mutually exclusive strata based on the roster period and the percentage of each field office's missions that were SMCs in each roster period. We then randomly selected a stratified sample of 101 air marshals from each roster period, proportionally allocated across the SMC-percentage strata within each roster period. Using time sheet data for these air marshals, we analyzed the length of air marshals' shifts when they flew domestic and international missions to identify shifts that were (1) consistent with or (2) exceeded scheduling guidelines. For example, we analyzed time sheets to estimate the percentage of roster periods worked by air marshals that included one or more shifts longer than 10 hours. We also analyzed time sheets to estimate the percentage of roster periods worked by air marshals that included one or more shifts between 10 and 12 hours, and to estimate the percentage that included one or more shifts longer than 12 hours. We also examined the number of air marshals' regular days off. Specifically, we analyzed air marshals' time sheets to estimate the percentage of roster periods worked by air marshals that included fewer than 8 regular days off. In performing this analysis, we did not count days as regular days off when air marshals reported receiving a regular day off but also reported time worked for the same day, unless the time worked was carryover from a prior workday.
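The stratified, proportionally allocated selection described above can be sketched as follows. This is a minimal illustration of the general technique, not the actual sampling code: the two stratum labels and population sizes are hypothetical, and only one roster period with two SMC-percentage strata is shown (the report describes eight strata across four roster periods).

```python
import random

def proportional_allocation(strata_sizes, total_sample):
    """Allocate a fixed sample size across strata in proportion to stratum
    populations, assigning any leftover units by largest fractional
    remainder so the allocations sum exactly to total_sample."""
    population = sum(strata_sizes.values())
    raw = {s: total_sample * n / population for s, n in strata_sizes.items()}
    alloc = {s: int(r) for s, r in raw.items()}
    leftover = total_sample - sum(alloc.values())
    for s in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

def draw_sample(strata, total_sample, seed=0):
    """Randomly draw a proportionally allocated sample from each stratum."""
    rng = random.Random(seed)
    alloc = proportional_allocation(
        {s: len(ids) for s, ids in strata.items()}, total_sample)
    return {s: rng.sample(ids, alloc[s]) for s, ids in strata.items()}

# Hypothetical population for one roster period: IDs of air marshals in a
# high-SMC-percentage stratum and a low-SMC-percentage stratum.
strata = {"high_smc": list(range(1200)), "low_smc": list(range(1200, 2000))}
sample = draw_sample(strata, total_sample=101)
```

With these assumed stratum sizes (1,200 and 800), the 101-unit sample splits 61/40, matching each stratum's share of the population.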
In conducting these time sheet analyses, we took steps to minimize issues that might affect data reliability. Specifically, we identified time and attendance sheets that included errors that would affect our analysis—such as those with missing values—and either excluded them or obtained corrected information from FAMS. We excluded a total of 44 of the 404 roster periods initially selected in our sample. We also performed an analysis to ensure that by excluding these time sheets we did not introduce bias into our sample. We found no evidence of bias and concluded that the sample data were sufficiently reliable for the purposes of producing population estimates. The results of our analysis are generalizable to the roster periods analyzed. To identify steps FAMS has taken to address air marshals' concerns about their schedules, we interviewed management officials from FAMS's Flight Operations Division about their efforts to (1) monitor air marshals' shifts and rest against scheduling guidelines and (2) make scheduling protocols available to staff. We compared FAMS's actions to address air marshals' scheduling concerns to two principles in Standards for Internal Control in the Federal Government related to the need to implement control activities and use quality information to achieve an entity's objectives. To address the third objective about discrimination, we reviewed FAMS, TSA, and DHS policies related to discrimination and interviewed FAMS, TSA, and DHS officials to understand how FAMS employees report discrimination complaints. Specifically, we met with officials in TSA's Civil Rights Division (CRD), TSA's Anti-Harassment Program, FAMS's Incident Activity Coordination and Trends Unit, and the DHS OIG. We also examined the number and characteristics of discrimination complaints reported by FAMS employees from fiscal year 2016 through fiscal year 2018—the most recent 3 full years of data available at the time of our review.
Specifically, we analyzed record-level data on discrimination complaints filed or reported by FAMS employees to TSA's CRD, TSA's Anti-Harassment Program, and FAMS's Incident Activity Coordination and Trends Unit. We also obtained information from the DHS OIG on individual complaints it received that involved FAMS employees and included complaints of discrimination. Generally, we analyzed the date of the complaint, type of allegation, basis of the discrimination, and outcomes. We assessed the reliability of the data from TSA's CRD, TSA's Anti-Harassment Program, and FAMS's Incident Activity Coordination and Trends Unit by interviewing cognizant TSA and FAMS officials, obtaining information about the data systems that maintain these data, and conducting checks for missing and out-of-range values. We determined that the data we used were sufficiently reliable for use in the analysis presented in this report. To examine the proportion of the FAMS and TSA workforces who alleged discrimination relative to the size of these workforces, we compared the number of complaints handled by TSA's CRD for fiscal years 2016, 2017, and 2018 to the total number of employees during the same fiscal years. We assessed the reliability of TSA's CRD data by interviewing cognizant TSA officials and obtaining information about the data system that maintains these data. We determined that the data we used were sufficiently reliable for use in the analysis presented in this report. To identify steps TSA and FAMS have taken to prevent discrimination in the workplace, we interviewed TSA and FAMS management, SFAMs, and air marshals we met with during our site visits. We then analyzed documentation related to the identified efforts, such as minutes from all 20 FAMS Field Office Focus Group meetings between October 2016 and December 2018, as well as DHS and TSA training materials related to preventing discrimination.
To identify air marshals’ current perspectives about discrimination, we asked air marshals in our discussion sessions about the processes for reporting discriminatory behavior as well as their perspectives on discriminatory behavior within FAMS. We then performed a content analysis of the results and identified key issues raised during the discussion sessions, including air marshals’ comments regarding their experiences related to retaliation for reporting discrimination. One of our analysts conducted this analysis, tallying the number of discussion sessions in which certain issues were discussed by air marshals. A different analyst then checked the information for accuracy. We then determined the extent to which certain key issues were raised among the sessions. In addition, we analyzed results of OPM’s FEVS for FAMS, TSA, and DHS employees in 2018. Specifically, we analyzed FEVS question number 17, which asks survey participants whether employees “Can disclose suspected violation without fear of reprisal.” We also analyzed FEVS question number 38, which asks survey participants whether “Prohibited personnel practices are not tolerated.” As noted above, we assessed the reliability of the FEVS data and determined that the data we used were sufficiently reliable for use in the analysis presented in this report. We also analyzed data from TSA’s employee exit survey results for FAMS employees from fiscal years 2012 through 2018—the period for which full year data were available since the DHS OIG review. Specifically, we examined the extent to which employees’ reasons for leaving included diversity or inclusion barriers in the workplace. We assessed the reliability of the exit survey data by obtaining information from TSA officials about how the data are collected. We determined that the data we used were sufficiently reliable for use in the analysis presented in this report.
We compared TSA’s and FAMS’s efforts to prevent discrimination in the workplace to the Equal Employment Opportunity Commission’s Management Directive 715. This directive requires agencies to take appropriate steps to establish a model equal employment opportunity (EEO) program and identifies six essential elements for a model EEO program. In addition, we compared TSA’s and FAMS’s efforts to DHS’s and TSA’s strategic planning documents, which both include an objective to develop and maintain a high-performing workforce. We conducted this performance audit from July 2018 to January 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Analysis of Air Marshals’ Regular Days Off

The Federal Air Marshal Service’s (FAMS) scheduling guidelines state that each air marshal is scheduled to receive a minimum of 60 hours of rest around 2 consecutive regular days off each week, or a total of 8 regular days off each 28-day roster period. FAMS Flight Operations officials stated that there are exceptions that may prevent an air marshal from being scheduled to receive 2 regular days off each week, such as international deployments that last 6 or more days and travel to and from training programs that last 6 or more days. Additionally, FAMS management officials and air marshals we interviewed stated that air marshals may be asked to cover flights for which a potentially high-risk passenger has been ticketed—known as Special Mission Coverage deployments—on their scheduled regular days off if no other air marshals are available.
Furthermore, FAMS Flight Operations officials stated that FAMS may ask air marshals to receive non-consecutive regular days off due to operational needs. We analyzed air marshals’ regular days off as recorded on their time sheets to determine the extent to which they were consistent with these scheduling guidelines. Specifically, we analyzed a generalizable sample of air marshals’ time sheets for two roster periods in fiscal year 2018 and two roster periods in fiscal year 2019. We found that air marshals generally received 8 regular days off in the roster periods we analyzed. Specifically, during the 28-day roster periods we examined in fiscal year 2019, we estimate that air marshals received 8 regular days off approximately 98 percent of the time. However, some air marshals did not receive all 8 regular days off. Specifically, during the 28-day roster periods we analyzed in fiscal year 2019, we estimate that air marshals received 7 regular days off approximately 2 percent of the time. See figure 5 for results of our analysis.

Appendix III: Description of Federal Air Marshal Service Employee Discrimination Complaints Received, by Office

There are four venues through which Federal Air Marshal Service (FAMS) employees can raise discrimination complaints. One of these venues is the Transportation Security Administration’s (TSA) Civil Rights Division (CRD), which is responsible for receiving and handling FAMS employees’ equal employment opportunity (EEO) complaints. Although reporting to CRD is the only means for FAMS employees to file an EEO complaint, they may choose to report discrimination in other venues. Specifically, they may report discrimination to their manager; TSA’s Anti-Harassment Program, which is overseen by TSA’s National Resolution Center; or the Department of Homeland Security’s (DHS) Office of Inspector General (OIG). FAMS employees may also choose to report to CRD as well as to one or more of the other available entities.
Table 1 describes what is known about the number and nature of complaints received through each venue in fiscal years 2016 through 2018.
Appendix IV: Comments from the Department of Homeland Security
Appendix V: GAO Contacts and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Claudia Becker (Assistant Director), Anne Akin (Analyst-in-Charge), Enyinnaya Aja, James Ashley, Carl Barden, Taiyshawna Battle, Edda Emmanuelli-Perez, Eric Hauswirth, Yvonne Jones, Jesse Jordan, Ellie Klein, Thomas Lombardi, Diona Martyn, Sam Portnow, Minette Richardson, Forrest Rule, Raymond Sendejas, Michael Silver, and Adam Vogt also made key contributions to this report.
Why GAO Did This Study
Since 9/11, terrorists have continued to target aircraft and airports, underscoring the ongoing threat to civil aviation and the need for effective security measures. FAMS deploys air marshals on selected flights to address such threats and is a key component of TSA's approach to aviation security. However, longstanding challenges faced by FAMS's workforce could impact its ability to carry out its mission. GAO was asked to review FAMS workforce issues. This report addresses (1) the extent to which FAMS has taken steps to address air marshals' health concerns, (2) the extent to which FAMS has taken steps to address air marshals' concerns about their work schedules, and (3) the number of discrimination complaints FAMS employees have reported and the extent to which FAMS has taken steps to prevent discrimination. GAO analyzed TSA and FAMS policies; documentation of efforts to address air marshals' quality of life issues; and FAMS data on missions, schedules, and discrimination complaints. GAO also interviewed TSA and FAMS officials, including FAMS management and air marshals in a non-generalizable sample of six FAMS field offices selected to capture a breadth of perspectives.
What GAO Found
Air marshals continue to express concerns about their health, but the Federal Air Marshal Service (FAMS) has not comprehensively assessed the health of its workforce. Air marshals in all six field offices we visited noted health issues, such as sleep deprivation, as a key quality of life concern. FAMS has taken steps to assess air marshals' individual health, such as requiring medical exams, but has not comprehensively assessed the overall health of its workforce and has not developed a plan to do so. FAMS officials stated that it would be difficult to analyze air marshals' medical records because they are not stored electronically, though they are researching options to do so.
FAMS could develop and implement a plan to analyze the employee health data it already collects to identify workforce trends, and use this information to better promote employee welfare consistent with Transportation Security Administration (TSA) leadership principles.
FAMS has taken some steps to address air marshals' concerns about their work schedules. In March 2018, FAMS revised its deployment strategy to expand coverage of certain high-risk missions that it typically learns of 72 hours in advance. Following this, changes to air marshals' schedules to accommodate these missions more than doubled. In response, FAMS altered how it staffs these missions and reports that these modifications have reduced schedule changes. FAMS also maintains shift length and rest period guidelines intended to balance mission needs with air marshals' quality of life. However, FAMS does not monitor the extent to which air marshals' actual work hours are consistent with guidelines because it has not identified a need to do so. As a result, it cannot determine how frequently air marshals work beyond guidelines and is not well-positioned to manage risks associated with long work hours.
From fiscal years 2016 through 2018, FAMS employees filed 230 discrimination complaints with TSA's Civil Rights Division, though employees may have reported additional discrimination complaints through other means. In 2012, FAMS adopted an action plan to address discrimination and has taken some steps called for in the plan, such as sustaining a FAMS Ombudsman position. However, due to a loss of management focus on the plan, FAMS has not fully implemented other planned efforts, such as holding diversity focus groups. Taking steps to reaffirm its efforts to prevent discrimination would demonstrate leadership commitment to reducing concerns of discrimination within FAMS.
What GAO Recommends
GAO is making six recommendations to FAMS, including that it implement a plan to assess the health of the FAMS workforce, monitor the extent to which air marshals' shifts are consistent with guidelines, and strengthen efforts to prevent discrimination. DHS concurred with all six recommendations.
GAO-20-273
Background
The Vast and Constantly Evolving Biological Threats
Cultivating a strong biological defense requires an understanding of a multitude of biological threats. The nature of these threats can be intentional, naturally occurring, or accidental and can be exacerbated by changes in behavior and environment. The vast and evolving biological threat landscape includes biological warfare, bioterrorism, infectious disease threats to humans and animals, crop failure, and safety and security lapses at facilities that house biological threat agents. The use of biological weapons or their proliferation by state or non-state actors presents a significant challenge to our national security, our population, our agriculture, the economy, and the environment. Despite ratification of the Biological Weapons Convention in 1975 and the end of the Cold War decades later, the threat of biological warfare persists today. For example, the State Department reported in 2019 that China, Iran, North Korea, Russia, and Syria continue to engage in dual-use or biological weapons-specific activities. Additionally, the biotechnology revolution presents opportunities to advance the life sciences, yet that same technology in the wrong hands could be used to catastrophic effect. For example, synthetic biology may lead to advances in public health, such as the development of biosensors that can permanently reside in the body to detect and treat abnormalities such as cancer. However, if used to create or combine agents into biological weapons, synthetic biology poses a significant threat. Finally, non-state actors such as terrorist organizations, domestic militia groups, and “lone wolves” have both the interest and, in some cases, the limited capacity to develop biological weapons. Biological threats can be unpredictable, as humans, animals, and plants are vulnerable to a variety of naturally occurring infectious disease and pest threats.
Urbanization, habitat encroachment, and increased and faster travel, coupled with weak health systems, increase the risk that infectious diseases will spread rapidly across the globe. Pandemic influenza presents a constant threat to global public health and exemplifies the susceptibility of humans to diseases with animal origins. For example, in 2009 when an H1N1 influenza virus emerged with a new combination of genes from swine, avian, and human influenza viruses, it demonstrated how the genetic compositions of some viruses naturally change, meaning most people have little or no immunity to the new virus. This led to a global pandemic caused by the novel H1N1 influenza virus (see fig. 1). Other examples of zoonotic disease threats—infectious diseases that are transmissible from animals to humans—include Ebola, Zika, and Eastern Equine Encephalitis. Biological threats may also arise from changes in human behaviors. Habitat loss and human encroachment on rural and wildlife environments are bringing populations of humans and animals into closer and more frequent contact. These changing relationships with animals increase the risk of disease transmission among people, pets, livestock, and wildlife. Other changes in human behavior—such as vaccine hesitancy, mass migration, and conflict—put stress on health care systems around the world. In an increasingly interconnected world, building biological defenses globally can help maintain health security domestically, because a disease threat anywhere is a disease threat everywhere. Biodefense capabilities are also needed to address changes in the environment that have the potential to negatively affect human health and the agriculture industry. As we reported in October 2015, climate change may contribute to the spread of vector-borne diseases that are transmitted to humans by animals, including invertebrate animals such as mosquitoes and ticks.
Additionally, extreme climate conditions, such as sustained drought and heat waves, can affect crops and livestock, and excess precipitation can also increase flooding events and erosion, and decrease soil quality. Losses of livestock and crops from the biological threats of disease, pests, or extreme climate conditions could have devastating effects on trade and the national economy. Finally, in many countries around the world, pathogens are stored in laboratories that lack appropriate biosecurity measures, where they could be diverted by actors who wish to do harm. Advances in science and technology bring revolutionary cures and progress, but they also have the potential to facilitate intentional misuse. As we reported in 2016, some laboratories do not have appropriate biocontainment or biosafety protocols. These shortfalls could lead to outbreaks through laboratory-acquired infections or pathogens accidentally being released into the environment.
GAO’s Prior Work on Biodefense-Related Challenges and Enterprise Risk Management
We have previously reported on a wide range of biodefense-related efforts carried out by multiple federal departments and agencies. Since 2009, we have identified broad, cross-cutting issues in leadership, coordination, and collaboration that arise from fragmentation throughout the complex interagency, intergovernmental, and intersectoral biodefense enterprise. For example, our past work has identified a number of key challenges related to the nation’s ability to detect and respond to biological incidents that transcend what any one agency can address on its own. They include: (1) assessing enterprise-wide threats, (2) determining optimal biodetection technologies, (3) building and maintaining emerging infectious disease surveillance, (4) establishing situational awareness and data integration, and (5) enhancing biological laboratory safety and security.
(Additional detail on these challenges and our related reports is presented in appendix II.) The complexity and fragmentation of roles and responsibilities across numerous federal and nonfederal entities present challenges to ensuring efficiency and effectiveness across the entire biodefense enterprise. We called for a national biodefense strategy and focused leadership because addressing these issues is a difficult and complex challenge that crosses mission areas, federal departments, and sectors. Additionally, we have reported on enterprise risk management principles that can support enterprise-wide decision-making under complex and uncertain conditions. Enterprise risk management is a strategy for helping policymakers make decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty. While often applied at an agency level, we have also recognized that the size and complexity of certain issues, such as homeland security, involve multiple partners, which can add another degree of difficulty to enterprise risk management. For certain areas, like biodefense, where activities cut across multiple federal and nonfederal entities, applying enterprise risk management principles becomes more challenging but equally important, because it helps the responsible parties make decisions that promote effectiveness and maximize opportunities to better manage risk. Enterprise risk management in the larger interagency and intergovernmental context does not replace what each agency needs to do to pursue its own core missions. Rather, it allows agency decision makers to consider their missions and the alternatives they have to meet them from an enterprise-wide perspective.
In this manner, decision makers can consider the risk-reduction contributions their actions make to the larger enterprise—for example by selecting alternatives that meet their immediate needs and provide collateral benefits to some other part of the enterprise—as one of many factors in individual agency decision-making.
National Biodefense Strategy and National Security Presidential Memorandum-14
On September 18, 2018, the White House released the National Biodefense Strategy and characterized it as a new direction to protect the nation against biological threats, stating that its implementation would promote a more efficient, coordinated, and accountable biodefense enterprise. The Strategy’s five high-level goals are to help enable efficient assessment of, prevention of, preparation for, response to, and recovery from natural, accidental, or deliberate biological threats. When the National Biodefense Strategy was released, the White House issued NSPM-14: Presidential Memorandum on the Support for National Biodefense. According to the Strategy, NSPM-14 “creates a dedicated mechanism, housed within the U.S. Department of Health and Human Services, to coordinate federal biodefense activities and assess the effectiveness with which the National Biodefense Strategy’s goals and objectives are being met.” NSPM-14 details a governance structure and implementation process to achieve the Strategy’s goals. The governance structure includes the creation of a Biodefense Steering Committee chaired by the Secretary of HHS, and includes seven other agency heads as members: the Attorney General; the Secretaries of the Departments of State, VA, DOD, USDA, and DHS; and the Administrator of the EPA. Additionally, NSPM-14 required the formation of a Biodefense Coordination Team to assist the Biodefense Steering Committee in carrying out its responsibilities.
Administratively located within HHS, the Biodefense Coordination Team consists of staff from multiple agencies with biodefense responsibilities and is designed to assist the Biodefense Steering Committee in monitoring and coordinating implementation of the Strategy (see fig. 2). The Biodefense Coordination Team may convene working groups and maintain awareness of biodefense activities across the biodefense enterprise and has responsibility for establishing policies, processes, and procedures to govern its activities, subject to approval by the Biodefense Steering Committee. NSPM-14 also establishes that the Assistant to the President for National Security Affairs will serve as the lead for policy coordination and review, providing strategic input and policy integration for federal biodefense efforts. NSPM-14 also outlines an implementation process that sets requirements and deadlines for the interagency group to achieve the Strategy’s goals. It also requires the heads of agencies identified by the Biodefense Steering Committee as having responsibilities pertaining to biodefense to review the Strategy every 2 years and revise it as appropriate.
Strategy-Related Efforts Are Designed to Support an Enterprise-Wide Approach, but Implementation Challenges Could Limit Long-term Success
The National Biodefense Strategy and associated plans bring together all the key elements of federal biodefense capabilities, which presents an opportunity to identify gaps and consider enterprise-wide risk and resources for investment trade-off decisions. However, challenges with planning to manage change, limited guidance and methods for analyzing capabilities, and lack of clarity about decision-making processes, roles, and responsibilities while adapting to a new enterprise-wide approach could limit the success of the Strategy’s implementation.
The Strategy and Associated Plans Create a Framework to Assess Enterprise-Wide National Biodefense Capabilities for the First Time
The National Biodefense Strategy and its associated plans bring together the efforts of federal agencies with significant biodefense roles, responsibilities, and resources to address intentional, accidental, and naturally occurring threats. The Strategy and plans also provide processes for collecting and analyzing comprehensive information across the enterprise, an important step toward the kind of enterprise-wide strategic decision-making we have called for. For example, our prior work identified the need for a strategy to help ensure efficiency and effectiveness across the entire biodefense enterprise by connecting strategic approaches and investment decisions across disparate but interrelated functions within the biodefense enterprise. These functions are (1) understanding and defining threats, (2) taking action to prevent and protect against attacks and significant national and international infectious disease outbreaks, (3) employing new and existing techniques and technologies to more quickly detect biological events, and (4) preparing to respond and recover. Consistent with characteristics of national strategies and leading practices for interagency collaboration, the National Biodefense Strategy clearly articulates the purpose of the Strategy and the scope of the problem, as well as high-level goals and objectives to guide implementation. As shown in figure 3, the Strategy’s five high-level goals comprise a new framework that incorporates the distinct biodefense functional areas and includes the different sources of biological threat—accidental, intentional, and naturally occurring. It is within this framework that national biodefense capabilities will be assessed across the enterprise.
According to the Strategy, its aim is to bring together a single, coordinated effort to orchestrate activities across the United States Government to protect the American people from biological threats. The Strategy defines the term “biothreat” broadly to include all sources of major catastrophic risk, including naturally occurring biological threats, the accidental release of pathogens, and the deliberate use of biological weapons. Officials from three of the eight participating agencies that we interviewed noted that this is the first time that the federal government has identified activities across the whole biodefense enterprise and assessed resources and gaps to address threats regardless of source (naturally occurring, accidental, or intentional). The Strategy also established common terminology, giving the agencies a shared basis for identifying biodefense-related programs and activities, which is consistent with our national strategy criteria and our leading collaboration practices. Developing common terminology can help to bridge organizational cultures when multiple agencies with varying missions work together for a common purpose. The Strategy also contains goals, objectives, and over 240 separate activities that cover the range of actions that comprise national biodefense capabilities, which provides a high-level framework to begin to guide agencies toward a shared vision for outcomes.
NSPM-14 Established Processes to Help Agencies Identify Gaps and Set Budget Priorities
While the Strategy outlined high-level goals and objectives to help define priorities, NSPM-14 established a structure and process by which the federal agencies can assess enterprise-wide biodefense capabilities and needs, and subsequently develop guidance to help inform agency budget submissions.
NSPM-14 lays out, in broad strokes, a process to identify biodefense efforts and assess how current resources support the Strategy, how existing programs and resources could better align with the Strategy, and how additional resources, if available, could be applied to support the goals of the Strategy. As shown in figure 4, this process begins with a data call in which participating agencies document all biodefense programs, projects, and activities within their purview in a biodefense memorandum. As part of this process, NSPM-14 calls for the Biodefense Coordination Team, in coordination with NSC staff through the NSPM-4 process, to develop and collectively agree on metrics, milestones, end states, and roles and responsibilities. For each of the objectives within the Strategy where agencies have roles and responsibilities, HHS directed participating agencies, as part of a data call, to identify any resource, authority, policy, science and technology, or coordination gaps against those end states and propose solutions where needed. As outlined in NSPM-14, the Biodefense Coordination Team is then to use the information submitted by the individual agencies to identify gaps, shortfalls, redundancies, and challenges across the enterprise. Finally, NSPM-14 directs officials with biodefense responsibilities to create joint policy guidance in coordination with the Assistant to the President for National Security Affairs through the NSPM-4 process—to be updated on an annual basis—that can help guide individual agency budget submissions. The process outlined in NSPM-14 is intended to lead to a cross-government assessment of federal biodefense capabilities and is consistent with our past calls for a strategy that can guide investment across the whole enterprise and with leading practices for interagency collaboration and enterprise risk management.
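As an illustration of the kind of roll-up the Biodefense Coordination Team could perform on agency submissions, the sketch below groups hypothetical program entries by Strategy objective and flags objectives with no supporting programs; the agency names, objective labels, and funding figures are all invented for illustration, not drawn from NSPM-14 or actual submissions.

```python
from collections import defaultdict

# Placeholder objective labels, not the actual objective text
# from the National Biodefense Strategy.
OBJECTIVES = ["risk_awareness", "prevention", "preparedness",
              "response", "recovery"]

# Hypothetical data-call entries: each maps a program to the
# objective it supports and its estimated funding (in millions).
submissions = [
    {"agency": "Agency A", "objective": "preparedness",
     "program": "Lab network", "funding": 12.0},
    {"agency": "Agency B", "objective": "preparedness",
     "program": "Stockpile", "funding": 30.0},
    {"agency": "Agency C", "objective": "response",
     "program": "Field teams", "funding": 8.0},
]

def roll_up(entries):
    """Group programs and funding under each objective so that
    unsupported objectives (gaps) and heavily subscribed ones
    (possible redundancies) stand out."""
    by_objective = defaultdict(list)
    for entry in entries:
        by_objective[entry["objective"]].append(entry)
    gaps = [obj for obj in OBJECTIVES if obj not in by_objective]
    totals = {obj: sum(e["funding"] for e in progs)
              for obj, progs in by_objective.items()}
    return gaps, totals

gaps, totals = roll_up(submissions)
```

In this toy data, three objectives have no supporting programs at all, which is the sort of signal the cross-agency assessment is meant to surface.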
We have previously reported that defining shared outcomes—and processes by which to achieve them—and developing mechanisms to monitor and evaluate results can reinforce accountability for collaborative efforts. Working together to develop a set of draft metrics, milestones, and end states requires interagency participants to establish a shared vision for outcomes, and metrics and milestones serve as accountability mechanisms. NSPM-14 describes how agencies will consider the agreed-upon joint policy guidance developed by agencies with biodefense responsibilities and the White House when developing their budgets. Specifically, according to NSPM-14, these agencies shall include in their respective annual budget requests to OMB information on the programs within the budget requests that support the implementation of the Strategy and conform to budget formulation requirements established by OMB, including specified funding levels. Establishing goals, objectives, and desired end states that cut across the federal government also creates a foundation for effective enterprise risk management. As we have previously reported, a shared understanding of the scope of the risks enables leaders across the enterprise to align agency goals and objectives and consider their own missions and purposes within a more expansive and comprehensive understanding of threats and opportunities. In our interviews, officials from participating agencies stated that the NSPM-14 processes constitute a new approach to identifying gaps and setting budget priorities for biodefense, and that they viewed the approach as generally well designed. Specifically, officials from six of the eight participating agencies said that the process for identifying gaps was somewhat well-designed. Officials from the other two participating agencies said that this process was very well-designed.
Agency officials provided several reasons for optimism about the Strategy and the processes outlined in NSPM-14, including that:
- They provide a holistic picture of current biodefense programs and activities, which creates government-wide visibility so that gaps can be identified.
- They create a forum to discuss potential gaps and biodefense responsibilities, which has not existed previously.
- They contain a strong overarching architecture to map existing efforts, identify gaps, and inform future revisions (as necessary).
Additionally, agency officials said that the assessment and joint policy guidance development process outlined in NSPM-14 offered some promise for helping agencies identify the resources necessary to achieve the Strategy’s goals, which is consistent with our national strategy criteria. Specifically, officials from five of the eight agencies said the process is somewhat well-designed to accomplish these goals. Officials from the other three agencies said the process is very well-designed to ensure the appropriate identification of resources and investments necessary to achieve the goals outlined in the Strategy. For example, officials from three agencies said it would help the implementation of the Strategy succeed where previous efforts failed because it is designed to allow the Strategy’s priorities to drive budget decisions. However, officials from all of the agencies we interviewed, even those with the most optimistic views on the leadership and governance structure design, tempered their responses with the caveat that implementation is in such early stages that it remains to be seen how effective these structures will actually be once tested.
Implementation Challenges Could Hinder Enterprise-Wide Biodefense Efforts
Although the Strategy and associated plans establish the foundation for enterprise risk management, in particular by bringing together all of the functional biodefense areas across different sources of threat, we and biodefense agency officials identified multiple challenges that could affect the Strategy’s implementation. These include challenges individual agencies faced during the initial data collection process as well as a lack of planning and guidance to support an enterprise-wide approach. In our analyses and interviews, we found that parts of the process in the first year were underdeveloped, raising questions about (1) the plans to support change management practices and ensure that early-implementation limitations do not become institutionalized in future years’ efforts; (2) guidance and methods for meaningfully analyzing the data; and (3) the clarity of decision-making processes, roles, and responsibilities.
Challenges with Adapting to New Processes May Have Led to Incomplete Data Collection
During our interviews, agency officials reported challenges they faced in the first year’s data collection effort with (1) staffing and organizational resources within individual agencies, (2) quantifying biodefense activities, and (3) technology glitches. These challenges may have led to incomplete data collection, but are not wholly unexpected given that they occurred in the context of adapting to the cultural change that this kind of enterprise-wide approach to managing risk represents, while implementing new processes and procedures. We have previously reported that leaders of successful transformations seek to learn from best practices and create a set of systems and processes that are tailored to the specific needs and circumstances of the new organization.
However, the agencies involved in implementing the Strategy do not have a plan that includes change management practices that can help prevent these challenges from being carried forward into future efforts, and help reinforce enterprise-wide approaches, among other things. Staffing and organizational resources. During our interviews, one challenge that arose involved having the personnel and expertise needed to complete the initial effort to document biodefense programs, projects, and activities. For example, officials from one agency told us that this data collection effort was especially challenging because policy and program managers were responsible for determining both programmatic and budgetary information, which exceeded their expertise. This agency ultimately had to bring in non-biodefense personnel—including from the comptroller’s office—to identify programs and resources to complete the information request. Officials from three of the eight agencies stated that staffing and organizational resource limitations also posed a challenge to the data collection process. For example, officials from one agency said that the agency does not have full-time staff assigned to the effort. Instead, it was seen as a collateral duty competing with regular priorities, which reduced the time devoted to identifying the necessary information. Quantifying biodefense activities. Officials we interviewed also highlighted the challenge of quantifying biodefense-related activities. Specifically, officials from four agencies noted that agencies without specific biodefense line items in their budgets have had difficulty fully quantifying how much their agency invests in biodefense-related activities. 
To help agencies attempt to capture and quantify this information in a consistent way, the Biodefense Coordination Team developed guidance to assist agencies in estimating the percentage of their chemical-biological-radiological-nuclear (CBRN) defense, all- hazards preparedness, and agriculture programs and activities, among others, that are specifically related to biodefense. Nevertheless, officials from two agencies said that distinguishing the biodefense-specific activities within their CBRN defense or all-hazards activities and budgets was inherently challenging, which in turn required officials to invest additional staff and time into the effort. Technology glitches. Officials we interviewed also cited challenges with the technology used to collect data. For example, officials from two agencies said that they had experienced glitches with the OMB Max Information System, which the Biodefense Coordination Team guidance directed them to use for the data collection effort. They stated that the technology issues prevented them from entering biodefense budget numbers in a timely manner. Officials noted that an integrated platform dedicated to biodefense enterprise needs would enhance their collaboration, which is consistent with our work on interagency collaboration that states technology is one means of establishing compatible processes for working across interagency boundaries. HHS officials are aware of the technology challenges and said they are collecting feedback and identifying ways to improve the data collection and analytical tool for future data collection efforts. These challenges with resources, identification of budget activities, and technology occurred in the context of the individual agencies and officials adapting to new procedures and a broader cultural shift from how they have approached their biodefense missions in the past. 
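The percentage-based estimation described under quantifying biodefense activities might work along the following lines; the program names, budget amounts, and biodefense shares below are hypothetical, not values from the Biodefense Coordination Team's guidance.

```python
# Hypothetical programs whose budgets cover more than biodefense;
# each "biodefense_share" is an invented estimate of the
# biodefense-specific fraction of the program's budget.
programs = [
    {"name": "CBRN defense R&D",
     "budget_millions": 100.0, "biodefense_share": 0.40},
    {"name": "All-hazards preparedness grants",
     "budget_millions": 250.0, "biodefense_share": 0.15},
    {"name": "Agricultural pest programs",
     "budget_millions": 60.0, "biodefense_share": 0.50},
]

def estimated_biodefense_total(programs):
    """Apply each program's estimated biodefense share to its
    budget and sum the results, in millions of dollars."""
    return sum(p["budget_millions"] * p["biodefense_share"]
               for p in programs)

total = estimated_biodefense_total(programs)
```

The point of the exercise is consistency: if every agency applies share estimates in the same way, the resulting totals can be aggregated across the enterprise, even though each share remains a judgment call.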
Officials told us that because of the learning involved the first time through the process and the 2018 government shutdown, coupled with the tight time frames set forth in NSPM-14, agencies may not have submitted complete or detailed information about their biodefense programs. For example, officials at one large agency told us they treated the first year as a learning experience and that in the coming years, when agencies have sufficient time to respond to the data call, the quality of the data submitted should improve. Some officials we interviewed voiced concern that this first-year effort could set a poor precedent for these activities in future years if the challenges are not acknowledged and addressed. For example, an official noted that committing to the first year’s results as the “baseline” for future years of the Strategy’s implementation could compound or institutionalize the issues encountered in the first year. Officials cautioned against a “garbage in, garbage out” situation, meaning the output of any analysis would only be as good as the quality of the data fed into that analysis. As agency officials described their data collection efforts, it was clear that the focus in this first year was on meeting the time frames established in NSPM-14 to identify existing biodefense efforts and that not all processes had been fully developed prior to the data collection effort. OMB staff acknowledged that there were challenges in the first year’s data collection effort, and said data quality would likely improve in future years as agencies adjust their internal structures to suit the demands of the NSPM-14 process. Officials from HHS and OMB staff stressed that this process will be iterative, with the first year being primarily about outlining the existing biodefense landscape.
Our prior work on organizational transformations states that incorporating change management practices improves the likelihood of successful reforms and notes that it is important to recognize agency cultural factors that can either help or inhibit reform efforts. We have also reported that identifying cultural features of the originating components, prior to or early in the transformation process, can help leadership gain a better understanding of their beliefs and values. Incorporating this type of change management practice can help educate agencies to better understand the varying missions and how those missions support the broader enterprise-wide effort. We have also noted the importance of communication and obtaining feedback from participants to help promote ownership for the transformation. This type of approach to managing risk across a multi-agency, multi-sectoral enterprise like biodefense is complex and novel. During our interviews, agency officials recognized a need for change management practices to support this effort in future years. Agency officials we interviewed noted that the process for the identification of biodefense resources and activities across the federal government outlined by NSPM-14 could be “transformational” for the biodefense enterprise and approached the data collection process in good faith, but said that it will take time to get right. The biodefense agencies are currently assessing the activities and challenges of the first year of implementation, and they plan to develop an after-action report on lessons learned. HHS has conducted a survey and interviews to collect this information, and the material is being analyzed, but the lessons-learned document is not yet final.
HHS has not worked with the other biodefense agencies, however, to undertake an intentional effort to manage key cultural aspects of the enterprise-wide approach—such as communication and education mechanisms to help bridge organizational cultures, promote ownership of the transformation, and emphasize awareness of joint national security responsibilities. Further, HHS has not worked with the other biodefense agencies to establish feedback and monitoring mechanisms or processes that can help identify implementation challenges and develop solutions to address those challenges, particularly early implementation issues that might threaten the efficacy of the effort if they are institutionalized going forward. A systematically developed plan for managing change could help ensure effective planning to sustain and advance transformation in the early years. Such a plan could address (1) institutionalizing learning and feedback mechanisms that allow for corrective action and ensure that issues that arise in early implementation—for example, incomplete or unreliable data—do not become entrenched in a way that plagues future years’ efforts; and (2) establishing a communication and education strategy to reinforce collaborative behaviors and enterprise-wide approaches, and to emphasize accountability for shared national security missions, outcomes, and procedures. The Strategy Implementation Efforts Lack Clear Methods Guidance and Plans to Help Ensure the Ability to Perform Meaningful Analysis We found a lack of clear procedures and planning to help ensure that the Biodefense Coordination Team is prepared to analyze the data, once it has been collected, in a way that leads to recognition of meaningful opportunities to leverage resources in efforts to maintain and advance national biodefense capabilities.
In particular, HHS (1) has not documented guidance and methods for analyzing the data, including but not limited to methods and guidance for how to account for the contribution of nonfederal capabilities; and (2) does not have a resource plan for staffing and sustaining ongoing efforts. Methods and guidance for analyzing data. We found that the processes for the Biodefense Coordination Team to analyze the results of all the individual agency data submissions and identify priorities to guide resource allocation were not agreed upon or documented prior to the agency efforts and continue to lack specificity and transparency. At the time of our interviews, agency officials were in the midst of compiling and assigning budget numbers to their programs, projects, and activities. Officials we spoke with expressed uncertainty about how the information would be used. For example, officials from four agencies said they were uncertain about fundamental elements of the implementation process, including how information gathered will be used to identify gaps and set priorities. The overarching purpose of the analysis described in NSPM-14 is identification of gaps, shortfalls, and redundancies to support the goals and objectives of the Strategy. However, NSPM-14 does not specifically articulate what is meant by these terms. In response to our question about how the analysis was to be conducted, the Office of the Assistant Secretary for Preparedness and Response—the HHS office responsible for leading the Biodefense Coordination Team—described a general process that reflects the high-level description laid out in NSPM-14. HHS officials also stated that the Biodefense Coordination Team had consulted with experts in budget, planning, and evaluation while developing the methodology. However, HHS has not documented specific guidance and methodologies to help ensure transparency and accountability across the interagency and consistency in the Biodefense Coordination Team’s analysis. 
Additionally, the initial effort to collect information on all programs, projects, and activities focused on existing federal activities and did not include a complete assessment of biodefense capabilities at the nonfederal level. Processes for soliciting nonfederal capabilities that contribute to the biodefense enterprise and are necessary to support the Strategy’s implementation are not articulated in NSPM-14. Moreover, the guidance document that agencies used for the data call stated that the Biodefense Coordination Team—in coordination with National Security Council and OMB staff—was to, among other things, use the information provided by the agencies to analyze the extent to which current U.S. Government resources support the goals and objectives of the Strategy. Officials from two agencies also said that not gathering information from the private sector and other existing biodefense working groups was a limitation in the information-gathering process for this first year. Officials said these entities provide valuable subject matter expertise and including input from them in the future could help identify gaps across the biodefense enterprise. Some agencies included information about their work to support nonfederal stakeholders in their data collection effort, for example, by listing their grant programs or cooperative agreements. In addition, during our interviews, officials from all eight agencies described efforts to involve nonfederal partners when developing the Strategy, and many described outreach efforts to obtain information since the Strategy’s release. For example, HHS issued a notice in the Federal Register, and the Biodefense Coordination Team held a summit related to the implementation of the National Biodefense Strategy to engage nonfederal stakeholders.
However, the Biodefense Coordination Team was not explicitly required to analyze nonfederal resources, and there was no guidance that would help ensure agencies consistently and systematically included the contributions of nonfederal capabilities. In 2011, we reported that few of the resources required to support national biosurveillance capabilities are wholly owned by the federal government. Effective response to significant national biological incidents also relies heavily on nonfederal resources and capabilities. Because nonfederal entities own many of the resources and capabilities needed to achieve the goals and objectives outlined in the Strategy, assessing the baseline and identifying investment needs for a national biodefense capability necessarily involves assessing nonfederal entities’ ability to support a national capability. Officials from one of the agencies initially tasked with developing the biodefense strategy said the Biodefense Coordination Team needs to develop engagement structures with nonfederal partners because, currently, there is not a system in place to get everyone’s views or learn what is going on outside the federal government. Our enterprise risk management work calls for agencies to identify and assess risks to be able to select among risk reduction alternatives. Enterprise risk management requires good information and analysis to enable officials to make informed trade-off decisions across alternatives. Although the NSPM-14 process is designed to enable this kind of assessment and selection, it will not be as effective without complete information at the risk identification stage. Effective enterprise risk management implementation starts with agencies establishing a customized program that fits their specific organizational mission, culture, operating environment, and business processes. In our guide for designing evaluations, we called for plans to analyze data in ways that allow for valid conclusions to be drawn.
Although the NSPM-14 guidance provides a high-level process that serves as a solid foundation for an effort as complex as managing risk across the entire biodefense enterprise, it does not provide the kind of specific guidance that can help all the involved agencies ensure they are operating from a common set of procedures that fits the particular needs of this effort. Furthermore, an analysis that cannot consistently account for the contribution of nonfederal capabilities does not reflect the true enterprise operating environment and limits the selection of alternatives available for managing risk. Clear and specific documentation of methodologies and procedures for analysis—including guidance on the methods to account for nonfederal capabilities—would provide better guidance for agencies that submit information for the assessment, assurance of more complete information to assess the state of national capabilities, and better overall transparency, accountability, and consistency. Staffing, supporting, and sustaining ongoing efforts. Officials we interviewed expressed concern about the resources that the Biodefense Coordination Team had available to it, both in the first year and on an ongoing basis. According to officials from five of the eight agencies, for the team to be most successful, it would need to be staffed by detailees from the participating agencies. However, officials we spoke with told us that not all agencies were able to provide a full-time detailee to help support the office. Without a dedicated liaison to the Biodefense Coordination Team, agencies may have less access to information and more limited influence over the iterative process. We have previously reported that agencies need to identify how interagency groups will be funded and staffed.
HHS, which serves in a leadership role on the Biodefense Steering Committee, identified in its fiscal year 2020 budget request $5 million for the resources necessary to help carry out its administrative functions for implementing the National Biodefense Strategy. However, HHS appropriations for fiscal year 2020 did not include the $5 million HHS requested. In addition, in our work on leading practices for agency reform efforts, we stated that having a dedicated implementation team that has the capacity—including staffing and resources—can help ensure successful transformation. However, officials from multiple agencies reported that the initial planning for the staffing and responsibilities for the Biodefense Coordination Team had not been finalized. Without a plan to help ensure resources and mitigate resource challenges for ongoing efforts, the Biodefense Coordination Team risks not having the capacity it needs to conduct meaningful analysis, which would undermine the vision created by the Strategy and NSPM-14. Processes and Roles and Responsibilities for Making Joint Decisions Lack Clarity and Are Not Fully Developed The governing bodies overseeing the National Biodefense Strategy’s implementation—the Biodefense Steering Committee and Biodefense Coordination Team—did not clearly document key components of the assessment process and roles and responsibilities for joint decision-making in the first year of NSPM-14 implementation. This raises questions about how these bodies will move from an effort to catalog all existing activities to decision-making that accounts for enterprise-wide needs and opportunities. For example, officials from multiple agencies were not certain how the group would make joint decisions regarding priority setting and the allocation of resources, how the group would assign new biodefense responsibilities if gaps were identified, and to what extent the Biodefense Steering Committee could enforce budgetary priorities, if at all.
Process for leveraging or directing resources. We found a lack of shared understanding and agreement about how the interagency process would work to align resources toward any identified gaps and reconfigure resources for any identified redundancies or inefficiencies. To address needs for new appropriations, NSPM-14 lays out a process to identify the need for additional resources to support the goals of the Strategy and how agencies will consider the joint policy guidance in their budget requests to Congress, but this coordination process also remains ambiguous and untested. OMB staff said the 2022 budget cycle would be the first year that agencies consider the joint policy guidance to inform their budget submissions, as envisioned by the Strategy and NSPM-14 process, as that guidance is still being developed. Officials from four agencies expressed reluctance to redirect resources away from their core missions to better support needs identified at the enterprise level. When asked about the process outlined in NSPM-14, officials from only one of the eight agencies we interviewed said that the governing bodies were well-positioned to assign new responsibilities in response to identified gaps. Further, officials we interviewed noted that new responsibilities or activities may be difficult to implement without additional appropriations or authorities approved by Congress, or they would compete with an agency’s other priorities. When discussing their understanding of the process for prioritization and determining which agencies require what resources to help implement the Strategy, officials from four agencies referenced the NSPM-4 process (within the White House) to help guide this process. NSPM-14 also references NSPM-4, as noted above, and states the Biodefense Steering Committee seeks to reach consensus on decisions, and should any disagreements arise, the issue will be addressed through the NSPM-4 process.
Through this process, the Assistant to the President for National Security Affairs serves as the lead for policy coordination and review to provide strategic input and facilitate policy integration for federal biodefense efforts. When we asked HHS officials for more specific decision-making guidance, they continued to cite the existing processes and directives for interagency decision-making. However, we found that neither of these Presidential memorandums detailed specific decision-making principles or steps for reaching consensus or even for raising decision points about how to best leverage or direct resources across the enterprise in response to gaps and inefficiencies. Similarly, agency officials we interviewed were not clear how this process would work, how decisions would be made, or how agencies would agree to take on new responsibilities to bridge gaps to achieve the Strategy’s goals. Roles and responsibilities. Similarly, the governing bodies have not fully defined the roles and responsibilities for making enterprise-wide decisions that affect individual agency budgets and for enforcing enterprise-wide budget priorities. NSPM-14 directs the heads of agencies to monitor, evaluate, and hold accountable their agencies for implementation of the Strategy, and describes how agencies will develop their budgets with consideration of the agreed upon joint policy guidance developed by the agencies and the White House. However, as with other parts of the NSPM-14 implementation process, the specific roles and responsibilities for directing and enforcing budget decisions lack detail and specificity. Additionally, officials from four agencies stated that the charter for the Biodefense Coordination Team has not been finalized, further delaying the articulation of roles and responsibilities and the ability to establish a shared agenda and common operating picture. As a result, some officials remain skeptical of the effectiveness of any decisions made.
For example, officials from four agencies said the Biodefense Steering Committee does not have the authority to decide how individual agencies in the broader biodefense enterprise should allocate resources or prioritize programs. Officials we spoke with also provided examples of how this part of the implementation process requires attention and will from stakeholders outside the Biodefense Steering Committee, including the National Security Council staff, OMB, and the Congress. For example, officials from two agencies said turnover within the National Security Council staff had contributed to a lack of consistent leadership from the White House, which created a “lapse in momentum” and disrupted the implementation process. Additionally, officials said that key parts of the implementation process, such as the finalization of metrics, milestones, and end states, as well as agreement on the federal agency roles and responsibilities for the biodefense activities articulated in the Strategy, had not been approved by the National Security Council staff. As of January 2020, these documents had not received National Security Council staff approval, as the process for the development of metrics, milestones, and end states is considered ongoing, which could lead to inefficiencies and delay effective implementation of the Strategy’s goals. Finally, officials we interviewed also discussed Congress’s key role as part of the regular federal budget process in determining agency appropriations. For example, officials from two agencies said it will be hard to predict whether the budget component expressed in NSPM-14 to assess and prioritize biodefense programs and activities will achieve its intended outcome.
Some agency officials also believed the process to use joint policy guidance to inform annual budget submissions would closely resemble the existing annual budget process, as agencies will continue to submit their proposed budgets and wait for Congress to make appropriation decisions. However, we have previously reported that sustained congressional attention helps ensure that agencies continue to achieve progress resolving complex issues. We previously reported that determining the sources and types of resources needed and where those resources should be targeted are key decisions that effective national strategies should support. We also reported that effective national strategies should help clarify implementing organizations’ relationships in terms of leading, supporting, and partnering—in the context of the Strategy, that includes how enterprise-wide decisions about leveraging or directing resources to fill gaps and reduce inefficiency will be made and by whom. Such gaps could include gaps in policy, programming, or funding. Similarly, our previous work has found that articulating and agreeing to a process for making and enforcing decisions can improve the clarity surrounding a shared outcome, and that articulating these agreements in formal documents can strengthen agency commitment to working collaboratively and provide the overall framework for accountability and oversight. Moreover, a key aspect of enterprise risk management is creating a foundation that will enable participants to consider and prioritize alternatives. This prioritization can be based on a number of factors, such as the degree of risk reduction alternatives afford and the cost and difficulty to implement them. However, to do this at the enterprise level, the interagency participants need to agree on processes, roles, and responsibilities for enterprise-wide decision-making.
This is particularly important in the context of enhancing efficiency and effectiveness in a broad mission space like biodefense, where there is a wide array of threats and the threat landscape continually evolves. Uncertainty around the mechanisms to identify enterprise-wide priorities, along with the lack of clearly documented and agreed upon processes, roles, and responsibilities for joint decision-making, jeopardizes the Strategy’s ability to enhance efficiency and effectiveness of the nation’s biodefense capabilities. In the absence of clearly articulated and agreed upon processes and procedures for joint decision-making to leverage or direct resources across agency boundaries in order to enhance efficiencies, agencies run the risk of continuing to work in stovepiped mission spaces and collecting information that does not serve its intended purpose. Full development and documentation of the processes, roles, and responsibilities for leveraging or directing resources across the enterprise in response to identified gaps and inefficiencies would enhance transparency and clarity for future years’ efforts and help establish a common operating picture that enables trade-offs across agency missions. Conclusions The National Biodefense Strategy, released in September 2018, and the establishment of interagency governance and budgeting mechanisms to help implement the Strategy constitute a promising new approach to establishing a transformational enterprise-wide endeavor that meaningfully enhances the effectiveness and efficiency of government-wide biodefense efforts. These efforts include establishing a framework to collect and compare biodefense programs, projects, and activities across the federal government, which could facilitate enterprise-wide decision-making and budget trade-off decisions to help ensure the most efficient use of the nation’s biodefense resources.
However, these efforts represent the start of a process and a cultural shift that may take years to fully develop. During the first year of implementation, agencies have faced numerous challenges that must be overcome to ensure long-term implementation success. While agencies remain optimistic about the potential benefits of this new approach, it is imperative that additional steps be taken to ensure that the challenges experienced early on are not institutionalized and that there is an intentional communication, education, and feedback effort to reinforce collaborative behaviors and enterprise-wide accountability for national security missions. A plan that includes change management practices to help bridge agency cultures and missions, such as efforts to reinforce collaborative behaviors and enterprise-wide approaches, can help ensure agencies continue to refine their interagency efforts, adapt to changes, and respond effectively to challenges along the way. In addition, without clear methods and guidance that articulate how all relevant information should be analyzed, including ensuring nonfederal roles, responsibilities, and resources are accounted for in the assessment, the Biodefense Coordination Team’s ability to effectively use the information to support enterprise risk management will be limited. Moreover, without a plan to help ensure resources for sustaining ongoing institutional support, the Biodefense Coordination Team risks not having the capacity it needs to conduct meaningful analysis and support decision-making. Finally, without the development and documentation of the processes, roles, and responsibilities for joint decision-making regarding the identification of priorities and for raising decisions about resource alignment across agencies, it will be difficult to sustain an enterprise-wide approach to managing risk across the biodefense enterprise.
These actions could help guide agencies toward a common operating picture and shared understanding of the efforts needed beyond their individual missions. The intersection of human, animal, plant, and environmental health, as well as the nexus to the national security and economic sectors, represents challenges that no single agency can address alone. The National Biodefense Strategy was written to help link these efforts, and additional planning and guidance would help enable the agencies to achieve the Strategy’s goals. Recommendations for Executive Action We are making the following four recommendations to the Secretary of HHS:

The Secretary of HHS should direct the Biodefense Coordination Team to establish a plan that includes change management practices—such as strategies for feedback, communication, and education—to reinforce collaborative behaviors and enterprise-wide approaches and to help prevent early implementation challenges from becoming institutionalized. (Recommendation 1)

The Secretary of HHS should direct the Biodefense Coordination Team to clearly document guidance and methods for analyzing the data collected from the agencies, including ensuring that nonfederal resources and capabilities are accounted for in the analysis. (Recommendation 2)

The Secretary of HHS should direct the Biodefense Coordination Team to establish a resource plan to staff, support, and sustain its ongoing efforts. (Recommendation 3)

The Secretary of HHS should direct the Biodefense Coordination Team to clearly document agreed upon processes, roles, and responsibilities for making and enforcing enterprise-wide decisions. (Recommendation 4)

Agency Comments We provided a draft of this report to HHS, USDA, DOD, DHS, State, VA, Justice, EPA, the National Security Council staff, and OMB for review and comment.
In its written comments, which are reproduced in appendix III, HHS concurred with our four recommendations and provided additional information on the steps the agency has taken or plans to take to address our recommendations. To address recommendation 1 for the Biodefense Coordination Team to establish a plan that includes change management practices, HHS reported that it had implemented change management practices to include strategies for feedback, communication, and education. Specifically, the letter describes plans to institutionalize an after-action survey following the interagency data collection effort each year and a communications and outreach plan that was informed by multiple sources of stakeholder input. In technical comments, officials also described meetings across different components of the participating agencies that the Biodefense Coordination Team has held to help bridge organizational cultures and promote ownership. These actions, if implemented effectively, are important steps toward addressing the intent of our recommendation. At the same time, it is important to recognize the extent to which the enterprise-wide approach—making resource decisions in the context not only of each agency’s separate mission and authorities, but also to further a shared national security mission—represents a cultural shift. In technical comments, HHS officials acknowledged that opportunities exist to continue to enhance cultural aspects of the enterprise-wide approach and noted that the participation of all the agencies will be important. In addition, VA, State, and EPA—in technical comments and written responses—commented on the ability of the Biodefense Steering Committee and Biodefense Coordination Team to drive enterprise-wide decision-making. They noted challenges like the limitations in these bodies’ authority to direct action and the difficulty of achieving consensus across so many actors.
(See Department of Veterans Affairs’ letter reproduced in appendix IV.) HHS also concurred with recommendation 2 about clear documentation of guidance and methods for analyzing the data collected from the agencies, including ensuring that nonfederal resources and capabilities are accounted for in the analysis. However, in its written response, HHS reiterated the assessment steps it had already described during our review but did not provide additional documentation containing more concrete and detailed methods for the analysis. HHS noted the Biodefense Coordination Team’s limited responsibilities to address nonfederal resources in the annual assessment, as described in NSPM-14. HHS also expressed in its technical comments that NSPM-14 does not charge the Biodefense Coordination Team with analyzing or accounting for nonfederal capabilities in any formal or specific way. We recognize the challenges involved with assessing nonfederal capabilities, but disagree with HHS’s characterization of the Biodefense Coordination Team’s responsibilities. According to NSPM-14, the foundation for the United States Government’s role in the biodefense enterprise is the National Biodefense Strategy and its implementation plan. The memorandum further states that agency biodefense activities shall be conducted consistent with the National Defense Authorization Act for Fiscal Year 2017 (NDAA), which provides that the strategy is to include an articulation of related whole-of-government activities required to support the strategy. We have previously reported that parts of the biodefense enterprise, such as the resources that support surveillance capabilities, are heavily reliant on nonfederal resources. Moreover, the National Biodefense Strategy states that it is broader than a federal government strategy; rather, it is a call to action for various nonfederal entities.
Therefore, to fully address our recommendation, we continue to believe that NSPM-14 notwithstanding, HHS should develop and document clear guidance for the data collection and analytical methods that will support the NDAA’s call for articulation of the capabilities that support national biodefense and recommendations for strengthening those capabilities. Regarding recommendation 3 for the Biodefense Coordination Team to establish a resource plan to staff, support, and sustain its ongoing efforts, HHS concurred, and said it requested $5 million in no-year funding in its fiscal year 2020 budget request to support the administrative management of the National Biodefense Strategy. However, as we reported, the HHS appropriations for fiscal year 2020 did not include the $5 million HHS requested, and officials from multiple agencies reported that the initial planning for the staffing and responsibilities for the Biodefense Coordination Team had not been finalized. To fully address our recommendation, HHS will need to establish a resource plan that describes how the Biodefense Coordination Team plans to staff, support, and sustain its efforts. Finally, HHS concurred with recommendation 4, for the Biodefense Coordination Team to clearly document agreed upon processes, roles, and responsibilities for making and enforcing enterprise-wide decisions. In its response, HHS pointed to the authority NSPM-14 gives the Biodefense Coordination Team to establish governance, policies, and procedures, subject to the approval of the Biodefense Steering Committee. HHS stated that the Biodefense Coordination Team had developed charters and guidance to govern its activities, but said that these documents were still pending the approval of the Biodefense Steering Committee. We will continue to evaluate these actions to determine the extent to which they fully address our recommendation.
To fully address our recommendation, HHS, in partnership with other participating federal agencies, should agree upon and document clear guidance, roles, and responsibilities for addressing shared national security concerns with interagency resources and solutions that transcend the mission and capabilities of the individual agencies. Irrespective of NSPM-14, clarifying decision-making processes should help the agencies identify the recommendations for improved capabilities, authorities, command structures, and interagency coordination called for by the NDAA and make incremental progress over time toward implementing those recommendations.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Departments of Health and Human Services, Agriculture, Defense, Homeland Security, State, and Veterans Affairs; the Attorney General; the Administrator of the Environmental Protection Agency; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact Chris Currie at (404) 679-1875 or CurrieC@gao.gov, and Mary Denigan-Macauley at (202) 512-7114 or DeniganMacauleyM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Analysis of the National Biodefense Strategy and Its Associated Plans against Elements Listed in Statute

The National Defense Authorization Act for Fiscal Year 2017 (NDAA) articulated eight elements to include in the required National Biodefense Strategy (Strategy). The NDAA also included a provision that we review the Strategy. As part of our analysis, we assessed the extent to which the Strategy and its associated plans incorporated the elements listed in the NDAA.
On March 14, 2019, we briefed the congressional committees identified in the NDAA on our findings, which we present here. To determine the extent to which the National Biodefense Strategy incorporated the elements established in the NDAA, three analysts and an attorney independently evaluated the Strategy and NSPM-14 against each NDAA element, recording scores on separate matrices. The reviewers used the following descriptors to assess the extent to which the Strategy included an element: Great Extent, meaning the documents explicitly cite all parts of the element, even if specificity and detail are lacking and could be improved upon; Some Extent, meaning they explicitly cite some, but not all, parts of the element; and No Extent, meaning they do not explicitly cite or discuss any parts of the element, or any implicit references are too vague or general. The analysts and attorney then convened as a panel to reconcile any differences in scoring and reach consensus. We also interviewed officials from the agencies that comprise the Biodefense Steering Committee to gain contextual information regarding the Strategy’s development, as well as to help identify any challenges that agencies faced in addressing any of the statutory elements during the development process.

As of the date of our briefing in March 2019, the National Biodefense Strategy and associated plans generally addressed most of the elements in the NDAA, and agencies continued to develop additional key components. Specifically, for five of the eight NDAA elements, the Strategy and associated plans addressed the major parts of the elements with few or no omissions. For the other three NDAA elements, some parts were still under development.

As of March 2019, the National Biodefense Strategy and Associated Plans Generally Addressed Five of Eight Elements in the NDAA

We found in March 2019 that the National Biodefense Strategy and its associated plans generally addressed five of the eight elements listed in the NDAA, even if some of these elements lacked specificity and detail.
For example, where we determined the Strategy and associated plans included an element to a great extent, we recognize that these documents reflect the intent of the required element, even if improvements could be made in future revisions. Figure 5 identifies the eight elements required by the NDAA and our assessment of the extent to which those elements were included in the Strategy and associated plans. Specifically, the Strategy and related documents include a description of biological threats and the capabilities necessary to address threats, as well as recommendations for improving current biodefense capabilities, authorities, structures, and interagency coordination.

Description of biological threats. The NDAA provides that one element to be addressed in the strategy is a description of various biological threats. The Strategy includes a description of biological threats, as well as additional contextual information about those threats and their place within the overall threat environment. For example, the Strategy describes biological warfare, bioterrorism, naturally occurring infectious diseases, and accidental exposures as significant threats.

Articulation of necessary capabilities. One element listed in the NDAA is an articulation of related or required interagency capabilities and whole-of-government activities required to support the Strategy’s priorities. The Strategy provides a list of five goals, with associated objectives and activities that articulate the capabilities necessary to fulfill the aims of the Strategy, such as the need to improve interagency capabilities. For example, one such activity describes the need to improve state, local, tribal, territorial, private sector, federal, regional, and international surveillance systems and networks to contain, control, and respond to biological incidents.
Another activity involves strengthening the ability to detect zoonotic diseases and incorporating forecasting into intelligence collection by federal agencies. This articulation of necessary capabilities addresses the NDAA element to a great extent, even though we noted that additional steps to include nonfederal capabilities in the annual assessment of programs, projects, and activities would enhance implementation efforts.

Recommendations for improving current biodefense capabilities. Another element listed in the NDAA is to identify recommendations for strengthening and improving current biodefense capabilities, authorities, and command structures. The Strategy contains descriptions of activities necessary to improve upon current biodefense efforts and to help agencies establish new means to fulfill the goals of the Strategy. NSPM-14 establishes a new governance structure (command structure) to help implement the Strategy and also includes a mechanism for continual revision of the Strategy, including recommendations for strengthening biodefense activities, based on identified needs.

Recommendations for interagency coordination. The NDAA also provided that the Strategy include recommendations for improving and formalizing interagency coordination and support mechanisms with respect to a strong national biodefense. The Strategy and associated plans address this element by establishing collaborative interagency structures—the Biodefense Steering Committee and the Biodefense Coordination Team—intended to work continually on improving biodefense. NSPM-14 also identifies a focal point for coordination among agencies—the Secretary of HHS.

Other matters identified by agencies. The final element is to include any other matters deemed necessary by the secretaries of Defense, Health and Human Services, Homeland Security, and Agriculture.
According to officials from all eight agencies, the agencies originally tasked with authoring the Strategy opened the process up to all agencies with a stake in the biodefense enterprise because they recognized those four agencies could not develop a comprehensive biodefense strategy if all partners were not included. Officials from all of the agencies on the Biodefense Steering Committee cited the inclusive nature of the drafting process as contributing to a conceptually robust Strategy. Additionally, NSPM-14 includes a requirement for the development of metrics, milestones, and end states for implementing the Strategy; officials from all eight agencies we interviewed said the interagency group had drafted them, and officials from six of the eight agencies said they were under review by the National Security Council staff.

Agencies Continued to Implement Key Elements of the Strategy and Associated Plans

As of March 2019, three of the eight elements listed in the NDAA were included only to some extent because agencies had implicitly addressed the element through their work, or had started addressing parts of the elements but not yet completed them. The main body of the report discusses some of the ongoing challenges related to the Strategy’s implementation.

Inventory and assessment of doctrine. To some extent, the Strategy addresses the element related to an inventory and assessment of all existing strategies, plans, policies, laws, and interagency agreements related to biodefense. The agencies implicitly addressed this element by incorporating existing doctrine in the process of drafting the Strategy. For example, officials at a majority of the eight agencies said that agencies deliberately wrote the Strategy in a way that reflects their ongoing priorities in the area of biodefense or takes into account existing agency policies or strategies.
The Strategy and NSPM-14 explicitly reference some existing executive orders, presidential directives, and international treaties related to biodefense, though they omit many relevant agency-level strategies, plans, policies, laws, and interagency agreements. For example, the Strategy reinforces obligations under the Convention on the Prohibition of the Development, Production, and Stockpiling of Bacteriological and Toxin Weapons and on their Destruction (Biological Weapons Convention) (1975), but does not mention HHS’s National Health Security Strategy, which informs a number of HHS programs that contribute to the biodefense enterprise. According to HHS officials, an inventory of doctrine was completed and submitted to Congress along with the transmittal of the Strategy when it was released. However, not all officials we spoke to believe this work is fully completed, and officials from several agencies said they are currently evaluating their internal policies and strategies to determine how they align with the new Strategy.

Catalogue of current activities. The NDAA also included an element related to a description of the current programs, projects, or activities of the United States Government with respect to biodefense. While the Strategy itself does not include a catalogue of such activities, the NSPM-14 process requires agencies to create this catalogue, and efforts to do so are described in the body of this report. NSPM-14 requires the Chair of the Biodefense Steering Committee to send written requests for information to agencies with biodefense responsibilities, including 17 agencies mentioned in the NSPM. According to HHS officials, agencies completed this collection of information in June 2019.
NSPM-14 directs the Biodefense Coordination Team to use the information gathered to produce an overall assessment of federal biodefense programs and to coordinate the assessment with National Security Council staff and OMB prior to its finalization and approval by the Biodefense Steering Committee. Under NSPM-14, this process will occur annually as part of the budget cycle. We characterized this element as included to some extent because efforts to complete it were underway at the time of our briefing in March 2019. Additionally, as we describe in the body of the report, we identified areas of this process that need to be clarified for future years’ efforts.

Agency roles and responsibilities. The Strategy and associated plans did not include a description of the roles and responsibilities of the Executive Agencies, including internal and external coordination procedures, in identifying and sharing information, as described in the NDAA. The Strategy’s implementation plan includes over 240 activities, but it does not assign roles and responsibilities for performing those activities. However, NSPM-14 includes a requirement to establish these roles and responsibilities, and officials from all eight agencies said agencies drafted a document assigning roles and responsibilities to each agency. This document was submitted for review to the National Security Council staff. Agency officials also discussed their engagement with nonfederal partners on the Strategy, as those partners play a vital role in the Strategy’s implementation. However, as we describe in the body of the report, more can be done to articulate the nonfederal role in implementing the Strategy. Additionally, NSPM-14 describes a governance structure and initial responsibilities for executive agencies, such as identification of a senior-level official as the focal point for all federal biodefense efforts.
However, as described in the body of this report, additional clarity is needed on specific roles and responsibilities regarding decision-making and leadership. Therefore, we consider this element addressed to some extent.

As of October 2019, the agencies had taken additional steps to address the elements listed in the NDAA. For example, the data collection on programs, projects, and activities was complete, and the assessment of those data submissions was in draft form. Additionally, the agencies had drafted metrics, milestones, and end states, as well as roles and responsibilities for the over 240 activities outlined in the Strategy’s Implementation Plan. However, neither of these documents had received final approval from the National Security Council staff, and the charter outlining roles and responsibilities for the Biodefense Coordination Team had not been finalized.

Appendix II: Descriptions of Long-standing Biodefense Challenges Previously Reported

Since 2009, we have identified broad, cross-cutting issues in leadership, coordination, and collaboration that arise from fragmentation throughout the complex interagency, intergovernmental, and intersectoral biodefense enterprise. The biodefense enterprise is the whole combination of systems at every level of government and the private sector that contribute to protecting the nation and its citizens. It is composed of a complex collection of federal, state, local, tribal, territorial, and private resources, programs, and initiatives designed for different purposes and dedicated to mitigating both natural and intentional risk. In June 2019, we testified before the Subcommittee on National Security, Committee on Oversight and Reform, House of Representatives, on our past work, which has identified a number of key challenges related to the nation’s ability to detect and respond to biological incidents that transcend what any one agency can address on its own.
They include: (1) enterprise-wide threat determination, (2) biodetection technologies, (3) emerging infectious disease surveillance, (4) situational awareness and data integration, and (5) biological laboratory safety and security. Agencies have taken steps to address many of the recommendations we and others have made in these areas, and we continue to monitor ongoing efforts.

Enterprise-Wide Threat Determination Needed to Help Leverage Resources and Inform Resource Tradeoffs. We reported in October 2017 that opportunities remain to enhance threat awareness across the entire biodefense enterprise, leverage shared resources, and inform budgetary tradeoffs among various threats and agency programs. Key biodefense agencies, including DHS, DOD, HHS, USDA, and EPA, carry out activities within their own mission spaces to better understand threats and help make decisions about biodefense investments. Additionally, federal agencies in our October 2017 review had mechanisms to support specific federal activities and individual programs, or in response to specific biological incidents after they begin to unfold. However, there was no existing mechanism that could leverage threat awareness information to direct resources and set budgetary priorities across all agencies for biodefense. Without a mechanism that is able to assess the relative risk from biological threats across all sources and domains, we found that the nation may be limited in its ability to prioritize resources, defenses, and countermeasures against the most pressing threats. In June 2019, we said implementation of the National Biodefense Strategy offers the potential for the nation to progress toward more integrated and enterprise-wide threat awareness and to use that information to identify opportunities to leverage resources, but this will take time and entails a change in the way participating agencies have traditionally operated.

Challenges Determining Optimal Biodetection Technology Solutions.
We have previously reported on the challenges of determining and then implementing technologies capable of identifying biological threats in the environment. Since 2012, we have reported that DHS has faced challenges in clearly justifying the need for the BioWatch program and its ability to reliably fulfill its primary task of detecting aerosolized biological attacks. According to DHS officials, DHS is in the early stages of Biodefense 21 (BD21), a multi-year acquisition effort. DHS plans to develop requirements based on collected environmental data and input from first responders, public health officials, and other partners to determine what the replacement to BioWatch needs to be. As part of the early acquisition cycle for BD21, DHS is currently conducting a technology demonstration for trigger and sensor technology; therefore, we cannot yet determine how it will be implemented in the future or what decisions DHS will ultimately make regarding the existing BioWatch system. Additionally, in August 2017 we reported that, from a homeland security and public health perspective, threats of bioterrorism, such as anthrax attacks, and high-profile disease outbreaks, such as Ebola and emerging viruses like dengue, chikungunya, and Zika, highlight the continued need for diagnostic tests that provide early detection and warning about biological threats to humans. One option being explored is multiplex point-of-care technologies, which can simultaneously test (in minutes to a few hours) for more than one type of human infectious disease pathogen from a single patient sample (such as blood, urine, or sputum) in one run at or near the site of a patient. These technologies may be used for diagnosing different diseases, including more common diseases such as influenza, emerging infectious diseases, or diseases caused by weaponized biological agents.
Advances in biological detection technologies present opportunities to provide early detection and warning of catastrophic biological incidents, and in June 2019 we said the agencies responsible for implementing the National Biodefense Strategy will need to engage on this issue in a way that helps to drive informed investment tradeoff decisions about technology alternatives. We also recognized that the National Biodefense Strategy and its interagency governing leadership offer the potential for the nation to better define the role of detection technologies in a layered national biodefense capability to help those that pursue these technologies better articulate the mission needs and align requirements and concepts of operation accordingly.

Challenges Building and Maintaining Emerging Infectious Disease Surveillance. We have reported that establishing and sustaining biosurveillance capabilities can be difficult for a myriad of reasons. For example, maintaining expertise in a rapidly changing field is difficult, as is the challenge of accurately recognizing the signs and symptoms of rare or emerging diseases. We reported in October 2011 that funding targeted for specific diseases does not allow for focus on a broad range of causes of morbidity and mortality, and federal officials have said that the disease-specific nature of funding is a challenge to states’ ability to invest in core biosurveillance capabilities. According to federal, state, and local officials, early detection of potentially serious disease indications nearly always occurs first at the local level, making the personnel, training, systems, and equipment that support detection at the state and local level a cornerstone of our nation’s biodefense posture.
In May 2018, we reported that officials from HHS told us that their grant awards funded by annual appropriations are intended to establish and strengthen emergency preparedness and capacity building, but may not fully support the need for surge capacity that states and other jurisdictions require to respond to an infectious disease threat. Further, we reported in May 2018 that although the awards funded by supplemental appropriations have allowed state and local public health departments, laboratories, and hospitals to surge during a threat—for example, the H1N1 influenza and Zika virus outbreaks—most of the 10 nonfederal stakeholders we interviewed, as well as HHS officials, said that the timing of these awards can result in challenges to carrying out preparedness and response activities during infectious disease threats. In June 2019, we reported that how and to what extent implementation of the National Biodefense Strategy is able to efficiently leverage and effectively sustain capacity across both nonfederal and federal stakeholders will affect how prepared the nation is to more quickly gear up for whatever challenges emerge when outbreaks of previously non-endemic diseases threaten the nation. We also noted that the Strategy and its interagency governance structure offer the opportunity to design new approaches to identifying and building a core set of surveillance and response capabilities for emerging infectious diseases.

Ongoing Challenges to Fulfill Enhanced Situational Awareness and Data Integration Requirements. Our prior work has identified challenges at DHS and HHS related to the sharing, collecting, and integration of data from various federal and nonfederal agencies for their public health situational awareness and data integration efforts.
We have reported that DHS’s National Biosurveillance Integration Center (NBIC), which was created to integrate data across the federal government with the aim of enhancing detection and situational awareness of biological incidents, has suffered from long-standing issues related to its clarity of purpose. Since 2009, we have reported that NBIC was not fully equipped to carry out its mission because it lacked key resources—data and personnel—from its partner agencies, which may have been at least partially the result of collaboration challenges it faced. In September 2015, we reported that despite implementing our prior recommendations and NBIC’s efforts to collaborate with interagency partners to create and issue a strategic plan that would clarify its mission and efforts, a variety of challenges remained. In October 2019, officials acknowledged that situational awareness and data integration are still very challenging problems to solve, but overall the relationships between NBIC and partner agencies are improving. Similarly, in 2017, we reported on long-standing challenges faced by HHS—such as planning and implementation shortfalls—to create a public health situational awareness network, not unlike that envisioned for DHS. In June 2019, we observed that because the National Biodefense Strategy identified biosurveillance data integration among several information sharing activities that need to be enhanced, its implementation offers the potential for the nation to better define what kind of integrated situational awareness is possible, what it will take to effectively and efficiently achieve it, and what value it has.

Continued Oversight Needed to Enhance Biological Safety and Security. We—along with congressional committees—have, for many years, identified challenges and areas for improvement related to the safety, security, and oversight of high-containment laboratories.
For example, in response to reported lapses in laboratory safety at HHS and DOD in 2014 and 2015, we examined how federal departments oversee their high-containment laboratories and found that most of the 8 departments and 15 agencies that we reviewed had policies that were not comprehensive or were not up to date. Additionally, we found that while the departments and agencies we reviewed primarily used inspections to oversee their high-containment laboratories, some of them were not routinely reporting inspection results, laboratory incidents, and other oversight activities to senior officials. In October 2017, we found that the Federal Select Agent Program—jointly managed by HHS and USDA—oversees laboratories’ handling of certain hazardous pathogens known as select agents and toxins, but the program does not fully meet all key elements of effective oversight. For example, the Federal Select Agent Program was not independent from all laboratories it oversees, and it had not assessed risks posed by its current structure or the effectiveness of its mechanisms to reduce organizational conflicts of interest. In June 2019, we said the National Biodefense Strategy highlights the need for continuous improvement of biosafety and biosecurity for laboratories and other facilities, creating an opportunity for interagency partners to develop additional oversight or other practices to mitigate the risk of bioincidents at high-containment laboratories.

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Chris P. Currie at (404) 679-1875 or CurrieC@gao.gov
Mary Denigan-Macauley at (202) 512-7114 or DeniganMacauleyM@gao.gov
Staff Acknowledgments

In addition to the contacts named above, Kathryn Godfrey (Assistant Director); Nick Bartine and Susanna Kuebler (Analysts-in-Charge); Jeff Cirillo; Michele Fejfar; Eric Hauswirth; Tracey King; Jan Montgomery; Matt Ray; and Adam Vogt made key contributions to this report.
Why GAO Did This Study

GAO has reported on the inherently fragmented nature of the federal and nonfederal resources needed to protect the nation from potentially catastrophic biological threats. GAO called for a strategic approach to help the federal government better leverage resources and manage risk. The White House issued the National Biodefense Strategy and the Presidential Memorandum on the Support for National Biodefense to promote a more efficient and coordinated biodefense enterprise. The National Defense Authorization Act for Fiscal Year 2017 included a provision that GAO review the strategy. This report addresses the extent to which the Strategy and implementation efforts are designed to enhance national biodefense capabilities and any implementation challenges that exist. GAO analyzed the Strategy, plans, and NSPM-14, and compared them to selected characteristics of GAO's work on effective national strategies, enterprise risk management, organizational transformation, and interagency coordination. GAO interviewed officials from the eight federal agencies that comprised the Biodefense Steering Committee to learn about early implementation.

What GAO Found

Issued in September 2018, the National Biodefense Strategy (Strategy) and implementation plan, along with National Security Presidential Memorandum-14 (NSPM-14), are designed to enhance national biodefense capabilities. NSPM-14 established a governance structure composed of relevant federal agencies and chaired by the Secretary of Health and Human Services (HHS) to guide implementation. It also required federal agencies with biodefense responsibilities to collect and assess data on their biodefense activities to, among other things, identify gaps.
The Strategy defined the scope of the biodefense enterprise (which includes partners at all levels of government and the private sector) and brought all of the biological threats—intentional, accidental, and naturally occurring—together, establishing an overarching vision, goals, and objectives. There are a number of challenges, however, that could limit long-term implementation success. Among other things, there was no documented methodology or guidance for how data are to be analyzed to help the enterprise identify gaps and opportunities to leverage resources, including no guidance on how nonfederal capabilities are to be accounted for in the analysis. Many of the resources that compose national capabilities are not federal, so enterprise-wide assessment efforts should account for nonfederal capabilities. Agency officials were also unsure how decisions would be made, especially if addressing gaps or opportunities to leverage resources involved redirecting resources across agency boundaries. Although HHS officials pointed to existing processes and directives for interagency decision making, GAO found there are no clear, detailed processes, roles, and responsibilities for joint decision-making, including how agencies will identify opportunities to leverage resources or who will make and enforce those decisions. As a result, questions remain about how this first-year effort to catalogue all existing activities will result in a decision-making approach that involves jointly defining and managing risk at the enterprise level. Without clearly documented methods, guidance, processes, and roles and responsibilities for enterprise-wide decision-making, the effort runs the risk of failing to move away from traditional mission stovepipes toward a strategic enterprise-wide approach that meaningfully enhances national capabilities.
What GAO Recommends

GAO is making four recommendations to the Secretary of HHS, including working with other agencies to document methods for analysis and the processes, roles, and responsibilities for enterprise-wide decision making. HHS concurred with all the recommendations and described steps to implement them.
Background

DOT is made up of nine modal administrations and the Office of the Secretary of Transportation (OST), each of which has its own mission, primarily focused on enhancing mobility and safety. Among other activities, modal administrations oversee financing and grant funding programs specific to their modes (e.g., roads, transit, rail). OST oversees the formulation of national transportation policy and promotes intermodal transportation. In this latter role, OST administers programs that provide grants to projects that can represent multiple transportation modes: roads, bridges, transit, rail, or ports. In fall 2015, DOT created the Build America Transportation Investment Center within OST—a predecessor to the Bureau. This new center was created to be a single point of contact and coordination for project sponsors seeking to apply for finance programs and explore innovative financing, in recognition of the fact that sponsors can face difficulties navigating multiple modal administrations to apply for funding or financing for a single project. In 2015, DOT was required by law to establish a finance bureau to align, coordinate, and consolidate certain surface transportation funding and financing programs. The Bureau—located within OST—is led by an Executive Director, who is responsible for managing and overseeing the daily activities, decisions, operations, and personnel of the Bureau. The Executive Director is appointed by the Secretary and then approved by the President. Three financing programs and one grant funding program were moved into the Bureau. Collectively, these programs provide billions of dollars of support to transportation projects across the country, as described below:

TIFIA. TIFIA provides direct loans, loan guarantees, and standby lines of credit to surface transportation projects of national or regional significance. Eligible projects include a variety of projects such as highways, intermodal stations, and passenger rail.
The fundamental goal of TIFIA is to leverage federal funds by attracting substantial private and other non-federal co-investment, and the legislation creating TIFIA stated that the program can do so by complementing existing resources to fill market gaps. For fiscal year 2018, the FAST Act authorized $285 million in funding to cover the federal government’s cost of providing financing and administering the program. According to DOT, $1 of TIFIA’s budget authority generally allows DOT to provide more than $10 in credit assistance, so $285 million in funding authority could support approximately $2.9 billion in assistance. TIFIA has provided over $31 billion in financing to 79 projects since its creation in 1998. The Federal Highway Administration (FHWA) administered TIFIA before it was moved to the Bureau. RRIF. RRIF provides direct loans and loan guarantees to finance the development of railroad infrastructure, such as rehabilitating passenger equipment and acquiring or rehabilitating track and bridges. Created in 1998, the RRIF program is authorized to provide up to $35 billion in credit assistance, and RRIF dedicates part of this funding to providing vital access to financing for smaller, short-line and regional railroads, which have historically lacked access to private financing. The RRIF statute permits appropriations of budget authority to be used for the cost of providing financing, but appropriations acts have typically prohibited the use of appropriations for such purposes. This prohibition, however, was not included in the fiscal year 2018 Consolidated Appropriations Act, and appropriations were, for the first time, made available to pay the cost of providing financing. RRIF loans totaling over $5 billion have supported 39 projects as of February 2019. The Federal Railroad Administration (FRA) administered RRIF before it was moved to the Bureau. PAB for Highway and Surface Freight Transfer Facilities.
PAB provides private-sector developers of certain types of surface transportation projects with access to tax-exempt financing. In contrast to TIFIA and RRIF, where the federal government directly provides loans and other forms of credit assistance, PAB does not directly provide financing but enables a state or city to borrow on behalf of private companies and nonprofits. PAB does, however, impose costs on the federal government through forgone tax revenues. The total amount of PAB for surface transportation is limited by statute to $15 billion, and the Secretary of Transportation allocates this available capacity among qualified projects. As of February 2019, DOT had allocated about $10.3 billion in PAB to 27 projects. A different office within OST previously administered PAB. Infrastructure for Rebuilding America (INFRA). The FAST Act authorized DOT to award $4.5 billion in discretionary grants for nationally significant freight and highway projects for fiscal years 2016 through 2020. In response, DOT developed the INFRA grant funding program. States and local governments are among the eligible entities that may apply for INFRA grants. DOT may fund freight or highway projects that meet statutory requirements, such as reserving at least 25 percent of available funds for rural areas. In June 2018, DOT announced its most recently proposed INFRA grants totaling nearly $1.5 billion for 26 projects. The FAST Act also created the Council on Credit and Finance (Council) to review and make recommendations to the Secretary on applications for DOT’s financing programs, regularly review projects that have received financing, and conduct other duties the Secretary establishes. The Council is composed mostly of DOT political appointees, including the Deputy Secretary of Transportation, Under Secretary of Transportation for Policy, and Administrators of FRA, FHWA, and the Federal Transit Administration (FTA).
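The TIFIA budget-leverage arithmetic noted above (each $1 of budget authority supporting more than $10 in credit assistance) can be illustrated with a quick calculation. This is only a sketch: the 10.2x multiplier is an assumed value chosen to be consistent with "more than $10," not a figure DOT publishes.

```python
# Illustrative check of the TIFIA leverage arithmetic cited in the report.
# assumed_multiplier is a hypothetical value (the report says only "more than $10
# per $1 of budget authority"); the budget figure is from the FAST Act for FY2018.
budget_authority = 285_000_000   # FY2018 authorization, in dollars
assumed_multiplier = 10.2        # assumed; any value a bit above 10 fits the report
credit_assistance = budget_authority * assumed_multiplier
print(f"${credit_assistance / 1e9:.1f} billion")  # roughly the $2.9 billion cited
```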
The FAST Act outlined specific responsibilities for the Bureau, some of which relate to administering the above programs. The responsibilities include the following, grouped into five broad categories:

Administering the application evaluation process for certain programs: Establishing procedures for analyzing and evaluating applications for these programs, as well as for documenting major decisions in the application evaluation process through a decision memorandum or similar mechanism that provides a clear rationale for such decisions; Streamlining the approval processes for the above programs

Providing assistance to project sponsors seeking funding or financing: Making credit assistance programs more accessible; Providing technical assistance, upon request, for proposed public-private partnerships and environmental reviews and permitting, among other areas

Promoting innovative-financing best practices: Developing and monitoring best practices for state authorities and practices, standard contracts, and analytical tools

Improving environmental reviews and permitting: Serving as DOT’s liaison on the Council on Environmental Quality; Coordinating efforts to improve the efficiency and effectiveness of the environmental review and permitting process; Identifying, developing, and tracking metrics for permit reviews and decisions

Sharing information on procurement costs and risks: Developing procurement benchmarks for projects receiving assistance under the above programs, and collecting and publishing information on procurement benchmarks (to the extent possible, the benchmarks should establish maximum thresholds for cost increases and schedule delays, establish uniform ways to measure these changes, and be tailored to different types of project procurement); Developing guidance to require and publish value for money and after-action report findings for public-private partnerships seeking assistance from the Bureau programs

The conference report accompanying the FAST Act noted that
the Bureau would serve as a one-stop shop for states and local governments, and to serve in this capacity, the report highlights the Bureau’s role to work with individual project sponsors as the Bureau administers financing programs, as well as its broader role to help reduce costs and uncertainty with environmental reviews and permitting and procurement. The FAST Act also gave the Secretary of Transportation authority to consolidate or eliminate different offices within DOT as it creates the Bureau. DOT Made Progress Establishing the Bureau and Meeting Some Responsibilities, but the Bureau Lacks Tools to Further Guide and Measure Its Efforts DOT established an organizational structure for the Bureau and created a consolidated process for it to use when working with sponsors to evaluate applications for financing programs and provide assistance. Creating this process helped the Bureau make progress on two of its FAST Act responsibilities, and overall, DOT’s initial steps were important actions that allowed it to open and operate the Bureau. Since the Bureau was established in 2016, it has made more limited progress on its other responsibilities, including promoting innovative-financing best practices for certain types of projects. Although we recognize that it is a relatively new office that in many ways remains a work in progress, the Bureau lacks a plan to guide its ongoing and future efforts and has not established performance indicators to measure its outcomes and assess progress. DOT Established the Bureau’s Structure and Created a Consolidated Process for Evaluating Applications and Providing Assistance DOT designed and established the Bureau in the year after the FAST Act’s enactment. DOT established internal committees and hired a consultant to produce an initial implementation plan to establish the Bureau. 
To create this plan, the consultant analyzed existing staffing and processes, interviewed internal and external stakeholders, and examined organizational structures at public and private sector entities, among other things. DOT prioritized several areas for this initial work, including consolidating existing processes for evaluating applications for finance programs and providing assistance, areas that were important to opening and operating the Bureau as well as assuming control of the financing programs. As part of its work to develop a structure for the Bureau, DOT’s initial implementation plan set out guiding principles for what the Bureau aims to achieve: mobilizing available financial resources for high-impact transportation projects in the United States; identifying and encouraging innovative best practices in project planning, financing, delivery, and monitoring; clearing roadblocks to provide financing and grants more quickly and transparently, with a streamlined user interface and less uncertainty, complexity, and cost for project sponsors; and ensuring the protection of public resources through efficient leveraging of taxpayer money and the development of a creditworthy portfolio of projects. DOT also created an organizational structure for the Bureau and laid out the Bureau’s relationships to other offices in DOT. When the Bureau opened in July 2016, DOT appointed an Acting Executive Director, filled 29 positions with staff from other DOT offices, and created two offices within the Bureau, all of which generally aligned with the initial implementation plan. The Outreach and Project Development Office largely aligns with the Bureau responsibility to provide assistance to sponsors, which includes providing technical assistance on public-private partnerships and federal requirements to specific project sponsors as they prepare to apply for funding and financing.
The Bureau’s Credit Programs Office largely aligns with the Bureau’s responsibility to administer the application evaluation process for certain programs through its work on underwriting, risk management, and portfolio management. DOT decided to also leverage other DOT offices within OST and modal administrations to carry out some of the Bureau’s work. Bureau officials told us this was a more efficient approach because it used the expertise and support of existing DOT offices rather than duplicating this expertise and support. For example, DOT used staff in OST that administer another competitive grant funding program to administer the INFRA grant program. See appendix II for more detail on the Bureau’s organizational structure and staffing. In its initial work, DOT also created a consolidated process for the Bureau to use when working with project sponsors pursuing TIFIA and RRIF financing, including standardized steps for evaluating applications for financing programs as well as providing assistance. Applications for the PAB program, which was also moved into the Bureau, go through many of the same application evaluation steps as TIFIA and RRIF, especially in the latter phases. DOT’s work to create this process aligned with two responsibilities given to the Bureau in the FAST Act: Administering the application evaluation process for certain programs Providing assistance to project sponsors seeking funding or financing DOT, in creating this process, set out steps that the Bureau would follow when working with sponsors. In the first two phases—initial engagement and project development—the Bureau provides assistance to project sponsors as they consider and navigate the financing programs. In those phases, a single point of contact works with sponsors to share information on the Bureau and provide assistance as sponsors develop materials to apply for financing programs. 
In the remaining phases of the process, Bureau staff and other DOT officials evaluate financing applications. During the creditworthiness review for a TIFIA or RRIF loan, for example, Bureau staff and independent advisors conduct an in-depth review of the project, including the sufficiency of a proposed repayment stream or collateral pledged. Throughout the process, the Credit Review Team—a decision-making body composed of Bureau and other DOT staff—votes at three points whether to advance a project seeking a TIFIA or RRIF loan and votes once for PAB allocations. During a later phase in the process, the Council then votes whether to recommend that an application advance to the Secretary for approval. The phases and steps in the Bureau’s process are summarized in figure 1 below. In creating this consolidated process, DOT also sought to improve and streamline the process, as called for in the FAST Act. Overall, DOT officials and documentation stated that these improvements, described below, should allow the Bureau to gather more information and better assist sponsors in the early phases of the process as well as identify and address potential issues earlier in the process. Single point of contact in the initial engagement and project development phases. The Bureau provides a single point of contact to assist sponsors during the early phases of the process. With a single point of contact, the Bureau aims to provide a streamlined interface with DOT for a sponsor. Furthermore, Bureau documents show that the single point of contact works with the sponsor to identify specific technical assistance needs—such as help completing environmental review requirements—and then develops a roadmap for providing this assistance as the sponsor develops its draft application. 
The point of contact can also help to resolve any conflicting requirements; for example, Bureau officials said the point of contact can facilitate discussions with a project sponsor and modal administrations on which Buy America requirements apply for a multi-modal project, as the requirements may differ across modes. Bureau officials said the Bureau’s work in these phases builds off the functions of the Bureau’s predecessor, the Build America Transportation Investment Center, which the initial implementation plan shows reached out to some sponsors interested in federal financing and connected them to the TIFIA, RRIF, and PAB programs, as well as the work of the former TIFIA Joint Program Office and RRIF Office. In contrast, the Bureau now more formally connects early assistance to later phases where the Bureau evaluates financing applications, all within the same office. Combined process for the creditworthiness review, application review, and Council review phases. The Bureau’s process combined the various review processes previously used by the three separate offices—in FHWA, FRA, and OST—to administer the three financing programs—TIFIA, RRIF, and PAB, respectively. For example, before this new process was implemented, a sponsor seeking a TIFIA loan and a RRIF loan would have to submit two applications to two offices and then work through two different processes; now a sponsor can submit one application to the Bureau and work through a single process for both loans. Our analysis of DOT and Bureau documents found that the reviews conducted in these phases are largely built off and resemble previously used processes. For example, the initial implementation report shows that previously the offices administering TIFIA, RRIF, and PAB were each required to brief the Council’s predecessor at different steps for each program, while the new process requires briefings to the Council at the same step for each program. 
Formalized decision-making body that monitors and advances projects through phases. The Credit Review Team—the new, primary decision-making body within the Bureau—plays a key role in deciding when projects can advance from one phase to another. For example, the team reviews a project’s initial materials for a TIFIA or RRIF loan and then votes on whether the project is ready to advance to the creditworthiness review phase. According to Bureau documents, the team’s predecessor, a less formal working group, did not review projects until after the creditworthiness review began. Bureau documents show that the Credit Review Team is meant to meet weekly, in contrast to its predecessor organization, which met monthly. According to Bureau officials, this more frequent meeting schedule allows the Bureau to expedite its decision-making. Bureau Lacks a Plan and Timelines to Guide Ongoing and Future Efforts and Indicators to Assess Progress Since DOT designed and established the Bureau, the Bureau has made more limited progress in its first 2 years on addressing additional responsibilities assigned to it by the FAST Act, as listed and described below. Bureau officials spoke generally about plans to continue making progress on these responsibilities in the future, and pointed out that the Bureau is still a relatively new office that remains a work in progress. However, Bureau officials were unable to provide written plans or timelines for these additional efforts. Promoting innovative-financing best practices. The Bureau has started to address this responsibility by employing the expertise of modal administration staff. The Bureau signed an agreement with FHWA in October 2016 to leverage the expertise of FHWA’s long- standing Office of Innovative Program Delivery rather than duplicate these efforts in the Bureau. 
Since signing the agreement, the Bureau and FHWA have jointly developed or updated a number of resources for public-private partnerships, building on FHWA’s existing work. This includes conducting on-site trainings for state entities and updating two model contract guides. Progress with other modal administrations has been more limited. For example, Bureau staff told us they have worked with FTA to start to identify gaps and jointly produce materials, such as an upcoming public-private partnership procurement guide. Though the Bureau does not have a signed agreement with FTA, Bureau officials said they want to sign one. Bureau officials said that they have started speaking with officials at other modal administrations to identify opportunities but that it will take time to identify gaps and develop tools in innovative financing for rail, maritime, and aviation. Improving environmental reviews and permitting. Bureau officials said they have relied on the expertise of DOT’s Infrastructure Permitting Improvement Center to carry out responsibilities to improve environmental reviews and permitting, rather than duplicate this expertise in the Bureau. The Center’s stated mission is to improve the performance of federal environmental review and permitting of infrastructure projects. As a result, Bureau officials said the Center carries out several specific responsibilities directed to the Bureau in the FAST Act, including serving as DOT’s liaison to the Council on Environmental Quality and tracking metrics for permit reviews and decisions in a public dashboard. According to Bureau officials, the Infrastructure Permitting Improvement Center and the Bureau also jointly hired an environmental expert. This environmental expert’s duties include supporting broad efforts to improve the efficiency and effectiveness of these processes in the Center and providing technical assistance to ensure that environmental reviews on specific projects move forward in the Bureau. 
Bureau officials told us that the Bureau does not have a written plan to document its efforts to fulfill the Bureau’s FAST Act environmental review and permitting responsibilities, beyond the position description for the environmental expert, because both offices are under the direction of the Under Secretary. However, the position description does not mention the Bureau or provide a sequence or timeline to fulfill these responsibilities that could help ensure continued progress. Sharing information on procurement costs and risks. The Bureau has not taken steps to collect or share information on procurement costs and risks, though documents show it has coordinated with FHWA to take some preliminary planning steps. For its FAST Act responsibility to develop, collect, and publish procurement benchmarks, the Bureau and FHWA published a preliminary paper in June 2017 that identified the types of procurement information to collect and publish, identified existing information sources for highway projects, and outlined possible next steps. However, Bureau officials told us that much work remains to identify specific cost and schedule information to collect from project sponsors and ultimately publish procurement benchmarks for projects across modes. The FAST Act also directs the Bureau to require sponsors procuring a project as a public-private partnership to conduct and publish value for money assessments and after-action reports, but the Bureau has not taken steps to do so. Bureau officials stated that additional efforts to address these responsibilities will require additional work and resources. Bureau officials could not provide a written plan or schedule for these future efforts. Several factors, including some outside the Bureau’s control, have affected the Bureau’s ability to more fully carry out its responsibilities in its 2 years of operation. First, there have been changes in leadership. 
After the presidential transition in early 2017, many DOT leadership positions, including many members of the Council, were vacant until new political appointees were put in place. Bureau documents show that the Council did not meet for 2 months, and Bureau officials told us that career staff sat on the Council to enable it to meet and resume voting on applications until appointees were confirmed. In addition, the Bureau’s Executive Director stepped down in November 2017. The Bureau is currently trying to fill that position through a second job announcement. With the Executive Director position vacant, Bureau officials told us that the Deputy Assistant Secretary and 3 senior officials from the Bureau and OST have fulfilled the day-to-day activities of that leadership role in the interim. Bureau officials told us that the lack of an Executive Director has had an effect on setting long-term plans for the Bureau; such planning is part of the duties of that position. Some stakeholders we spoke to stressed the importance of having an Executive Director in place so Bureau staff can quickly elevate issues or make decisions that currently need to be made by higher-level officials. Second, the Bureau has had a number of vacant positions since it was opened. Based on Bureau documents and discussions with Bureau officials, we determined that between 8 and 11 positions in its current organizational chart were vacant during 2018. During the government- wide hiring freeze in early 2017, the Bureau could not fill any vacancies, but several positions remained vacant before and after the hiring freeze, and two former Bureau officials said that the Bureau remained understaffed into mid-2017. The positions vacant during 2018 changed over time due to attrition, but two positions that remained vacant throughout this period are the transit-oriented development and project finance specialists. 
When asked about the vacancies in early 2018, Bureau officials said that they had originally wanted to fill the Executive Director position before filling other vacancies but later decided to start filling some critical vacancies. In July 2018, Bureau officials discussed their strategy for filling some vacant positions in response to immediate needs and in October 2018 said they intended to fill all vacant positions. Throughout this period, Bureau officials verbally shared these staffing priorities with us but did not provide a written plan or strategy for prioritizing the Bureau’s vacancies. Bureau officials said they do not have a timeline to fill remaining vacant positions in part due to limited human capital resources to draft position descriptions and conduct other parts of the hiring processes. DOT’s efforts to establish the Bureau and its processes were guided by an initial implementation plan. However, subsequent work by the Bureau to address its responsibilities and continue its implementation efforts is ongoing without the benefit of a plan and associated timelines. Key practices for organizational transformations state that an agency must set implementation goals and a timeline and ensure that top leadership drives the transformation, as such a transformation could take years to complete. Bureau officials have developed general priorities and approaches that they said have been communicated to staff through regular meetings and use specific performance plans to guide work in certain areas. However, without detailed written plans with implementation goals and timelines, the Bureau risks not being able to sustain the progress it has made in the last 2 years and ensure that it implements all of its statutory responsibilities in a timely manner. 
Finally, though the consultant’s report recommended that the Bureau develop indicators to track its performance, the Bureau has not established any indicators or measures to track progress in accomplishing its guiding principles or mission to be a “one-stop shop.” Federal standards for internal control and key practices for organizational transformations stress the importance of setting measurable objectives and developing performance measures to assess progress. The consultant’s initial implementation plan identified a number of potential performance indicators for the Bureau, including customer satisfaction. Bureau officials said they currently track data on projects through early assistance and application evaluation. However, Bureau officials said they do not want to use certain indicators, such as those that measure how long different parts of the process take, as they could create incentives to move projects ahead before they are ready. However, our prior work shows that to counter such incentives as well as to help an agency avoid drawing the wrong conclusions about its effectiveness, an agency could use multiple indicators rather than any one indicator to assess progress. Concerns about one indicator might be countered by information from other indicators. For example, to help offset incentives to move projects ahead before they are ready, an indicator for how long different parts of the process take could be considered along with an indicator that also measures the ratio of projects that were and were not returned to staff to gather additional information. Without establishing or beginning to use performance indicators that measure the Bureau’s performance rather than the progress of individual projects as it currently does, the Bureau will not know if it is achieving its guiding principles or meeting the mission set out in the conference report that it serve as a “one-stop shop” that advances projects. 
Sponsors Highlighted Positive Experiences and Challenges with Application Process, but Bureau Lacks a Mechanism to Assess How Well Its Process Is Working Sponsors we interviewed had mixed views on the Bureau’s process for evaluating applications and providing technical assistance, including views on whether the process was quick or streamlined. Selected sponsors had a generally positive experience with the PAB application evaluation process. However, for TIFIA and RRIF, selected sponsors had more mixed experiences and identified challenges with the application evaluation process, including the length and uncertainty of the process, changes to requirements or terms, and unclear goals and risk appetite—that is, how much risk an agency is willing to accept to achieve its goals—for the programs. Bureau officials identified limitations to providing more certainty to sponsors for each of these challenges and noted that the Bureau cannot control all the factors, such as a sponsor’s responsiveness or changes to a project’s proposed financing, surrounding the application evaluation process. However, the Bureau has also not determined how it will improve or streamline its process by, for example, consistently soliciting feedback from sponsors, nor has it outlined the goals and appetite for risk for TIFIA and RRIF. Sponsors Had Positive Experiences with PAB Application Evaluation Process As discussed earlier, DOT created a consolidated process for evaluating applications for its financing programs. Selected sponsors we interviewed that applied for a PAB allocation since the Bureau was created had a generally positive experience with the PAB application evaluation process. In particular, sponsors of the four PAB projects we selected said the process was quick and streamlined. For example, each sponsor said the process met or exceeded its schedule expectations for receiving a PAB allocation.
In addition, these sponsors said the process was simple to follow and that the simplicity was an important strength. One sponsor said its point of contact’s efforts to clearly explain information requirements early in the process were useful for understanding the Bureau’s expectations. DOT officials also said that PAB applications can move relatively quickly as they, in contrast to TIFIA and RRIF, do not create a direct financial risk for DOT or the federal government since DOT’s role is limited to approving the use of tax-exempt bond authority. Sponsors Had Mixed Views on TIFIA and RRIF Application Evaluation Process, and Some Cited Challenges Selected sponsors that applied for TIFIA and RRIF financing had mixed views on their overall experiences with the Bureau’s application evaluation process. Some sponsors had positive experiences to share. Among sponsors of six projects we selected, two sponsors said they believed the application evaluation process was streamlined, and five sponsors said it was somewhat streamlined. Some sponsors based their responses on comparing the Bureau’s process to the processes previously used to administer TIFIA and RRIF, while other sponsors focused on whether the process was efficient. For example, one sponsor that was new to TIFIA and that believed the process was streamlined said the Bureau was thorough but did not ask repetitive questions and that the process was not overly onerous. In terms of speed, two sponsors said the process was quick, two said the process was somewhat quick, and three said the process was not quick. Among sponsors of the six projects we selected and ten additional sponsors and stakeholders that had experience with some part of the Bureau’s application evaluation process, five sponsors and one stakeholder cited the responsiveness of the Bureau’s staff to questions or issues as most useful, and several sponsors also praised individual staff or cited the professionalism and commitment of Bureau staff.
Despite these positive comments, sponsors and stakeholders we interviewed also identified challenges with the application evaluation process for TIFIA and RRIF and offered some suggestions to improve the process, including how to further streamline the process. Based on our interviews, the most common challenges involved uncertainty related to the overall length of the application process, changes to the Bureau’s requirements or terms for loans, and the goals and risk appetite for the financing programs. We and others have previously reported on some of these challenges for TIFIA or RRIF. Length and uncertainty of process. Four sponsors and one stakeholder said the overall length of the application evaluation process creates a challenge when seeking and planning for credit assistance. This challenge predated the Bureau: in 2012 and 2016, before the Bureau was created, we similarly reported that project sponsors cited the length of the application evaluation process for the TIFIA and RRIF programs, respectively, as a challenge. Furthermore, seven sponsors and three stakeholders we spoke with also said the Bureau should refine or further streamline the application evaluation process. For example, one sponsor said it faced an uncertain timeline when its project awaited Credit Review Team approval and that it was not informed by the Bureau when the meeting would be held. The Bureau instituted regular Credit Review Team and Council meetings to give sponsors a greater sense of certainty and transparency on when DOT would be voting to advance a project. Another sponsor said it took the Bureau over 3 months to procure independent advisors to help with the Bureau’s creditworthiness review, though Bureau officials said it takes about 6 weeks to procure these advisors.
In our analysis of six selected TIFIA and RRIF projects, we found that five projects signed their credit agreements between 3 and 6 months later than was anticipated when the project was in creditworthiness review, according to Bureau documents for each project. Our analysis also found that the processing time for steps in the process varied, including steps that may be more within the Bureau's control. For example, the number of days between a project's approval by the Council and its approval by the Secretary ranged from same-day approval to 43 days. Though some slowdowns can result from factors that are out of the Bureau's control, sponsors we interviewed discussed the overall effect of slowdowns on projects. For example, sponsors of two projects said application slowdowns led to cost increases and, for one project, a schedule delay. To improve the application evaluation process, three sponsors and one stakeholder said the Bureau could provide tailored schedules for a project for each phase of the process. One stakeholder also said the Bureau could add certainty and transparency by providing information on how long different phases generally take, information that this stakeholder said it had not received when working with the Bureau, though this is a customary practice when seeking financing in the private sector. Bureau officials pointed out limitations to providing or predicting formal schedules and timelines for the process for specific projects. Bureau officials said many factors influence how quickly a project can advance through the application evaluation process for TIFIA and RRIF, primarily the quality of the project's credit and overall complexity. In addition to these primary factors, Bureau officials said an application's processing time can be affected by a sponsor's responsiveness to requests or whether the sponsor is concurrently negotiating other agreements.
Bureau officials said they do not tell a sponsor the specific date of the Credit Review Team or Council meeting at which its project will be reviewed, but instead tell a sponsor what information is needed, and by when, to reach the next meeting. The Bureau takes this approach because a sponsor may, for example, provide incomplete information, meaning the project would have to wait to be discussed at a later meeting than expected. Furthermore, the dates of Council meetings often change due to the members' schedules, and the Bureau does not want to cause a sponsor undue alarm if the date changes. Bureau officials said they provide a general schedule to a sponsor once a project enters creditworthiness review and use this schedule as a starting point to build a tailored schedule for a project. We found that this general schedule uses historical data to show how long steps in the process could take, but the schedule reflects the steps and decisions of the TIFIA process that predated the Bureau. Bureau officials also said they may informally identify ways to expedite the process where appropriate for a specific project, but that these enhancements affect primarily lower-risk projects.

Changing requirements or terms for loans. Six sponsors said changing requirements or terms during the application evaluation process created the challenge of having to navigate new expectations during the process. For example, two sponsors said they had to make changes to terms and conditions for loans late in the process. Specifically, one of the sponsors said it would have preferred to learn about the Bureau's policy related to certain terms earlier in the process rather than have to accept an unexpected change late in the process, after it had committed time and resources to the process. One sponsor said certain terms developed by the Bureau's underwriting team, which conducts the creditworthiness review, had to be restructured following review by the Credit Review Team.
Another sponsor said the Bureau changed or introduced new requirements after the sponsor began the application evaluation process, including what was required at particular steps, but did not provide reasoning for the changes. To address such challenges, four sponsors and two stakeholders said the Bureau could better accommodate projects with different revenue streams by, for example, creating different standard terms and contract templates. Bureau officials described factors that can result in changes to the tentative terms and conditions during the application evaluation process for a project. For instance, if a project's scope or construction cost estimates change significantly in ways that affect the financial assumptions for a project, the Bureau must reevaluate the project and make changes to the terms and conditions accordingly. Bureau officials said they try to balance providing certainty and flexibility but lean toward providing flexibility; for instance, the Bureau will try to accommodate a sponsor that changes the proposed financing for a project, which then may result in changes to terms as the Bureau reevaluates the project's risk. In addition, the terms and conditions discussed for a project are tentative until they are approved by the Credit Review Team, Council, and Secretary. According to Bureau officials, sponsors can advance through the application process more quickly and with greater certainty by agreeing to use the Bureau's standard credit terms—that is, agreeing to the terms and conditions in a template provided by the Bureau as opposed to choosing to negotiate with the Bureau with those terms and conditions as a starting point. Finally, Bureau officials said they were developing two additional standard loan templates, for projects with different financing structures and revenue streams, to post on the Bureau's website alongside the two existing loan templates.

Unclear program goals and risk appetite.
Many sponsors we interviewed said the Bureau did not clearly convey the program goals or appetite for risk for its TIFIA and RRIF programs. Eight sponsors and one stakeholder said the Bureau's approach toward risk made it challenging for sponsors to determine whether their projects fit the Bureau's programs. Four sponsors said the Bureau required strict terms and conditions in its credit agreements that seemed excessive, and one sponsor said such strict terms can impose additional costs on a sponsor without materially improving credit quality, since a project must have an investment-grade credit rating. One sponsor stated that the lack of clarity on goals and appetite for risk for its project, coupled with other challenges, led the sponsor to withdraw from seeking financing. According to the sponsor, while the programs were created to fill market gaps, it is not clear whether the Bureau's financing programs currently seek to provide financing to lower risk projects that have a high-quality credit rating or to higher risk projects that are unable to secure financing in the private markets. Similarly, a May 2017 Congressional Research Service report noted that a significant portion of RRIF financing has gone to passenger rail projects since 2008, though the program was primarily created to support freight rail projects, and that the size of loans and some of the risks for passenger rail assistance differ from the assistance historically provided for freight rail. One sponsor we spoke with said it would be helpful if the Bureau and the Council shared information with sponsors regarding DOT's appetite for risk when evaluating projects, similar to how commercial banks can share a risk profile framework. Bureau officials said DOT's financing programs and their treatment of risk have evolved over the past decade based on changes to private markets and lessons learned by DOT in working on projects that faced bankruptcy.
According to Bureau officials, the Bureau has also changed its standard terms and conditions over time, as any lender would do. However, Bureau officials said the Bureau lacks an external statement that communicates its goals and appetite for risk for its financing programs. Bureau officials told us they have developed a draft risk appetite statement for internal use. Officials said this risk appetite statement is embedded in draft credit-risk guidelines the Bureau is developing to enable more consistent review of individual projects applying for financing. The officials noted that this draft statement is short and general by design because TIFIA and RRIF can finance a wide range of projects. Furthermore, Bureau officials said it would be difficult to create a public risk appetite statement, as suggested by the consultant, that did not constrain their flexibility to finance a range of projects, particularly as the Bureau seeks to further diversify its portfolio and assist a variety of projects. In lieu of a public risk appetite statement, the Bureau encourages sponsors to meet with its staff early to assess whether a project would be a good fit for its financing programs. However, Bureau officials agreed that it could be beneficial for the Bureau to issue a public statement that conveys how it intends to balance its financing portfolio and support varying types of risks and projects that seek assistance.

Bureau Lacks a Mechanism to Assess Its Process

Given the challenges identified by sponsors, we found that the Bureau has not developed an approach to assess how effectively its application evaluation process works for TIFIA and RRIF, including what in the process is challenging and what works well. In particular, Bureau officials said they have not formally analyzed the amount of time it takes for projects to proceed through the process due to concerns that speed and efficiency may not be appropriate to track for all projects.
For example, a sponsor may not need financing immediately and thus choose to proceed at a slower pace. Also, while Bureau officials said it would be beneficial to formally solicit and analyze the satisfaction of sponsors that have closed loans, the Bureau has not implemented a mechanism to systematically solicit feedback on sponsors’ experiences, including any challenges. Federal standards for internal control state that management should design control activities to achieve its objectives. Control activities include reviews of an agency’s programs or activities to compare actual results to objectives and expected results, for example by evaluating the amount of time projects take in each step of the process. Federal standards for internal control also state that an agency should externally communicate information to achieve its objectives; this communication includes receiving information through reporting lines from external parties to help ensure effective operations. In addition, Office of Management and Budget guidance to agencies that manage financing programs also states that effective oversight relies on robust data collection and reporting systems that include, for instance, metrics from collected feedback on customer service or overall applicant satisfaction. As noted above, the Bureau cannot control all the factors and circumstances surrounding the application evaluation process. However, officials have stated that the Bureau seeks to expand and diversify the types of projects that access the TIFIA and RRIF programs, and one of the Bureau’s own guiding principles is to clear roadblocks to provide financing more quickly and transparently and to have a consistent application process. 
Without a mechanism to formally examine how to improve and further streamline the process, the Bureau may be missing an opportunity to address any recurring challenges with the process or with how the Bureau communicates with sponsors, a situation that could discourage sponsors from seeking financial assistance from these programs. Moreover, the Office of Management and Budget has directed agencies that manage financing programs to establish acceptable risk thresholds to balance policy goals with risks and costs to the taxpayer, and to monitor the program's progress toward achieving policy goals within those acceptable risk thresholds. Federal standards for internal control also call for management to define objectives or goals clearly to enable the identification of risks and define risk tolerances. These standards also call for management to externally communicate the necessary information to achieve its goals. In the initial implementation plan, the Bureau's consultant recommended that the Bureau publicly issue a risk appetite statement that specified acceptable types of risks and projects DOT would support. We have previously reported that setting an organizational risk appetite is an example of a good practice agencies can take to align risk management processes to goals and objectives. We also reported that by not clearly defining and communicating its appetite for risk, an agency could be taking risks well beyond management's comfort level or be passing up opportunities by assuming its leaders were risk averse. In addition, a former DOT official we interviewed said DOT and the Bureau should have an in-depth conversation about the risk in its portfolio of projects to help decide what risks are tolerable and, thus, help the Bureau better decide the risks it can accept for individual projects.
Without clearly defining and communicating to the public the goals and appetite for risk for the TIFIA and RRIF programs, the Bureau may be missing an opportunity to make its application process more transparent. Moreover, a public statement that clearly communicates the types of risks DOT is willing to accept would put sponsors in a better position to determine whether the TIFIA and RRIF programs would be a feasible option for their projects before committing resources to applying.

Half of Selected Sponsors Were Satisfied with the Bureau's Technical Assistance when Seeking Financing, but Some Sponsors Highlighted Concerns

Since it opened in July 2016, the Bureau has provided technical assistance to sponsors for 119 distinct projects, based on our analysis of Bureau data. As of August 2018, about half of these projects were in the early phases of working with the Bureau. In total, 56 projects were in initial engagement or project development, the phases during which the Bureau provides technical assistance to sponsors (see table 1). By mode, rail and highway projects comprised about half of all projects. The amount of technical assistance and level of interaction between the Bureau and project sponsor in the initial engagement and project development phases varied, based on the sponsor's experience using DOT's financing programs and the project's complexity. For example, one sponsor we interviewed met with the Bureau to discuss the expected timing to apply for and receive a TIFIA loan; this sponsor did not seek additional technical assistance in project development as it had previously received a TIFIA loan and had completed work to comply with federal requirements for the project, including the environmental review and permitting work. Another sponsor we interviewed was new to the Bureau's financing programs and met with the Bureau to learn more generally about the requirements for the different programs and the application process.
Half of the sponsors we interviewed were satisfied with the Bureau's technical assistance, but some sponsors expressed concerns, including the following:

Ability and willingness to move projects forward. In our interviews with 16 sponsors that received technical assistance from the Bureau, 8 said they were satisfied with the technical assistance provided by the Bureau, and 9 said that the Bureau functioned as a one-stop shop to access financing and funding programs and technical assistance. However, six sponsors said the Bureau's technical assistance was slightly helpful or not helpful in clearing roadblocks to provide credit and grants more quickly and transparently. For example, one sponsor said its project experienced delays over a period of several months as it made multiple attempts to obtain specific, actionable feedback from the Bureau on its materials to better understand what was needed to advance in the Bureau's process.

Lack of clarity on RRIF program eligibility. In our interviews with sponsors, a recurring concern was a lack of clarity from the Bureau on eligibility requirements for the RRIF program, in particular for sponsors seeking financing for transit-oriented development projects. For example, from information gathered from sponsors of 10 inactive projects, we found that four were transit-oriented development projects that became inactive because the Bureau determined them to be ineligible. Sponsors of two of these projects said they were initially told their projects would be eligible, but after continuing to work with the Bureau for 5 to 6 months, the sponsors said their transit-oriented development projects were determined to be ineligible for the RRIF program. In addition, sponsors of these two projects said they faced difficulty reconciling differences between the Bureau's transit-oriented development eligibility guidance for the RRIF program and transit-oriented development guidance issued by modal administrations for other programs.
For example, one sponsor said it felt that the Bureau's guidance did not clearly outline the eligibility requirements for transit-oriented development for the RRIF program and that it would help if the Bureau provided greater clarity about what kinds of development around rail stations would be eligible. In response to these concerns, the Bureau has begun taking steps that could help address them. For example, the Bureau is working to develop an expedited application process—RRIF Express—for RRIF projects that meet certain criteria. As we and the DOT Office of Inspector General have previously reported, sponsors have identified challenges with RRIF that, in some cases, have deterred them from applying to the program, so steps taken by the Bureau to expand use of the program are of particular interest to many sponsors of potential rail projects. Despite these efforts, as stated earlier, the Bureau does not have a written plan to guide its continued implementation efforts, and it does not have a formal mechanism to examine how it could improve its process for working with sponsors. Such a plan and mechanism could help the Bureau better understand and appropriately address sponsors' concerns with the Bureau's provision of technical assistance.

Bureau Provided Clear Rationale for TIFIA and RRIF Decisions but Not for PAB Decisions

As discussed earlier, the FAST Act required the Bureau to document major decisions in the application evaluation process and provide a clear rationale for its decisions. Federal standards for internal control also call for management to internally communicate the necessary quality information to achieve its objectives; this communication includes providing management quality information that is necessary for effective oversight. We reviewed documents for six TIFIA and RRIF projects and found the Bureau documented each decision to approve these projects and provided a clear rationale for those decisions.
To document decisions about whether to advance and approve these projects, the Bureau used formal meeting agendas and notes from the Credit Review Team and Council meetings and internal memorandums. For example, the Bureau used internal memorandums to record the Secretary's signature of approval to extend credit to a project. To document the rationale in support of these decisions, the Bureau used internal reports and memorandums. For example, to support its decisions to invite or not invite a project sponsor to submit a formal application, the Credit Review Team provided a description of how the project satisfied program requirements, such as having a preliminary rating opinion letter, and how the project satisfied program creditworthiness standards, including the sufficiency of the repayment source or collateral. However, in our review of four projects that received PAB allocations, we found that while the Bureau documented its decision about whether to advance and approve each application, it did not document a clear rationale to support that decision. Specifically, the Bureau recorded decisions in Credit Review Team and Council meeting materials and in the approval letter sent to the sponsor. To evaluate a PAB application, the Bureau reviews the application against statutory eligibility requirements and the availability of PAB allocation capacity. We found that the Bureau's documents in the PAB evaluation process lacked a clear rationale in support of decisions. Specifically, the documents summarized information from the application but did not articulate whether or how the Bureau determined that this summarized information satisfied PAB eligibility and availability requirements. We found that this occurred because the Bureau lacks a policy to document the rationale for how a project meets statutory and DOT requirements in order to advance a PAB application.
DOT officials said determining whether a project meets requirements to receive a PAB allocation can be self-evident, and therefore, the application itself can be sufficient documentation. However, absent a documented rationale to support its decisions, it is not immediately clear what information the Bureau cited or used to make decisions about applications through the process. As a result, DOT, the Bureau, and the PAB program could be exposed to risks. For example, we previously reported that programs that do not have defined application review procedures may not review applications consistently and thereby leave the program vulnerable to questions about the integrity of the process. Moreover, as the PAB program nears the $15 billion allocation limit, recording the rationale—including the effect of a proposed allocation—would help ensure DOT's decision makers receive up-to-date information needed to make informed decisions and manage the program.

Conclusions

With the creation of the Bureau, transportation projects seeking financing from DOT have a new, central point of contact for assistance. A concerted initial-planning effort enabled the Bureau to open and start working with project sponsors just over 6 months after federal law called for its creation. The Bureau has made varied progress on its statutory responsibilities since it was created over 2 years ago. This situation underscores the need to sustain momentum beyond an initial implementation effort, in order to give ongoing planning and attention to additional priorities and tasks and to identify possible improvements based on early experiences. The Bureau was given a challenging task: to serve as a one-stop shop that provides a number of different services and diverse technical resources. However, without an implementation plan and performance indicators, it may not be able to sustain its progress and prioritize its efforts.
In response to congressional direction for the Bureau to make changes to streamline the application evaluation process for DOT's financing programs, the Bureau created a new, consolidated process to accept and evaluate applications. However, the Bureau has not developed an approach to examine whether opportunities for further streamlining and improvement exist. Furthermore, absent clarity about the Bureau's appetite for risk for its financing programs, sponsors lack the information to know whether they should invest time and resources in applying for TIFIA or RRIF financing for their projects. Without examining the Bureau's process and communicating its appetite for risk, the Bureau may be missing an opportunity to address any recurring challenges that may undermine the purpose and availability of its programs. Finally, for the PAB program, the Bureau does not have a policy to document its rationale justifying decisions, and that lack of a rationale may leave the Bureau open to challenges regarding its decisions. By providing the rationale for its decisions, the Bureau could engender more trust in these decisions and increase the program's transparency.

Recommendations

We are making the following five recommendations to DOT: The Under Secretary of Transportation for Policy should ensure that the Build America Bureau develop a detailed implementation plan that sets goals and a timeline for the Bureau's continued efforts, fills vacancies in the Bureau, and prioritizes and sequences work to carry out the multiple responsibilities given to the Bureau in the FAST Act.
(Recommendation 1)

The Under Secretary of Transportation for Policy should ensure that the Build America Bureau develop performance indicators to assess the Bureau's progress toward meeting its guiding principles or mission as a "one-stop shop." (Recommendation 2)

The Under Secretary of Transportation for Policy should ensure that the Build America Bureau develop a mechanism to assess the Bureau's application evaluation process for TIFIA and RRIF and identify and address opportunities to improve and further streamline the process. This evaluation should include mechanisms to solicit feedback from project sponsors that sought financing. (Recommendation 3)

The Under Secretary of Transportation for Policy should ensure that the Build America Bureau develop and adopt a public statement that outlines DOT's and the Bureau's policy goals and appetite for risk for the TIFIA and RRIF financing programs. (Recommendation 4)

The Under Secretary of Transportation for Policy should ensure that the Build America Bureau establish a policy to document a clear rationale to support decisions made in the PAB application evaluation process to explain why an allocation should or should not be approved. (Recommendation 5)

Agency Comments and Our Response

We provided a draft of this report to the Department of Transportation for review and comment. In its comments, reproduced in appendix III, DOT concurred with our recommendations to develop performance measures (Recommendation 2) and to assess its application review process (Recommendation 3). DOT did not fully concur with our recommendations to develop a detailed implementation plan (Recommendation 1), adopt a public statement of its policy goals and risk appetite for its financing programs (Recommendation 4), and establish a policy to document the rationale for decisions in the PAB process (Recommendation 5). In its comments, DOT did not provide reasons for disagreeing with these three recommendations.
We continue to believe that it is important for DOT to implement these recommendations to help the Bureau prioritize and complete its continued implementation efforts and to help improve the transparency of the Bureau's processes and decisions for evaluating applications. DOT also provided one technical comment, which we incorporated. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

The Fixing America's Surface Transportation Act (FAST Act) required that the Department of Transportation (DOT) establish a finance bureau to coordinate and consolidate certain surface transportation funding and finance programs. The FAST Act also included a provision for GAO to review the Bureau's actions to establish procedures for evaluating applications for programs it administers and provide a clear rationale for major decisions in the application evaluation process. We assessed (1) the progress DOT made to establish the Bureau and carry out its responsibilities; (2) the Bureau's process for evaluating applications and providing technical assistance, including obtaining the views of sponsors and stakeholders; and (3) whether the Bureau, when evaluating applications, has provided a clear rationale for its decisions.
In the second objective, we focused on the Bureau's work evaluating applications and providing technical assistance because these two responsibilities aligned with the mandate for GAO and were the responsibilities on which the Bureau had made the most progress. To examine DOT's progress establishing the Bureau, we reviewed DOT and Bureau documents—90-day and yearly implementation progress reports to Congress, operating procedures, job descriptions and position postings for vacant positions, and budget requests—to determine DOT's plans and progress organizing and staffing the Bureau. We also analyzed reports, including an initial implementation plan, created by a consultant DOT hired in 2016 to help it create and organize the Bureau, and we reviewed the FAST Act and appropriations acts to identify DOT authorities to eliminate and consolidate offices and transfer funds and staff in order to establish the Bureau. We interviewed former and current DOT and Bureau officials to understand DOT's goals and priorities, coordination with modal administrations, challenges or successes, and key next steps for the Bureau. We selected former DOT and Bureau officials who played key roles in establishing or working in the Bureau or who were recommended in our interviews. We also interviewed select associations and advisors about their interactions with the Bureau to date, including observations on its creation, organization, and staffing. We selected associations representing project sponsors that have sought or could seek assistance from the Bureau, that vary in mode and sponsor type, and that vary in terms of experience working with the Bureau since July 2016. We selected advisors that have experience working with multiple project sponsors and that worked with the most sponsors of recently closed TIFIA and RRIF loans. At the end of this appendix, these selected organizations are included in table 2, which lists the individuals and organizations interviewed for this report.
In addition, to determine DOT's and the Bureau's progress in carrying out responsibilities set out for the Bureau in the FAST Act, we examined DOT and Bureau documents, such as the Credit Programs Guide and Build America Bureau Processes and Governance Manual, and procedures, documents, and other information publicly available on the Bureau's website. We supplemented this information with interviews with DOT and Bureau officials to understand the progress the Bureau made for each responsibility and how the Bureau prioritized its approach to fulfilling these responsibilities overall. We also used these interviews to understand the Bureau's timeline or strategy for fulfilling each responsibility in the future or, for responsibilities on which the Bureau has taken limited or no action, the reasons for the lack of action to date, as well as to understand what metrics or performance measures DOT established to track its progress or outcomes for these responsibilities. We also asked the stakeholders we interviewed—including select former DOT officials, associations, and advisors, selected as described above—about their observations on the Bureau's progress in carrying out these responsibilities. We compared DOT's and the Bureau's efforts to federal standards for internal control and key practices for organizational transformations. To assess the Bureau's process for evaluating applications and providing technical assistance, we reviewed the Credit Programs Guide and other Bureau documents and interviewed Bureau officials to determine the phases and steps in the process. We also reviewed these documents and interviewed Bureau officials to understand the changes DOT made to combine and consolidate existing processes.
Our review of the process of evaluating applications included semi-structured interviews with selected project sponsors and stakeholders to understand their experiences using the application evaluation process, experiences working with the Bureau, and comparisons of the application process before and after the Bureau was created, if applicable. First, we selected sponsors for the 10 projects for which we reviewed application documents, as described below, to determine whether the Bureau provided a clear rationale for its decisions. Second, we selected other stakeholders, including advisors and associations (as described above) and project sponsors with experience applying for DOT financing both before and after the Bureau was created. Among these project sponsors, we selected three projects that had multiple loans; used special authorities or agreements (i.e., master credit agreement); or employed public-private partnerships to deliver projects. Five additional project sponsors, selected as part of other samples described in this appendix, had experience with some part of the TIFIA or RRIF application evaluation process under the Bureau, so we asked these sponsors questions on this part of the process. We analyzed the interview responses by categorizing them based on the extent to which respondents said the process was quick, streamlined, and transparent; what in the process was most useful and most challenging; suggestions for improving the process; and overall satisfaction or dissatisfaction with the process. Furthermore, our review of the Bureau’s process for providing technical assistance included analyzing the Bureau’s data to describe the projects that have sought assistance from the Bureau since it opened by mode, location, type of financing pursued, and step reached in the application process. 
For technical assistance, we focused on project-specific assistance provided by the Outreach and Project Development Office before a project enters the creditworthiness review phase—referred to as initial engagement and project development. We reviewed the Bureau’s data on projects from April 2018 as well as updated data from August 2018. To assess the reliability of these data, we reviewed relevant documents and interviewed Bureau officials responsible for overseeing the data to learn how information was entered, maintained, and reviewed. We also reviewed relevant data elements for missing data, outliers, and obvious errors. Based on these steps we determined that the data were sufficiently reliable for the purpose of describing the number and type of projects that worked with the Bureau and selecting project sponsors to interview. We also conducted semi-structured interviews with project sponsors to understand their experiences working with the Bureau during the initial engagement and project development phases. In these interviews, we asked sponsors whether the Bureau serves as a single DOT point of contact and provides access to its finance programs with greater speed and transparency; for projects no longer seeking assistance from the Bureau, we asked about the reasons for doing so. Among project sponsors actively working with the Bureau, we identified 32 projects that began working with the Bureau after it was created in July 2016, that had met with or been in contact with the Bureau in the 6 months prior to April 2018, and that the Bureau ranked as 2 or higher on its readiness scale. Of these projects, we selected 13 sponsors to ensure variety in project status (i.e., initial engagement, project development, creditworthiness), mode, total project cost, prior experience with DOT’s financing programs, and location. 
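Reliability checks of the kind described above, in which data elements are scanned for missing values, outliers, and obvious errors, can be sketched in a few lines of code. This is an illustrative sketch only: the field names, the readiness range used, and the sample records are hypothetical, not the Bureau's actual data elements.

```python
# Illustrative data-reliability checks of the kind described above:
# scanning project records for missing values and out-of-range values.
# Field names, the 1-5 readiness range, and the records are hypothetical.

def check_record(record):
    """Return a list of issues found in a single project record."""
    issues = []
    for field in ("project_name", "mode", "state", "readiness"):
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")
    readiness = record.get("readiness")
    # The Bureau's readiness scale is referenced in the report; the
    # valid range assumed here (1-5) is for illustration only.
    if isinstance(readiness, int) and not 1 <= readiness <= 5:
        issues.append("readiness out of range")
    return issues

records = [
    {"project_name": "Transit Center", "mode": "transit", "state": "OH", "readiness": 3},
    {"project_name": "", "mode": "rail", "state": "CA", "readiness": 9},
]
for r in records:
    print(r.get("project_name") or "(unnamed)", check_record(r))
```

A record with a blank name and a readiness value outside the assumed scale would be flagged with two issues, while a complete record would pass with none.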
Among project sponsors no longer actively working with the Bureau, we identified 10 projects that began working with the Bureau after it was created in July 2016 and had at least two interactions with the Bureau, based on available data. We selected 5 of these sponsors to interview to ensure variation in mode and location. For the Bureau’s provision of technical assistance, we categorized the responses to questions in terms of which interactions with the Bureau were most useful and most challenging, suggestions for improving the process, and overall satisfaction or dissatisfaction. For inactive project sponsors, we categorized responses according to reasons the project became inactive or withdrew from working with the Bureau, and what other financing, if any, was used for the project. Table 2 below lists project sponsors and other organizations we interviewed. Overall, we assessed the Bureau’s process for evaluating applications and providing technical assistance and the collected evidence against federal standards for internal control and Office of Management and Budget’s guidance for agencies that manage financing programs. To assess whether the Bureau provided a clear rationale for its decisions when evaluating applications, we reviewed the Credit Programs Guide and other Bureau documents to identify steps and major decision points and accompanying documents in the application evaluation process. We identified 5 major decision points for TIFIA and RRIF and 3 major decision points for PAB. We also used these documents to identify evaluation criteria for each major decision point (i.e., the information or requirements that the Bureau says must be considered at each decision point) to use to assess whether the Bureau provided a clear rationale for each decision point. We confirmed our list of steps and major decision points, as well as accompanying documents, with Bureau staff responsible for the financing programs. 
We did not examine whether the Bureau documented decisions for the grant funding program it administers because we have previously evaluated that program and have ongoing work evaluating it. To assess whether the Bureau followed these procedures and documented major decisions and rationale, we selected projects that went through most of the application process after the Bureau updated its process in September 2016. For TIFIA and RRIF, these are projects that completed the first or second decision point—being invited to enter creditworthiness or being invited to submit a formal application—and had signed credit agreements as of March 31, 2018. We selected all three projects that completed the first decision point and had signed credit agreements. We selected 3 of the 5 projects that completed the second decision point and had signed credit agreements to ensure variation in type of sponsor (e.g., state or local government, private entity), mode, and size of loan. For PAB, we selected all four projects that submitted an application after September 2016 and received an allocation as of March 2018. For each selected project, we reviewed Bureau documents, including meeting agendas and summaries, memos, summaries of financial analyses, and letters to sponsors. Two GAO staff independently reviewed these documents to determine if the Bureau documented and provided a clear rationale for each major decision point, comparing the documents against practices in the Bureau’s application evaluation process and federal standards for internal control. Using Bureau documents, we also calculated how much time it took for each project to move between each step and decision point and determined whether each project met its anticipated financial close date. 
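A calculation like the one just described, measuring elapsed time between steps and decision points and checking the anticipated financial close date, can be sketched with Python's standard library. The milestone names and dates below are hypothetical placeholders; the report does not publish the underlying project dates.

```python
from datetime import date

# Hypothetical milestone dates for one project; placeholders for
# illustration only, not actual Bureau data.
milestones = [
    ("letter of interest received", date(2017, 1, 10)),
    ("invited into creditworthiness", date(2017, 3, 1)),
    ("invited to submit application", date(2017, 6, 15)),
    ("financial close", date(2018, 2, 20)),
]

# Days elapsed between consecutive steps, plus total duration.
gaps = {
    f"{a} -> {b}": (d2 - d1).days
    for (a, d1), (b, d2) in zip(milestones, milestones[1:])
}
total_days = (milestones[-1][1] - milestones[0][1]).days

# Whether the project met its anticipated close date (also hypothetical).
anticipated_close = date(2018, 3, 1)
met_anticipated_close = milestones[-1][1] <= anticipated_close

for step, days in gaps.items():
    print(f"{step}: {days} days")
print(f"total: {total_days} days; met anticipated close: {met_anticipated_close}")
```

The per-step gaps necessarily sum to the total duration, which provides a simple internal consistency check on the calculation.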
We did not compare the amount of time it took for these projects to complete the application process to projects that received financing before DOT created the Bureau because the steps and decision points for the application process changed. However, we interviewed Bureau officials to understand the application evaluation process and the 10 projects we selected. We also drew on past GAO work and that of others to understand past findings and challenges for the financing programs before the Bureau was created. Appendix II: Additional Information on Build America Bureau’s Organizational Structure and Staffing Initial Organizational Structure The consultant’s initial implementation plan for the Build America Bureau (Bureau)—created by the consultant while working with the Department of Transportation’s (DOT) internal committees—outlined an organizational structure with responsibilities and roles for its positions. Most positions resided in three offices that administer specific programs or provide technical assistance to sponsors. The Outreach and Project Development Office works to educate project sponsors about how they can best combine DOT’s financing and funding programs as well as innovative project delivery approaches. The implementation plan envisioned a director to manage the office, general project development lead positions to conduct outreach and provide assistance to sponsors on specific projects, and specialized project development lead positions with expertise in a particular area, such as rail or maritime, to help sponsors with more complex projects and to provide technical assistance to other sponsors and staff in the Bureau. The plan also envisioned best practices positions with expertise in public-private partnerships, transit-oriented development, or federal permitting. 
The Credit Programs Office administers the application processes for the Transportation Infrastructure Finance and Innovation Act (TIFIA) and Railroad Rehabilitation and Improvement Financing (RRIF) programs. The implementation plan envisioned a director to manage the office with the remaining positions split among three areas: underwriting positions to review and evaluate project applications, portfolio management positions to manage existing credit agreements, and risk management positions to evaluate project-specific risks, conduct audit activities, and carry out other risk and budget activities. Underwriting staff, for example, conduct an in-depth review of a project application that includes evaluating the plan of finance and feasibility of the revenue stream pledged to repay credit assistance or sufficiency of other pledged collateral. For the Infrastructure for Rebuilding America (INFRA) Grants Office, the structure envisioned a director and additional positions to administer the competitive grant program. Beyond these offices, the initial implementation plan proposed an Executive Director, as required by statute, to lead the Bureau’s work and positions to support the entire Bureau. The organizational structure also included additional positions to provide full-time legal support to the Bureau, which are housed in DOT’s Office of General Counsel. Initial Staffing for the Bureau Our analysis—based on Bureau documents and discussions with Bureau officials—shows that when the Bureau opened in July 2016, 7 months after the Fixing America’s Surface Transportation Act (FAST Act) was enacted, it largely followed the envisioned structure. When the Bureau opened in July 2016, DOT detailed or transferred 29 staff to run the Bureau. Twenty-five of these staff filled positions in the Bureau’s three offices, and the four remaining staff filled positions in the Office of General Counsel that provided dedicated legal services to the Bureau. 
These staff came from other parts of DOT as follows: Federal Highway Administration (FHWA). DOT detailed 16 staff from FHWA’s TIFIA Joint Program Office to the Bureau, primarily to work in the Bureau’s Credit Programs Office. DOT also detailed three attorneys from FHWA’s Office of the Chief Counsel to the Office of General Counsel. Federal Railroad Administration (FRA). DOT transferred five staff from FRA to the Bureau’s Credit Programs Office. Federal Transit Administration (FTA). DOT transferred one attorney from this modal administration to the Office of General Counsel. Maritime Administration. DOT transferred one staff member from this modal administration to the Bureau to work in the Outreach and Project Development Office. Office of the Secretary of Transportation (OST). DOT transferred the remaining three staff from the Build America Transportation Investment Center to work in the Outreach and Project Development Office and in Bureau leadership and support roles. DOT, in opening the Bureau, did not fill any of the positions in the INFRA Grants Office. According to current and former DOT officials, DOT used staff in OST that administer another competitive grant funding program to administer the first round of INFRA grants, as noted above. This decision also allowed DOT to move quickly to make grants for the first round of funding. At the same time, DOT officials told us that no funding was provided specifically to administer the INFRA program, so hiring staff to fill those envisioned positions would have diverted resources from other Bureau priorities. In addition, one OST staff person who both worked on the INFRA program and managed the Private Activity Bonds (PAB) program continued to manage PAB after the Bureau took over administration of that program while staying in OST. DOT also decided to leverage other DOT offices and modal administrations to carry out some of the Bureau’s work. 
Bureau officials stated that this model allows the Bureau to realize efficiencies by using the expertise and support of existing DOT offices rather than duplicating this expertise and support. Figure 2 summarizes the DOT offices that the Bureau interacts with, based on our analysis of Bureau and DOT documents and interviews with Bureau officials. Support provided by other offices within OST: As noted above, the Office of Infrastructure Finance and Innovation administers the INFRA program, leveraging the experience and knowledge of staff in that office that administer another competitive grant program. The Bureau also coordinates with the Infrastructure Permitting Improvement Center on its FAST Act responsibilities related to environmental reviews and permitting. Expertise from DOT’s modal administrations: Designated liaisons in FRA, FTA, FHWA, and the Maritime Administration coordinate with the Bureau to help assess project readiness or identify issues on projects applying for financing, such as ongoing litigation or work remaining on environmental reviews. Liaisons are funded by their modal administration and told us that they spend anywhere from 10 to 75 percent of their time serving as a liaison to the Bureau. The FAST Act gave DOT authority to consolidate or eliminate offices and positions when creating the Bureau. When the Bureau opened in July 2016, DOT eliminated the FRA office that administered RRIF and the Build America Transportation Investment Center as staff and functions transferred to the Bureau. DOT also plans to eliminate the TIFIA Joint Program Office—the office that FHWA staff detailed to the Bureau formerly worked in. According to DOT officials, the FHWA staff from that office are fully integrated and working in the Bureau; however, these staff will remain FHWA employees until DOT completes actions to transfer funds and staff to the Bureau and formally eliminate that office. See below for more detail on the transfer of funds and staff. 
DOT officials said it was easier to eliminate FRA’s RRIF office than the TIFIA Joint Program Office because the RRIF office did not have dedicated administrative funding like the TIFIA office did and FRA employees worked on RRIF as one of several duties. Changes to Organizational Structure and Staffing After opening and operating the Bureau, DOT made minor changes to the initial organizational structure. According to DOT officials, the Bureau has evolved and changed since it began operations—as would occur for any new office—and its current structure differs in various ways from its initial structure. Based upon the Bureau’s early experience, it eliminated 7 proposed positions: 1 position providing legal support, 3 positions for outreach to sponsors, 2 for addressing risk management, and 1 for managing the Bureau’s portfolio. The Bureau decided to eliminate the outreach positions because despite earlier findings that DOT’s TIFIA and RRIF programs were underutilized, officials discovered that more sponsors than expected were interested in those financing programs. The Bureau also added 5 positions that had not been initially proposed: 2 underwriter positions and 3 positions that work across individual Bureau offices. These cross-Bureau positions handle several duties, including budget, human resources, and procurement issues for the Bureau, working closely with the Office of the Under Secretary for Policy. Funding for the Bureau currently comes from three sources, though DOT officials said they want to consolidate all funding for the Bureau in OST. First, 12 positions are funded through appropriations from general revenues to OST specifically for the Bureau. The President’s budget request has requested funding to support these 12 positions since fiscal year 2017. Second, 23 positions for the TIFIA program are funded through appropriations from the Highway Trust Fund. 
This funding cannot be used for positions that do not work on matters involving the TIFIA program, unless it is formally transferred to the Bureau, according to DOT. Third, the remaining 8 positions identified in the Bureau organizational chart are not filled by Bureau employees. Instead, their duties are carried out by contractors and by employees supported by other units of DOT, an approach that Bureau officials said is consistent with the missions of those other units and the Bureau. For instance, FHWA funds two positions in the Outreach and Project Development Office, outside of funding for TIFIA. DOT’s initial ability to transfer funds under the FAST Act to support the Bureau ended in December 2017; according to Bureau officials, this impaired the Bureau’s ability to finish steps to formally consolidate staff who are paid from the Highway Trust Fund. Due to how funds for TIFIA are authorized to FHWA in the FAST Act, DOT needed to receive transfer authority beyond December 2017 so that it could maintain its ability to pay Highway Trust Fund-funded employees in future years after they are formally transferred to OST and paid from OST’s budget. In early 2018, DOT’s ability to transfer funds was extended in the fiscal year 2018 Consolidated Appropriations Act. DOT provided information to the appropriations committees on transferring funds and consolidating offices, as required in statute, and is awaiting a response from these committees. See figure 3 below for position titles, locations in the organization, and funding sources as of October 2018. Vacant Positions The Bureau has had many vacant positions since it opened in July 2016, based on our interviews with current and former DOT officials and our review of Bureau documents. In the 6 months after the Bureau opened, DOT filled some positions, including competitively selecting an Executive Director. Then, in early 2017, DOT and other executive branch agencies were subject to a hiring freeze for about 3 months. 
However, in the time since the end of the hiring freeze, we found that the Bureau has continued to have many vacant positions (see fig. 4). The Executive Director position has been vacant since the person previously in that role stepped down in November 2017. DOT posted an announcement for this position in November 2017 that did not result in a hire, followed by a second announcement in April 2018 that largely matched the earlier one. Beyond the Executive Director, the Bureau has had between 8 and 11 vacant positions in its organizational structure throughout 2018. Some positions, such as the Deputy Executive Director position, have never been filled. Other positions were filled but became vacant as staff left the Bureau for other opportunities. According to our analysis of Bureau documents, 16 of the 29 staff who were detailed or transferred to work in or for the Bureau when it was created in July 2016 remained in the Bureau as of August 2018. DOT and Bureau officials said that DOT did not want to fill vacant positions in the Bureau before filling the Executive Director position, as hiring is one of that position’s duties. Therefore, between fall 2017 and spring 2018, while the Executive Director position was vacant, DOT did not actively fill other vacancies, instead taking a “wait and see” approach, according to DOT and Bureau officials. However, in spring 2018, DOT and Bureau officials said they identified 5 critical vacancies to fill but were not able to provide a written document that laid out a hiring plan or sequence for filling the remaining positions. As of October 2018, Bureau officials said they had filled 5 positions and were in various stages of filling the remaining vacant positions: planning to write position descriptions, working with human resources to post jobs, or moving candidates through the hiring process. 
Finally, according to DOT and Bureau officials, DOT continues to use other OST staff to administer INFRA because of uncertainties related to the Bureau’s funding sources. However, DOT and Bureau officials said that many members of the team that oversees the INFRA evaluation process are also members of the Council on Credit and Finance, so the Bureau has an indirect role in the program. The Bureau has used detailees and contractors to fill vacant positions in the Outreach and Project Development Office. This office, unlike the Credit Programs Office, did not have an existing program or a large existing office from which to fill its positions. Since July 2016, four detailees from other parts of DOT have filled positions in the Outreach and Project Development Office—the project development or specialized project development lead positions—on short, 4- to 6-month terms. Two of these detailees were reassigned permanently to these positions in the Bureau in summer 2018, and the other two detailees returned to their prior roles. Recently, the Bureau filled one additional such position with a 2-year detailee from the Federal Aviation Administration. Finally, the Bureau filled two other positions with staff provided through an interagency agreement with the John A. Volpe National Transportation Systems Center effective through fiscal year 2020. Appendix III: Comments from the Department of Transportation Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Steve Cohen (Assistant Director); Joanie Lofgren (Analyst in Charge); Lauren Friedman; David Hooper; Lauren Lynch; Ned Malone; Malika Rice; Amy Rosewarne; and Michael Sweet made key contributions to this report.
Why GAO Did This Study Constructing surface transportation projects can be a lengthy endeavor involving multiple DOT offices. The 2015 Fixing America's Surface Transportation Act (FAST Act) required DOT to establish a finance bureau to consolidate certain funding and financing programs. The FAST Act further required that DOT improve procedures for evaluating applications for these programs—including providing a clear rationale for decisions and streamlining the process. The FAST Act also gave this finance bureau other responsibilities such as promoting best practices for innovative financing. In response, DOT opened the Build America Bureau in July 2016. The FAST Act included a provision for GAO to review the Bureau. This report assesses, among other things, (1) progress DOT made to establish the Bureau and carry out its responsibilities, (2) the Bureau's process for evaluating applications, and (3) whether the Bureau provided a clear rationale for decisions in that process. GAO reviewed federal laws and Bureau documents and interviewed DOT officials and selected stakeholders, including 28 project sponsors selected so projects varied by mode, cost, and outcome. What GAO Found The Department of Transportation (DOT) has taken initial steps to establish the Build America Bureau's (Bureau) organizational structure and to create a process to help the Bureau carry out some of its responsibilities since it was created in 2016. However, the Bureau lacks a plan to guide its ongoing and future efforts. Initial steps included creating a consolidated process to evaluate applications for three financing programs: Transportation Infrastructure Finance and Innovation Act (TIFIA), Railroad Rehabilitation and Improvement Financing (RRIF), and Private Activity Bonds (PAB). DOT largely based this consolidated process on prior practices used for individual programs but also sought to improve and streamline the process. 
For example, DOT formed a decision-making body that meets more frequently than a predecessor group to quickly address issues and to decide when to advance projects through the process. However, progress has been more limited in implementing other responsibilities, such as promoting best practices for innovative financing. While some of the lack of progress can be attributed to factors such as changes in leadership and staff, the Bureau lacks a plan with implementation goals and a timeline to guide its ongoing and future efforts and also lacks performance indicators to assess its progress. Without these tools, the Bureau may face difficulties prioritizing work to carry out other responsibilities and maintaining momentum throughout continued implementation efforts and any future changes in leadership and staff. While the Bureau has taken steps to improve and streamline the application evaluation process, it does not have a mechanism to assess how well the process works—including what is challenging and what works well. Project sponsors GAO interviewed had mixed views on the Bureau's application evaluation process and whether it was streamlined. Selected sponsors that applied for TIFIA and RRIF financing identified challenges with the process, including the length of the process and changes to requirements or terms for a loan. For example, sponsors said the Bureau took longer than it had estimated to procure external advisors to help conduct its evaluation of applications. According to the sponsors, such delays and uncertainty led to cost increases for two projects and construction delays for one project. Bureau officials noted that many factors outside the Bureau's control influence the length of the application evaluation process, such as changes to a project's scope and construction cost estimates. However, the Bureau has not taken steps, such as consistently soliciting feedback from sponsors, to assess how to further improve and streamline its process. 
Without taking such steps, the Bureau is missing an opportunity to further streamline the process and to ensure that any challenges do not discourage sponsors from seeking the Bureau's financing programs. GAO found that the Bureau provided a clear rationale for decisions to advance or approve projects in the TIFIA and RRIF programs but did not do so for the PAB program. While DOT did document the decisions made in each step of the application evaluation process for the PAB program, the lack of a documented rationale to support these decisions leaves that program open to questions about the integrity of its process, as it is not immediately clear how the Bureau determined that an application satisfied requirements and what information was used to support decisions that advanced projects. What GAO Recommends GAO is making five recommendations, including that the Bureau develop a plan to guide its efforts and assess ways to further improve the application evaluation process. DOT concurred with two but did not fully concur with three of the recommendations and provided no rationale. GAO continues to believe the recommendations are valid as discussed in the report.
Background The Bureau’s address canvassing operation updates its address list and maps, which are the foundation of the decennial census. An accurate address list both identifies all living quarters that are to receive a notice by mail to respond to the census, and serves as the control mechanism for following up with households that fail to respond to the initial request. Precise maps are critical for counting the population in the proper locations—the basis of congressional apportionment and redistricting. Our prior work has shown that developing an accurate address list is challenging—in part because people can reside in unconventional dwellings, such as converted garages, basements, and other forms of “hidden” housing. For example, as shown in figure 1, what appears to be a single-family house could contain an apartment, as suggested by its two doorbells. During address canvassing, the Bureau verifies that its master address file and maps are accurate to ensure the tabulation for all housing units and group quarters is correct. For the 2010 Census, the address canvassing operation mobilized almost 150,000 field workers to canvass almost every street in the United States and Puerto Rico to update the Bureau’s address list and map data—and in 2012 reported the cost at nearly $450 million. The cost of going door to door in 2010, along with the emerging availability of imagery data, led the Bureau to explore an approach for 2020 address canvassing that would allow for fewer boots on the ground. To reduce costs for the 2020 Census, the Bureau took a new approach and some address canvassing work was completed in-office. The Bureau compared current satellite imagery to the contents of its master address file to determine if areas had housing changes, such as new residential developments or repurposed structures. 
If the satellite imagery and the master address file matched, then the Bureau considered those areas to be resolved or stable and did not canvass them in-field. Areas that were unresolved by the in-office review were sent to in-field address canvassing. Field staff, called listers, used laptop computers to compare what they saw on the ground to the address list and maps. Listers confirmed, added, and deleted addresses or moved addresses to their correct map positions. The listers were trained to speak with a knowledgeable resident at each housing unit to confirm or update address data, ask about additional units, confirm the housing unit location on the map (known as the map spot), and collect a map spot either using global positioning systems (GPS) or manually. If no one was available, listers were to use house numbers and street signs to verify the address data. The data were then transmitted electronically to the Bureau. The Bureau Completed In-Field Address Canvassing on Schedule and under Budget, but Listers Did Not Always Follow Procedures Productivity Was Higher Than Expected The Bureau completed in-field address canvassing on time despite nationwide hiring shortfalls. The Bureau credits this success to better-than-expected productivity. The Bureau conducted “in-field” address canvassing for approximately 35 percent of the housing units (approximately 50 million housing units) across the country (see fig. 2). The Bureau had already determined “in-office” that the other 65 percent of addresses (approximately 93 million housing units) were part of stable blocks. The Bureau began the in-field address canvassing operation at seven of its 39 Area Census Offices on August 4, 2019, and then rolled out the operation to the remaining 32 offices on August 18, 2019. It conducted this phased approach to ensure all operations and systems worked together before commencing the operation nationwide. 
The total in-field address listing workload was more than 50 million addresses from the Bureau’s address file. Bureau officials reported that listers were generally more productive than expected, thus allowing the Bureau to complete the operation as scheduled on October 11, 2019 (see fig. 3). The actual hourly productivity rate for the operation was 19.8 addresses versus the anticipated rate of 15.8 addresses. According to Bureau officials, listers were more productive due to efficiency gains from the Bureau’s new approach, including an automated time and attendance system, the use of computer laptops to collect census data, and a new operational control system that was used to electronically optimize assignments and transmit work to listers. Bureau officials stated that the high productivity also helped the operation come in under budget. The operation’s cost was $118.6 million—while the anticipated cost was $185 million—a reduction of 36 percent. The Bureau Missed Potential Opportunities to Improve the Address List When Listers Did Not Follow Procedures For in-field address canvassing, listers received online training, which detailed the procedures they were to follow, such as: comparing the housing units they see on the ground to the housing units on the address list, knocking on all doors so they could speak with a resident to confirm the address (even if the address is visible on the mailbox or house) and to confirm that there are no other living quarters such as a basement apartment, looking for hidden housing units, and confirming the location of the housing unit on a map with GPS coordinates collected on the doorstep. In our observations of in-field address canvassing, the majority of listers generally followed these procedures. However, some listers we observed did not always follow procedures. For example, ten out of 59 listers did not work ground to book (i.e., compare what they saw on the ground to what was on their list). 
Nine out of 59 listers did not walk up to the doorstep to collect the GPS coordinate. Specifically, we observed listers use mailboxes to confirm address information and collect the GPS coordinates from the mailbox. Following proper procedures is important because getting a GPS reading from the doorstep of every address contributes to the accuracy of the address file.

Fourteen of 59 listers did not consistently knock on every door as required to confirm the address and ask about “hidden” housing units.

Seventeen of 59 listers did not always look for or ask about “hidden” housing units.

Not knocking on doors or asking about hidden housing units represents missed opportunities to potentially add missing addresses to the Bureau’s master address file. Further, not all listers we observed provided the required confidentiality notices to occupants. Seven listers we observed did not provide confidentiality notices. Occupants may be more willing to provide their information if they know their responses will not be shared. We communicated our observations to the Bureau, and on August 26, 2019, the Bureau instructed its field offices to remind listers of the appropriate procedures.

According to Bureau officials, some number of temporary staff deviate from procedures in every decennial census. To control for this, the Bureau implemented a Quality Control (QC) component for in-field address canvassing that is designed to detect and correct deficient production listers’ work. QC started on August 11, 2019, and included a total workload of around 3.4 million addresses. For this operation, an automated system selected the sample of addresses to review; these addresses were assigned to QC listers. QC listers received instructions to begin canvassing at a specified location, usually an intersection, and to continue canvassing addresses until the system identified the work unit as “complete” for QC purposes.
An address worked by a production lister was considered to have “failed” QC if the QC lister recorded changes, or if the production lister missed the address and the QC lister found it. Depending on the size of the block, after a predetermined number of addresses fail within a block, the system fails the entire block. Once a block fails, the QC lister must recanvass all the addresses in that block. Based on preliminary results, Bureau officials estimate that 4.3 percent, or about 2.2 million addresses, failed. According to Bureau officials, while they did not have a predetermined target for an acceptable range of QC failures, they are nevertheless reasonably confident that this result was within an acceptable range for QC errors encountered during the operation. They further stated that they could not compare 2020 QC results to 2010 because the 2010 Address Canvassing Operation canvassed 100 percent of addresses in-field, while the 2020 In-Field Address Canvassing Operation covered only approximately 35 percent of the addresses across the country.

Lister productivity for QC was also higher than expected. The Bureau anticipated QC productivity of 8.03 addresses per hour compared to the actual rate of 14.05 addresses per hour. Higher-than-expected productivity rates contributed to a reduction in costs: the actual cost of QC production was $10.3 million versus the anticipated cost of $25.6 million, a savings of $15.3 million. Additionally, Bureau officials stated that QC came in so far under budget because the use of laptops increased efficiency and the actual QC workload was lower than the budget estimate.

Planned Evaluations Will Ultimately Determine the Quality of the Operation

While the Bureau conducted real-time quality control follow-up of selected blocks during address canvassing, it also has two studies underway that will evaluate the re-engineered address canvassing approach, as well as the in-field address canvassing operation.
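As a brief illustrative check, the cost figures reported for production and quality control are internally consistent; the sketch below simply recomputes the reported reduction and savings from those figures:

```python
# Cost figures (in millions of dollars) quoted in this report.
production_planned = 185.0   # anticipated cost of in-field address canvassing
production_actual = 118.6    # actual cost of the operation
qc_planned = 25.6            # anticipated cost of QC production
qc_actual = 10.3             # actual cost of QC production

# Reported as "a reduction of 36 percent."
production_reduction = (production_planned - production_actual) / production_planned

# Reported as "a savings of $15.3 million."
qc_savings = qc_planned - qc_actual

print(f"Production cost reduction: {production_reduction:.0%}")  # 36%
print(f"QC savings: ${qc_savings:.1f} million")                  # $15.3 million
```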
Similar studies conducted by the Bureau in 2010 found that 95.7 percent of addresses were correctly deleted and 83.6 percent of addresses were correctly added. Both studies underway have a set of research questions designed to evaluate the accuracy and effectiveness of address canvassing. For example, the Bureau seeks to answer questions such as:

What percentage of the housing units added during in-field address canvassing were correctly added (and added-in-error)?

What percentage of the housing units identified as deleted or duplicated by the listers during in-field address canvassing were correctly deleted or duplicated (and deleted-in-error)?

Answering these and other questions contained in both studies will be critical to determining the quality of the operation, as not all listers followed procedures, which may have led to errors in the address file. It is anticipated that the final report for the 2020 Census In-Field Address Canvassing Operational Assessment study will be available September 2020, and the 2020 Census Evaluation: Reengineered Address Canvassing study will be available March 2023.

The Bureau Had Successes and Challenges during In-Field Canvassing, Which Have Potential Implications for Future Operations

The Bureau Cited Successes with the Operation

In addition to completing in-field address canvassing on schedule and under budget, Bureau officials highlighted other successes from the operation, including:

Automated solutions for training staff. The Bureau developed training materials that used a blended training approach including instructor-led, computer-based, and hands-on training. This is a change from the 2010 paper-based and classroom-only training approach.

Efficiency gains from conducting reengineered field operations using:

New operational control systems, which were used to electronically assign and transmit work to the listers.

New automated time and expense reporting (timecards) for employees.
In 2010, timecards were paper-based and the listers had to meet with their supervisors to submit them.

Enhanced software application for validating and updating addresses.

Implementation of a rapid response to Hurricane Dorian, which affected areas of the Southeastern United States, resulting in minimal disruptions to the operation.

Additionally, the Bureau was able to resolve some unforeseen challenges at the seven Area Census Offices that opened early. For example, the Bureau identified issues with training login and new hires not being on the training roster and rectified those issues before the operation expanded to the rest of the country.

The Bureau Is Taking Steps to Address Challenges with Hiring and Onboarding Staff

The Bureau experienced delays in hiring for its early operations, raising concerns about hiring for peak operations. The Bureau’s target was to hire 40,300 listers by September 7, 2019, but as of September 9, 2019, the Bureau had hired 31,151 listers. Though address canvassing productivity was higher than expected, in some parts of the country the operation was at risk of falling behind because of a shortage of listers. The Bureau told us it filled the gap with listers who lived well outside of the area in which they were supposed to work, in some cases from a different state. This strategy allowed the Bureau to complete the operation on schedule; however, though the operation as a whole was under budget, the Bureau incurred unplanned costs for travel (airfare, personal mileage rates, rental cars, hotel stays, and per diem). As we previously reported, these hiring problems are an early warning for what may occur later in the census during nonresponse follow-up, when the Bureau intends to hire between 320,000 and 500,000 enumerators to follow up with households that did not initially respond to the census. The Bureau said the hiring issues were caused by delays in processing background checks and greater-than-expected attrition.
According to the Bureau, these delays arose, in part, due to early shortages of staff to review background checks and because a significant number of applicants did not completely or accurately fill out related forms. In February 2019, the Bureau began to bring on about 130 temporary staff to review forms for accuracy and completeness prior to submission for investigation and to help investigators conduct the pre-employment background checks. Those delays in turn contributed to subsequent challenges in onboarding listers for address canvassing. For example, according to Bureau officials, the delays in early hiring for Area Census Office staff meant some offices did not have enough clerks in place to process paperwork for listers or make reminder phone calls to hire and onboard listers. Regarding attrition, more listers quit than expected at two points in the hiring process:

Fingerprinting: The Bureau expected about 15 percent of applicants would leave the hiring process after being selected and before submitting fingerprints. However, the attrition rate was closer to 25 percent. Bureau officials attributed this to selected applicants, in some cases, having to travel long distances to be fingerprinted.

Training: The Bureau found that fewer selected and cleared applicants attended training than anticipated. Bureau officials attributed this to fewer clerks being available to call trainees with reminders to attend training due to delays in clerks receiving their own background checks. Bureau officials also attributed some of this attrition to the 60-day period between the selection of applicants and their training. This new time frame was put in place for the 2020 Census to provide adequate time for adjudication of background checks.

The Bureau has begun to address these challenges by adapting its hiring and onboarding processes for peak operations, such as nonresponse follow-up, which is to begin in May 2020.
For example, the Bureau:

Increased the number of fingerprinting locations and machines. According to Bureau officials, it added 133 additional sites and 300 additional machines, bringing the total number of vendor sites for fingerprinting to 829.

Staffed Area Census Offices to help newly selected applicants for positions complete their forms and initiate the background check process.

Hired additional staff to help clear background checks. The Bureau hired 200 staff at the National Processing Center and an additional 150 at the Regional Census Centers.

Changed its recruiting goals due to the attrition experienced during address canvassing. The recruiting goal increased from 2.3 million to 2.7 million to ensure a large enough applicant pool. This increases the ratio of recruited applicants to positions from 5:1 to 6:1.

Completed a wage rate study and increased wages in 73 percent of counties by an average of $1.50 per hour for enumerators.

Developed an email campaign to maintain contact with individuals in the recruiting pool.

Decreased the types, and therefore the number, of positions that required a full background check.

Included additional training for replacement hires in the training schedules. A make-up session was added to the nonresponse follow-up training schedule, May 14-19, 2020.

If effectively implemented, these steps hold promise for helping to address the hiring issues.

The Bureau Experienced Challenges with Management’s Use of Information

To effectively manage address canvassing, the Bureau provides data-driven tools for the census field supervisors to manage listers, including system alerts that identify issues that require the supervisor to follow up with a lister. Operational issues such as listers not working assigned hours or falling behind schedule need to be resolved quickly because of the tight time frames of the address canvassing and subsequent operations.
For the address canvassing operation, the system generated codes that covered a variety of operational issues, such as unusually high or low productivity (which may be a sign of fraud or failure to follow procedures), and administrative issues, such as compliance with overtime and completion of expense reports and time cards. During the operation, more than 621,000 alerts were sent to census field supervisors. Each alert requires the supervisor to take action and then record how the alert was resolved. To assist supervisors, these alerts need to be reliable and properly used. However, nine out of 22 census field supervisors we spoke to indicated the alerts were not always useful. For example, almost 40 percent of those alerts were related to no progress being made on a block. This was due in part to listers opening all of the blocks they were assigned on their laptops in order to manage their workload, which signaled to the system that work had begun on all assigned blocks when in fact the lister was working only one block. We first heard about this issue from field supervisors in late August. Census field supervisors we spoke to indicated that these alerts took an inordinate amount of time to resolve, in part because almost every lister would open every block to plan his or her day. We alerted Bureau officials in headquarters, and they notified area census offices to remind supervisors to instruct listers not to open all of their blocks at once. After the notification was sent out, Bureau officials reported that the number of alerts due to blocks not being worked declined. Bureau officials further stated that this issue would not impact nonresponse follow-up because enumerators do not receive multiple assignments, but instead receive, work, and transmit only one assignment of housing units for follow-up a day. Another challenge faced by census field supervisors was providing feedback to listers on why addresses failed quality control.
Four of 22 census field supervisors we spoke with were not aware that they had access to the reasons why addresses on a block failed quality control. Knowing where to find this information would have allowed census field supervisors to communicate it to listers, thus improving lister performance as well as the accuracy of the data collected. We shared this information on some census field supervisors’ lack of awareness with the Bureau, and on August 26, 2019, the Bureau notified its field offices to remind supervisors that detailed information on why addresses failed quality control was available on their laptops. For nonresponse follow-up, Bureau officials told us QC information about any enumerator with a specified number of failed cases will be sent directly to the Regional Census Center rather than to the census field supervisor. The Regional Census Center will decide whether the enumerator should continue working and, if so, what corrective action to take, such as retraining. However, if it is determined that an enumerator falsified data, the enumerator would not be given new assignments and all of his or her work would be reinterviewed.

Agency Comments

We provided a draft of this report to the Secretary of Commerce. In its written comments, reproduced in appendix I, the Bureau noted that our report made no formal recommendations and that we highlighted several successes of the in-field address canvassing operation. The Bureau also described several claims of cost savings and efficiency gains, which it attributed to various address list-building activities. While we have previously reported on the Bureau’s 2020 address list-building efforts, we have not audited claims made in the Bureau’s response or elsewhere regarding potential cost savings from innovations for the 2020 Census. The Bureau also provided us with technical comments, which we incorporated as appropriate.
We are sending copies of this report to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. The report also will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3236 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Comments from the Department of Commerce

Appendix II: Area Census Offices Responsible for Locations Visited in This Review

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

J. Christopher Mihm, (202) 512-3236 or mihmj@gao.gov

Staff Acknowledgments

In addition to the contact named above, Lisa Pearson, Assistant Director; Timothy Wexler, Analyst-in-Charge; Margaret Fisher; Robert Gebhart; Richard Hung; Cynthia Saunders; Anna Sorrentino; Kate Sharkey; Dylan Stagner; Jon Ticehurst; Peter Verchinski; and Alicia White made key contributions to this report.
Why GAO Did This Study

The decennial census is a costly and complex undertaking, and its success depends largely on the Bureau's ability to locate every person residing in the United States. To accomplish this monumental task, the Bureau must maintain accurate address and map information for every person's residence. If this information is inaccurate, people can be missed, counted more than once, or included in the wrong location. To help control costs and to improve accuracy, the Bureau used new procedures to build its address list for 2020. GAO was asked to review how the in-field address canvassing operation performed. This report (1) determines the extent to which the Bureau followed its plans and schedule for in-field address canvassing, and (2) identifies the successes and challenges that occurred during 2020 Census In-Field Address Canvassing that have potential implications for future operations. To address these objectives, GAO reviewed key documents, including the 2020 Census operational plan that discussed the goals and objectives for the operation. GAO observed in-field address canvassing across the country at 18 area census offices, including a mix of rural and urban locations. GAO also interviewed field supervisors, listers, and office management to discuss the operation's successes and challenges. GAO provided a draft of this report to the Bureau. The Bureau provided technical comments, which were incorporated as appropriate.

What GAO Found

The Census Bureau (Bureau) completed in-field address canvassing as scheduled on October 11, 2019, despite nationwide hiring shortfalls. The Bureau credits this success to better-than-expected productivity—the actual hourly productivity rate for the operation was 19.8 addresses versus the anticipated rate of 15.8 addresses. The total workload included more than 50 million addresses.
GAO observations of in-field address canvassing found that a majority of field staff (listers) generally followed procedures, but there were a number of exceptions. For example, 14 of 59 listers GAO observed did not consistently knock on every door as required to confirm the address and ask about “hidden” housing units. Not knocking on doors or asking about hidden housing units represents missed opportunities to potentially add missing addresses to the Bureau's address file. GAO communicated to Bureau officials that listers were not following procedures, and the Bureau sent out a nationwide reminder for listers to do so. The Bureau credits efficiency gains to new systems for assigning work and a new reporting mechanism for collecting timecards, but it experienced delays in hiring for address canvassing. Though address canvassing productivity was higher than expected, in some parts of the country the operation was at risk of falling behind because of a shortage of listers. The Bureau told GAO that it filled the gap with listers who lived well outside of the area in which they were supposed to work, in some cases from a different state. The Bureau is taking actions to address hiring problems for later operations, including nonresponse follow-up, when the Bureau intends to hire between 320,000 and 500,000 enumerators to follow up with households that did not initially respond to the census. Those actions include increasing wage rates in 73 percent of the counties nationwide.
Background

Counting the nation’s approximately 140 million households is an enormous undertaking requiring such essential logistics as the opening of hundreds of area offices to conduct essential field activities, recruiting and hiring hundreds of thousands of temporary workers to carry out those activities, and developing an approach to training those employees. To help control costs while maintaining accuracy, the Bureau is making significant changes in each of these areas compared to prior decennials.

Area Census Offices

According to Bureau planning documents, the Bureau intends to use technology to efficiently and effectively manage the 2020 Census fieldwork and, as a result, reduce the staffing, infrastructure, and brick-and-mortar footprint required for the 2020 Census. The three main components of the reengineered field operations are increased use of technology, increased management and staff productivity, and a streamlined office and staffing structure. The Bureau’s 2020 Operational Plan states that 2020 Census field operations will rely heavily on automation. For example, the Bureau plans to provide most listers—temporary staff who verify and update addresses and maps—and enumerators—temporary staff who follow up with households that do not respond to the census questionnaire—with the capability to receive work assignments and perform all administrative and data collection tasks directly from a mobile device, allowing them to work remotely. Supervisors will also be able to work remotely from the field and communicate with their staff via these devices, eliminating the need for access to a nearby local office. The Bureau’s 2020 Operational Plan states that these enhanced capabilities will significantly reduce the number of offices required to support 2020 Census fieldwork. In the 2010 Census, the Bureau established 12 RCCs and nearly 500 ACOs. The new design for the 2020 Census field operations includes six RCCs with 248 ACOs.
Those 248 will be split into two waves, with 39 of the offices opening for Wave 1 by March 2019 to support early census operations such as in-field address canvassing, and the remaining 209 opening for Wave 2 by September 2019.

Recruiting and Hiring

Recruiting enough workers to fill the hundreds of thousands of temporary positions needed to conduct the 2020 Census is a tremendous challenge. According to Bureau plans, before hiring begins the Bureau needs to assemble an applicant pool in the millions. For the decennial census, Bureau plans indicate the Bureau will need a large and diverse workforce to ensure the accuracy of its maps and address list, and to follow up by phone or in person with households that do not respond to the questionnaire. Making these efforts even more difficult are external factors beyond the Bureau’s control, such as a low unemployment rate, which can make it harder to recruit. According to Bureau plans, recruiting of potential employees will be conducted throughout the ACOs’ geographic area, based on projected operational workloads and staffing models developed for 2020 Census operations. Selected candidates will be invited to be fingerprinted and submit selected appointment paperwork prior to attending classroom training. The candidates will be sworn in and hired during the first day of training. The ACO staff model is as follows: one ACO Manager, one Lead Census Field Manager, one Administrative Manager, one Recruiting Manager, one Information Technology (IT) Manager, and Office Operations Supervisors, Clerks, and Recruiting Assistants. For data collection, it is: multiple Census Field Managers, Census Field Supervisors, and Enumerators, with specific numbers based on workload and supervisory ratios to be determined (see fig. 1).

Training

According to Bureau plans, the 2010 Census approach to training was predominantly instructor-led training with some hands-on training.
This primarily consisted of instructors standing in front of a room of trainees and reading training materials to them from a prepared script. For 2020, the Bureau has developed training materials that use a blended training approach including instructor-led training, computer-based training, and hands-on training. This approach is intended to maximize trainee learning and on-the-job performance during the 2020 Census. According to the Bureau’s Detailed Operational Plan for the Field Infrastructure and Decennial Logistics Management Operations, it has developed training materials based on the lessons learned from previous censuses, such as the need to provide computer-based training. The plan also states that this innovation to training combines multiple modes of training delivery designed to maximize training outcomes for various types of learning styles: visual, auditory, and hands-on, blending online training methods, instructor-led classroom training, and on-the-job training or role-playing to prepare field staff to effectively fulfill their duties. Blended training is intended to:

Provide a standardization of training, limiting the impact of instructor interpretation.

Allow for easily updateable training materials in the case of errors or operational changes, minimizing the burden of errata materials.

Provide automated assessment tools to enable a more consistent and reliable way to measure learner understanding of concepts.

Provide post-training support through easily accessible online manuals and job aids.

Training materials are designed to maximize self-paced learning. These accompanying training materials are developed to provide the most up-to-date methodologies for recruiting, onboarding, and training-the-trainer to carry out field data collection activities.
The Bureau’s Efforts to Open Area Census Offices Appear on Track, Despite Some Schedule Slippages

For the 2020 Census, the Bureau plans to open 248 ACOs. Similar to the 2010 Census, the total number of ACOs for 2020 was derived from the projected workload for field operations based on the number of enumerators needed for nonresponse follow-up. The Bureau allotted a specific number of ACOs to each of its six regional offices. Regions then developed boundaries for each ACO based on seven mandatory criteria that are described in a program memorandum, including that every state have at least one ACO; that federally recognized American Indian areas and military bases (regardless of county, state, or regional boundaries) be managed by only one ACO; and that ACO areas of responsibility not cross state boundaries (with the exception of Indian reservations and military bases). See figure 2 below for the location of the 248 offices.

Requirements for Leased Office Space for Area Census Offices

In addition to the criteria used to delineate boundaries for its ACOs, the Bureau also had requirements for the ACO leased space. These requirements, for example, included that the ACO have a certain amount of contiguous square footage depending on the ACO type, and that an ACO not be co-located in a building that also houses agencies with law enforcement responsibilities because of privacy and confidentiality concerns. The Bureau also designated an “area of consideration” for each of its ACOs. According to Bureau officials, the area of consideration, which is a smaller geographic range where they would like to house the office, was based on such factors as access to public transit, general centrality within the ACO work boundaries, and proximity to eating establishments. In some cases, the Bureau had to deviate from its requirements for leased space or initial area of consideration.
The decision to deviate from requirements usually arose from a lack of viable options in the real-estate market coupled with the Bureau’s need to meet its time frames. According to RCC staff, any deviations from requirements were presented at weekly staff meetings and then subsequently approved by the Regional Director; in some cases, such as co-location with law enforcement, Bureau Headquarters approval was needed. According to Bureau officials, co-location with law enforcement is sensitive because of concerns that census data may be shared with others. Census data are kept confidential for 72 years. However, Bureau officials told us that either the law enforcement offices were deemed innocuous (for example, the office housed a public defender) or the law enforcement offices operated undercover, whereby no one entering the building would have been aware of their presence. In another case, Bureau officials told us that the Philadelphia region was struggling to find space for its ACO in Frederick, Maryland. When the General Services Administration (GSA) proposed a space in Hagerstown, Maryland, 30 miles away, the Bureau accepted it, though it was outside the initial area of consideration. According to officials at the regional office, the Bureau saved time and money by choosing Hagerstown, a readily available, cost-effective option. The Bureau also had to expand the area of consideration for more than 31 percent (77) of its 248 ACOs. According to Bureau officials, designating an area of consideration was an iterative process based on market availability, and having to expand the area was often necessary to secure space (see table 1). In select cases, the Bureau co-located ACOs in the same building. For example, instead of having one office in North Philadelphia and one in South Philadelphia, Bureau officials in the Philadelphia Regional Census Center agreed to accept space in the same building located within the boundaries of the South Philadelphia ACO.
The Bureau hired staff for each ACO from the original designated areas and kept the two offices completely separated. Bureau officials provided documentation indicating that this compromise came with considerable cost savings. The Bureau also abandoned other planned requirements in a number of cases to secure space, such as access to loading docks, assigned parking, and freight elevators. When we reviewed selected ACO files at the regional offices to determine whether deviations from space requirements and initial areas of consideration were documented, we did not find such documentation in the files. Instead, documentation existed only in staff emails. Files included a checklist of documents required, such as the signed lease and design intent drawings; however, there was no requirement that documentation of deviations from space requirements or initial areas of consideration be maintained. Bureau officials at the regional level said that all procedures for handling waivers and expansions of the area of consideration were driven by the RCCs as well as by informal guidance that was not documented. Standards for Internal Control in the Federal Government calls for documentation and records to be properly managed and maintained. Based on our suggestion that the Bureau develop a procedure for documenting these deviations, Bureau officials sent an email requiring that staff keep documentation (electronic or paper) of deviations in ACO areas of consideration or requirements in the ACO’s lease file folders. In cases where decisions are made via telephone or email, Bureau officials asked staff to write notes and scan emails, and add them to the ACO files. Maintaining this documentation will help ensure the transparency and integrity of Bureau decision-making, and ensure the information is readily available.
The Bureau Is Managing Schedule Slippage in Opening Area Census Offices

The Bureau experienced some early delays when regions were trying to find space and acquire leases. The Bureau attributed some of these delays to the use of GSA’s Automated Advanced Acquisition Program (AAAP) process. This procurement process provides building owners and their authorized representatives with the opportunity to offer general purpose office space for lease to the federal government. The AAAP process accepts bids the first week of each monthly cycle; the remaining three weeks of the month are used to evaluate submitted offers and identify a potential lessor. According to GSA documents, in tight real estate markets the first cycle did not always yield a suitable lessor due to lack of available inventory and the short lease term the Bureau was seeking. Therefore, the Bureau had to wait three weeks until the start of the next cycle to re-open the bidding process. Bureau officials stated that during these three weeks, the Bureau regions would conduct additional market outreach and communicate outreach efforts with GSA to find a lessor. GSA agreed that too much time was elapsing in Wave 1 trying to receive offers without making any changes to the requirements or areas of consideration. To address this issue for Wave 2, the Bureau stated that GSA provided additional training to the Bureau’s regional staff, increased market outreach that included dedicated support from GSA’s national office, and developed a strategy to use all of GSA’s tools, such as using GSA’s contract brokers in regions with the greatest number of Wave 2 ACOs. Bureau regional staff also told us they were able to meet leasing milestones in part because of flexibility in their requirements and in the areas of consideration. As of June 2019, there were signed leases for 247 of 248 offices.
However, during our review, the Bureau reported that it had missed several construction (meaning renovations such as new electrical layouts, heating, ventilation, and air conditioning) and deployment deadlines. According to Bureau documents, nine of 39 Wave 1 offices missed the February 28, 2019, deadline for having furniture and IT equipment, and 49 of 209 Wave 2 offices missed the February 20, 2019, deadline for having construction drawings complete. According to Bureau officials, they are managing each of these delays on an office-by-office basis, and headquarters officials meet weekly with the RCCs to discuss the status of each office. They are also actively communicating with GSA on how best to work with the landlords to meet deadlines. Agency officials also indicated that the schedule deadlines for the later phases of construction allow for more time than may be necessary, allowing them to make up time lost to early delays. For example, at the Concord, New Hampshire, ACO, the Bureau plans to make up lost time in construction with actions such as using a fence to divide two office areas instead of adding a wall, and using a “cage” for badging instead of constructing a separate room inside the space. As of June 3, 2019, 38 of 39 Wave 1 offices were ready for business. Seven of 209 Wave 2 offices were still working to complete construction drawings, a milestone with an original deadline of February 20, 2019. According to Bureau officials, the seven offices without completed construction drawings are being given priority attention by both GSA and the Bureau. We will continue to monitor the opening of ACOs in ongoing work. The Bureau Has Exceeded Its Early Recruiting Goals for the 2020 Census; However, It Faces Some Challenges Going Forward The Bureau Has Exceeded Its Early Recruiting Goals According to Bureau reporting documents, as of June 2019, the Bureau is exceeding its recruiting goals for early operations.
This includes field staff for in-field address canvassing, where census staff verify address and map information for housing units in selected areas of the country, as well as office staff at the 39 Wave 1 ACOs, recruiting assistants, and partnership specialists. The Bureau had a goal of recruiting approximately 205,000 individuals for its 2020 early operations efforts by the end of June 2019, and plans to recruit between 2.4 million and 2.6 million applicants for all field operations. By comparison, in 2010 the Bureau recruited about 3.9 million applicants. As of June 17, 2019, the Bureau had processed job applications and assessments for approximately 428,000 applicants, which represents about 208 percent of its recruiting goal of roughly 205,000. For the 2020 Census, the Bureau plans to hire nearly 400,000 temporary field staff from its applicant pool for two key operations: in-field address canvassing and nonresponse follow-up, where census staff visit households that do not return census forms to collect data in person. In 2010, the Bureau hired approximately 628,000 temporary workers to conduct the address canvassing and nonresponse follow-up field operations. Below is the recruiting and hiring timeline for the in-field address canvassing and nonresponse follow-up operations (see fig. 3). According to Bureau officials, they are recruiting and hiring fewer temporary staff in 2020 compared to 2010 in part because automation has made field operations more efficient. For example, there is less paper to manage and process because daily payroll records and daily field work assignments are electronic. As a result, productivity has increased, and mileage and labor costs have decreased because census field staff do not meet daily with their supervisors, as was the case in 2010. Moreover, the automation of assignment routing to housing units has optimized the time enumerators spend driving to housing units.
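As a quick check, the early recruiting progress can be reproduced with simple arithmetic. All figures below come from the report; only the calculation and variable names are ours:

```python
# Early recruiting progress as of June 17, 2019 (figures from the report).
goal_applicants = 205_000       # approximate early-operations recruiting goal
processed_applicants = 428_000  # applications and assessments processed

# Share of the goal achieved, truncated to a whole percent.
percent_of_goal = processed_applicants * 100 // goal_applicants
print(f"{percent_of_goal} percent of the early recruiting goal")  # 208 percent
```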
During the 2018 End-to-End Test, the Bureau found that productivity for in-field address canvassing exceeded its goal at all three test sites (see table 2). The Bureau attributes these efficiencies to the automation of work assignments. The Bureau Plans to Use Successful Recruitment Strategies from Prior Censuses while Leveraging Technology For the 2020 Census, the Bureau plans to use some of the same strategies it used to recruit and hire temporary workers during the 2010 Census, because those strategies were successful, while also leveraging technology and social media. For example, according to the Bureau, the overarching strategy for hiring enumerators is to hire people who will work in the communities where they live. This strategy provides the Bureau with enumerators who are familiar with the areas where they will be working and who speak the languages of the local community. To recruit staff, recruiting assistants are to work with local partnership staff and use paid advertisements and earned media (e.g., publicity gained through promotional efforts, news reports, etc.). The Bureau also plans to continue to use its recruiting website, http://www.2020census.gov/jobs, which provides information about the various positions, local pay rates, application materials, and job qualifications. Moreover, Bureau officials stated that a diverse multilingual workforce is needed and that the Bureau has tailored its approach to that end. For example, the website includes Spanish-language pages and recruitment materials (see fig. 4). Bureau documentation indicates that, similar to 2010, the Bureau will continue to use waivers and hiring exemptions to enable well-qualified individuals who otherwise might not have applied to work on the 2020 Census, particularly in hard-to-recruit areas.
These waivers allow the Bureau to temporarily hire federal retirees and individuals receiving public assistance without affecting their benefits, and to hire current federal employees without affecting their job status or salary. As of February 27, 2019, the Office of Personnel Management (OPM) had given the Bureau approval to hire 44 re-employed annuitants for the 2020 Census. The Bureau also had dual employment agreements with 28 federal agencies and commissions. For the 2010 Census, the Bureau had these agreements with a total of 81 federal agencies. To obtain waivers for individuals on public assistance, the Bureau is partnering with the Office of Management and Budget and working with the Department of Health and Human Services to obtain waivers for Temporary Assistance for Needy Families and Supplemental Nutrition Assistance Program recipients. The Bureau is also working with tribal governments to acquire similar waivers. In addition to these previously used strategies, the Bureau is planning to leverage technology in its recruiting strategy for 2020. This technology includes the Bureau-developed Response Outreach Area Mapper (ROAM) application, a publicly available online mapping tool that Bureau staff can use to better understand the sociodemographic makeup of their assigned areas. The Bureau plans to use ROAM to identify areas where recruiting could be difficult and to develop recruitment strategies, such as hiring staff with specific language skills. The new technology also includes the MOJO Recruiting Dashboard (also referred to as MOJO Recruit), software that census recruiting personnel use to plan and manage recruiting activities and track recruiting progress. For example, MOJO Recruit includes an interactive mapping feature that lets the Bureau plan recruiting activities and track recruiting status for each census tract. The map draws attention to areas that may be experiencing recruiting problems (see fig. 5).
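The tract color-coding that MOJO Recruit applies, described next, amounts to simple threshold logic. A minimal sketch follows; the 50 and 80 percent cutoffs come from the report, while the function name and sample values are illustrative:

```python
def tract_status(recruited: int, goal: int) -> str:
    """Classify a census tract's recruiting progress as red, yellow, or green."""
    pct = recruited / goal * 100
    if pct < 50:
        return "red"     # less than 50 percent of the recruiting goal
    elif pct < 80:
        return "yellow"  # 50 to 79 percent of the goal
    return "green"       # 80 percent or more of the goal

print(tract_status(40, 100))   # red
print(tract_status(65, 100))   # yellow
print(tract_status(90, 100))   # green
```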
Red indicates areas where the Bureau is less than 50 percent of the way toward meeting its recruiting goal. Yellow indicates areas where the Bureau is 50 to 79 percent of the way toward meeting its goal. Green indicates areas where the Bureau is 80 percent or more of the way toward meeting its goal. Bureau officials also stated that they plan to increase the use of social media platforms such as Facebook, Twitter, and Instagram to promote and advertise 2020 Census job opportunities. For example, the Bureau’s 2020 Census Recruitment Toolkit includes social media guidelines, tips, sample posts, and sample email messages to assist recruiting staff in providing information about 2020 Census job opportunities. It also assists recruiting staff with responding to questions and concerns or directing people to the appropriate location for more information about jobs. The Bureau Took Steps to Improve Its Application and Assessment Process for Potential Hires For the 2020 Census, the Bureau revised its application and assessment process to ease the burden on job applicants and to better assist the Bureau in identifying qualified applicants. Job candidates are to apply and take a skills assessment online, as opposed to attending recruiting sessions in person and taking a written test. The Bureau has also streamlined both the application and assessment process by asking fewer questions and requiring only one assessment for all nonsupervisory positions. According to Bureau officials, the 2020 Census job application should take 10 minutes to complete; by comparison, the 2010 Census job application took 30 minutes. Moreover, for prior censuses, applicants had to complete one of two 45-minute assessments to determine the appropriate skill set for either working in the office or in the field.
For 2020, OPM has approved the Bureau giving one assessment for all five short-term census positions (Recruiting Assistant, Clerk, Office Operations Supervisor, Enumerator, and Census Field Supervisor), thereby eliminating the need to give separate assessments for the office and field positions. Finally, for those considering a supervisory position, a separate supervisory assessment is required. For 2020, this assessment consists of nine questions, compared to 29 questions in 2010. According to Bureau officials, this supervisory assessment should take an additional 10 minutes to complete instead of 1 hour, as it did in 2010. For 2020, the Bureau has also changed the assessment questions it asks applicants from situational-judgment questions to biodata and personality questions. To inform this decision, during the 2018 End-to-End Test the Bureau asked situational-judgment questions in the assessment questionnaire and then administered a set of biodata and personality questions after hiring. The Bureau analyzed both types of questions and concluded that the biodata and personality questions were a better predictor of job success. Bureau officials told us they will be evaluating their new job assessment processes for 2020, including the use of biodata. Despite the Progress Made in Recruiting, the Bureau Still Faces Several Hiring Challenges The Bureau has identified challenges in several areas: (1) delayed background checks, (2) low unemployment, and (3) language barriers. Delayed Background Checks Employment with the Bureau is contingent upon successfully completing a background check. The Bureau found that the process for four positions (recruiting assistants, office operations supervisors, clerks, and partnership specialists) was taking longer than it expected.
These positions require a full background check because the employees will have access to the Bureau’s network, will be issued expensive equipment (e.g., laptops and desktops), and will likely be employed for more than 6 months. For the full background check, applicants must complete two security background forms—Standard Form 85: Questionnaire for Nonsensitive Positions (SF85), submitted through the Electronic Questionnaires for Investigations Processing system (e-QIP), and Optional Form 306: Declaration for Federal Employment (OF306)—and must have their fingerprints processed, whereby the Federal Bureau of Investigation reviews for any prior arrests or convictions. Once completed, the forms are reviewed by Census Investigative Services (CIS), where OPM-trained staff make a favorable, unfavorable, or inconclusive precheck employment determination. According to Bureau officials, certain crimes, such as violent crimes, automatically exclude the applicant from further consideration. If the determination is inconclusive, CIS is to send the form to the Office of Employee Relations to make a favorable or unfavorable determination. All favorable determinations are then sent to OPM for adjudication with a full background check (see fig. 6). According to Bureau officials, in December 2018 they began to encounter a Bureau-wide backlog of pre-employment background checks as they began hiring some 800 recruiting assistants and about 1,970 office staff for the first wave of 39 ACO openings. As of March 21, 2019, Bureau officials told us that 7,092 background clearances were pending Bureau-wide, of which 4,900 were for field positions. In response to the backlog, Bureau leadership said it created a team to determine the cause of the backlog and began holding weekly meetings to prioritize which job positions needed to be cleared first.
Bureau officials stated that the delays arose in part because a significant number of applicants did not fill out the e-QIP form completely or correctly. This, they said, coupled with the increase in required pre-employment background checks, resulted in a growing backlog of clearances that the Bureau did not have the resources to clear. In response, in February 2019 the Bureau began to bring on about 130 temporary staff through a combination of new hires and reassignments. New staff were assigned either to review the forms for accuracy and completeness before submission to the CIS office or to help the CIS offices conduct the pre-employment background checks. Additionally, Bureau officials told us that they meet weekly to reprioritize job positions for the clearance process. The CIS office is to process background checks for all Census employees requiring them, including decennial census field staff, decennial census contractors, and staff needed for nondecennial census surveys at headquarters and in the field. According to Bureau officials, the decennial census takes precedence, and positions within the decennial census are also prioritized. For example, in January 2019 the 800 recruiting assistants were given priority, and now the hiring of 1,501 partnership specialists has been given priority. Bureau officials told us that in December 2018 they were processing 110 background checks a week, and they have set a goal that each CIS analyst process 25 pre-employment packages a week. With 40 analysts on board, the Bureau has the capacity to process 1,000 pre-employment background check packages a week. Bureau officials also told us that they anticipate the clearance process for the enumerator/lister and census field supervisor positions will not experience the same delays because these positions require only fingerprint processing, which is quicker. According to Bureau officials, those results can be made available within 3 hours.
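The throughput figures above support a rough back-of-the-envelope estimate. The inputs come from the report; the calculation is illustrative and ignores new packages arriving each week:

```python
# Background-check capacity vs. backlog, using figures from the report.
analysts = 40            # CIS analysts on board
packages_per_week = 25   # per-analyst weekly processing goal
backlog = 7_092          # clearances pending Bureau-wide as of March 21, 2019

weekly_capacity = analysts * packages_per_week   # 1,000 packages per week
weeks_to_clear = backlog / weekly_capacity       # assumes no new packages arrive

print(weekly_capacity)           # 1000
print(round(weeks_to_clear, 1))  # about 7.1 weeks
```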
Moreover, although the Bureau has taken steps to address the backlog, the bulk of pre-employment background clearances has yet to be processed, and Bureau officials told us that they remain concerned. In the coming months, the Bureau will need to conduct background checks for an additional 3,991 recruiting assistants and about 10,300 office staff for the remaining 209 offices. We will continue to monitor the backlog of background clearances through our ongoing work. Low Unemployment Although the Bureau has exceeded its recruiting goals for early operations, recruiting a sufficient number of applicants for the partnership specialist position has been a challenge. Bureau officials told us that a robust economy and low unemployment rate have resulted in a smaller pool of applicants for that position. For example, as part of its 2020 Census efforts, the Bureau had planned to hire 1,181 partnership specialists by May 1, 2019, and 1,501 by June 30, 2019, to help increase awareness of and participation in the 2020 Census among minority communities and hard-to-reach populations. The Bureau did not meet its goal of hiring 1,181 partnership specialists by May 1, 2019. To hire sufficient partnership staff, Bureau officials told us they have an “open and continuous” posting for partnership specialist positions instead of discrete individual job postings, and they are selecting two candidates from each certification list of qualified applicants. Moreover, Census leadership tracks the weekly progress of partnership specialist hiring. As of July 9, 2019, the Bureau’s latest biweekly reporting indicated that it had hired 813 partnership specialists as of June 22, 2019. Moreover, as of July 10, 2019, Bureau officials told us that another 830 applicants were waiting to have their background checks completed. According to Bureau officials, hiring data are based on payroll dates generated biweekly, while background check data are tracked internally.
Therefore, according to Bureau officials, more current hiring data were not available as of July 10, 2019, to indicate whether the Bureau had met its June 30 hiring goal. Hiring partnership specialists in a timely manner is key to the Bureau’s ability to carry out its planned outreach efforts, especially for hard-to-count communities. In addition, several RCC officials said the pay rate and the low unemployment rate in some ACO locations initially affected their ability to recruit well-qualified staff for office positions. Atlanta RCC officials stated it was challenging to recruit managers in the Gainesville, Florida, area; according to Bureau officials, the pay rate was too low and potential recruits were seeking employment elsewhere. The Bureau increased the managers’ pay rate to be more competitive for the area. Philadelphia RCC officials stated that in rural ACO locations the pay rate is lower and potential recruits would rather travel to metro areas to get the higher pay rates offered there. The Denver RCC reported that low unemployment rates throughout the region make recruiting difficult and that census enumerator jobs are not competitive with the wages offered for many other jobs in the region. The Los Angeles RCC reported having difficulty recruiting local applicants in high-cost areas such as Beverly Hills, the San Francisco Bay Area, and Silicon Valley. Bureau headquarters officials acknowledged that some ACO locations have experienced recruiting challenges but said that the RCCs were ultimately able to fill the office positions. Headquarters officials stated that their pay rates either match or exceed the competitive pay rate in the majority of ACO locations. According to Bureau headquarters officials, regional offices experiencing challenges recruiting staff must demonstrate that the pay rate for a specific ACO is causing the difficulty. The Field Division is responsible for approving or denying the request to adjust pay.
For the 2010 Census, the Bureau reported 124 requests for pay rate adjustments, of which 64 were approved. The Bureau stated that it will continue to monitor how low unemployment affects its ability to recruit and hire. Language Barriers The Bureau reports that the demographic and cultural makeup of the United States continues to increase in complexity, including a growing number of households and individuals whose proficiency in English is limited. Language barriers could make it difficult to enumerate these households, whose members may have varying levels of comfort with government involvement. Several RCC officials also mentioned that language barriers could affect their recruiting efforts. Both the Los Angeles and New York RCCs reported that it is hard to recruit in immigrant communities where residents speak a foreign language or dialect and often have no organizational infrastructure (such as associations of individuals of the same national origin, print news media, or radio). The New York RCC reported challenges in locating applicants who are bilingual in English and other languages such as Chinese, Russian, Arabic, Korean, Creole, Polish, Portuguese, Bengali, Urdu, Punjabi, Gujarati, Hindi, and Hebrew, as well as Yiddish and African languages. The Atlanta RCC reported challenges related to the diverse language needs (e.g., Spanish, Chinese, Vietnamese, Creole, and Portuguese) in south and central Florida. The Chicago RCC reported recruiting outreach challenges in urban areas with higher minority and immigrant populations, including Chicago, Indianapolis, Detroit, Minneapolis/St. Paul, St. Louis, and Kansas City, as well as in rural areas with increasing diversity. Bureau officials responded that later this fall, in preparation for peak operations, they will begin to focus recruiting efforts on foreign language recruiting.
Specifically, partnership and recruiting staff plan to work with partners and advertise jobs locally (at the grassroots level) in places where people with these language skills are likely to look for jobs, to help ensure recruiting goals are met in those areas. The Bureau Is Following Its Planned Training Approach for 2020, but Has Opportunities to Improve Its Ability to Assess Performance For the 2020 Census, the Bureau is following its plans to use a blended training approach combining technology-assisted training with classroom instruction. According to Bureau planning documents, on the first day of in-person classroom training, the Bureau will provide orientation information and issue the devices that trainees will use to conduct census operations. The Bureau plans to use local institutions such as schools, libraries, churches, and fire halls to host training. ACO staff are to coordinate training location setup and device deliveries to training sites, and manage other logistics for large-scale field staff training. After the first day of training, field staff will spend the next 4 to 6 days (depending on the operation) completing at-home training online, using their own personal devices at their own pace. This training will cover, for example, operation-specific skills, use of the data collection device (smartphone or tablet), and general field processes. Trainees who complete the online portion of the training program will return to the classroom to practice what they learned through role-playing, mock interviews, or live cases (for listing operations) facilitated by managers or supervisors. According to Bureau officials, employees will also have access to just-in-time training materials on their devices for use in the field.
The Bureau Took Steps to Manage Some Challenges in Implementing Its Blended Training Approach During the 2018 End-to-End Test The Bureau encountered a number of challenges in implementing and testing its blended training approach but is taking steps to mitigate those challenges. Specifically, during the 2018 End-to-End Test, the Bureau (1) experienced problems with the proper recording of online training scores for census staff, (2) was unable to test online training for one of its operations because the operation was added late, and (3) encountered challenges with census staff not always having access to the internet, which is required to complete the training. The Bureau Is Taking Action to Ensure the Completion of Training Is Accurately Tracked The 2018 End-to-End Test of address canvassing and nonresponse follow-up training revealed some technical challenges in using the Learning Management System. The Learning Management System is the online training system for the 2020 Census; it contains the online training modules and tracks final assessment scores and training certifications. In February 2019, the Department of Commerce (Commerce) Office of the Inspector General (OIG) noted that during the address canvassing operation there were no final assessment scores recorded for 23 trained listers. The Bureau was also unable to provide documentation that another three lister trainees who failed the final assessment had been observed by their supervisor before being permitted to work. Bureau officials said they provided an action plan to the Commerce OIG in April 2019. According to Bureau officials, the action plan has not been finalized because they are incorporating changes based on Commerce OIG comments. In December 2018, we reported that roughly 100 enumerator trainees in the nonresponse follow-up operation were unable to transmit their final test scores because the Learning Management System had an erroneous setting.
According to Bureau officials, this problem delayed the start of unsupervised work for these otherwise-qualified enumerator trainees by an average of 2 days per trainee and resulted in the attrition of some who were able to quickly find other work. Bureau officials reported that they have fixed the system setting. Moreover, according to Bureau officials, they have also developed an alternative means to certify training by incorporating the employee final assessment into the final day of classroom training. The Bureau Was Unable to Test All Online Training, but Has Plans in Place to Conduct Dry Runs of the Untested Training According to Bureau officials, Update Leave online training was not tested during the 2018 End-to-End Test because the operation was added late to the 2020 Census design. Officials told us that the Update Leave operation was approved in May 2017, leaving just 10 months for the development team to create and implement the software and systems to support this field operation for the End-to-End Test. This left no time to develop online training that would be ready for the End-to-End Test in March 2018. Therefore, the Bureau classroom-trained headquarters staff instead of temporary field staff for the operation. According to the Bureau’s risk register, using Bureau headquarters staff did not properly simulate training conditions or staff characteristics, in which new employees have no prior knowledge of census operations. Therefore, the 2018 End-to-End Test did not allow for proper training feedback or the capture of lessons learned with regard to temporary staff or the mode of training. According to Bureau officials, the Bureau plans to conduct scheduled dry runs of training in September 2019 to collect feedback and, if necessary, make changes to Update Leave-specific training.
The Bureau Has Plans to Address Trainee Access to Online Training In June 2018, we reported that some listers had difficulty accessing the internet to take online training for address canvassing. According to the Bureau, in addition to the Bureau-provided laptop, listers also needed a personal home computer or laptop and internet access at home to complete the training. However, while the Bureau reported that listers had access to a personal computer to complete the training, we found that some listers did not have internet access at home and had to find workarounds to access the training. We recommended that the Bureau finalize plans for alternate training locations in areas where internet access is a barrier to completing training. The Bureau took action, and in March 2019 it finalized its plans for identifying alternate training locations in such areas. Specifically, Bureau officials told us that in areas with known low connectivity rates, regional staff will identify sites that trainees can use to complete the online components of the training. In addition, the Bureau provided us with a training module on identifying field staff training locations that emphasizes that training sites need to be located in areas with a good cellular connection and access to the internet. The Bureau Has Generally Met the Criteria for Selected Leading Practices for Training Development, but Could Better Document Measures of Success Effective training can enhance the Bureau’s ability to attract and retain employees with the skills and competencies needed to conduct the 2020 Census. Our Guide for Assessing Strategic Training and Development Efforts in the Federal Government describes components for developing effective training in the federal government. Our strategic training guide identifies four phases of training: planning, design/development, implementation, and evaluation.
We assessed the Bureau’s training approach and found that it generally aligned with selected leading practices. This report covers the design/development and evaluation phases of training. We did not assess the implementation phase because field staff training had not yet begun during our audit, and we did not assess the planning phase because practices in that phase are more applicable to agency-wide rather than program-specific training development. Design/Development The design/development phase involves identifying specific training and development initiatives that the agency will use, along with other strategies, to improve individual and agency performance. According to the guide, well-designed training and development programs are linked to agency goals and to the organizational, occupational, and individual skills and competencies needed for the agency to perform effectively. Moreover, in response to emerging demands and the increasing availability of new technologies, agencies, including the Bureau for the 2020 Census, are faced with the challenge of choosing the optimal mix for the specific purpose and situation from a wide range of mechanisms, including classroom and online learning as well as structured on-the-job experiences (see fig. 7). In developing its training approach, we found the Bureau met all five selected leading practices related to design/development. Specifically, Bureau training aligned with achieving results for the Bureau’s re-engineered field operations: the Bureau has a formal online training program that uses the Learning Management System as a control mechanism to provide and record training results for all 2020 Census field staff who take online training. The Bureau’s training program is integrated with other strategies to improve performance, such as building team relationships.
For example, the training includes modules for supervisors that focus on guiding and motivating employees, communicating effectively, and resolving conduct issues. To ensure that training is properly integrated with device issuance for larger-scale operations, the Bureau plans to stagger training sessions to help ensure the necessary support is available during the first day of training, when census field staff receive their devices. The Bureau also plans to use different training delivery mechanisms. For example, the Bureau will use a blended training approach that includes a mix of computer-based and instructor-led classroom training. The Bureau has measures of effectiveness in its course design. The Bureau relied on an in-house training development team that worked with the data collection operations staff to develop learning objectives. We found that the Bureau has procedures to incorporate feedback. Specifically, the Bureau incorporated lessons learned from previous census tests, such as refining procedures for reassigning work in the field and emphasizing the importance of knocking on doors to find a proxy respondent during the nonresponse follow-up operation. Finally, the Bureau’s training documents contained goals for achieving results for its new training approach. Specifically, the Operational Assessment Study Plan for Recruiting, Onboarding, and Training for the 2018 End-to-End Test contained the following measures of success for training: reduce cost and increase efficiency relative to what was reported in 2010. Evaluation In developing its evaluation phase for training, the Bureau met five of six selected leading practices and partially met one. The evaluation phase involves assessing the extent to which training and development efforts contribute to improved performance and results.
We have previously found that it is increasingly important for agencies to be able to evaluate their training and development programs, and demonstrate how these efforts help develop employees and improve the agencies’ performance (see fig. 8). Overall, we found that the Bureau has a robust evaluation plan for the 2020 Census that gathers data from multiple sources. For example, the Bureau has a plan to evaluate the effectiveness of training for the 2020 Census. Specifically, operational and assessment study plans set priorities for evaluations and cover the methods, timing, and responsibilities for data collection, including assessment questions, metrics, data sources and expected delivery dates, and division responsibilities. The Bureau has an analytical approach to assess training programs. For example, the Field Decennial Data Collection Training Branch has developed three separate training evaluation surveys, which will be administered to field staff through the Learning Management System. The three evaluations provide training feedback after the completion of the online training; after the completion of the classroom training; and near the completion of the operation. According to the Bureau, these assessments will help determine the effectiveness of training. The Bureau incorporated evaluation feedback into planning and design of training. For example, the Bureau held debrief sessions with census workers during the 2018 End-to-End Test, and officials told us they were also incorporating recommendations made by a training vendor. Feedback from the 2018 End-to-End Test is being used to inform training for the 2020 Census. The Bureau incorporates different perspectives in assessing the impact of training. Bureau officials stated that they incorporated feedback from a variety of stakeholders when evaluating the effectiveness of the training during testing, including participant debriefs and evaluations from vendors.
As previously discussed, the Bureau used three different surveys at different points in time to evaluate training, and relied on debrief sessions with census managers and staff in the field. Bureau officials said they considered the training methods of another organization. For example, Bureau officials told us they used training vendors that followed requirements, including e-learning content developed by the Department of Defense. However, we found that the Bureau does not have performance goals or measures for training in its corresponding study plan for the 2020 Census. Specifically, we found that in the Detailed Operational Plan for the Field Infrastructure and Decennial Logistics Management Operations for the 2020 Census, the Bureau had planned to include the following success measures: Process Measures that indicate how well the process works, typically including measures related to completion dates, rates, and productivity rates. Cost Measures that drive the cost of the operation and comparisons of actual costs to planned budgets. Costs can include workload as well as different types of resource costs. Quality Measures, such as the results of the operation, typically including rework rates and error rates. However, according to Bureau officials, they decided not to include the training measures from the study plan because the study plan was intended to provide descriptive information about operations rather than evaluate them. We have previously reported that a fundamental element in an organization’s efforts to manage for results is its ability to set meaningful goals for performance and to measure progress toward those goals. Thus, without specific performance goals and measures for its new blended training approach that considers cost and benefits when compared to 2010, the Bureau will not be able to determine whether its blended training approach reduced costs or increased efficiency.
Moreover, not having goals and measures in place could inhibit the Bureau’s ability to develop meaningful lessons learned from the 2020 Census. Bureau officials agreed and stated they will consider including goals and measures on cost and efficiency in the Bureau’s plans; however, the Bureau has not yet provided us with documentation to reflect the goals and measures it will use to evaluate training, and has no time frame for doing so. Training for the in-field address canvassing operation will begin in July 2019. Having performance goals and measures will help the Bureau assess the impact of its new training approach on cost, quality, and resources expended. Conclusions Successfully carrying out the thousands of activities needed to complete an accurate, cost-effective head count on schedule is an enormous and challenging task. However, for those activities we examined, the Bureau appears to be positioned to carry them out as planned, if implemented properly. While Bureau officials acknowledged there were some early delays when regions were trying to find office space and acquire leases, they said that the deadlines for the later phases of construction allow extra time, giving them a chance to make up lost time. Regarding recruiting and hiring, the Bureau was exceeding its recruiting goals for early operations, but identified challenges in areas such as promptly completing background checks, hiring in a time of low unemployment, and overcoming language barriers. Moreover, although the Bureau has exceeded its recruiting goal for early operations, recruiting a sufficient number of job applicants for the partnership specialist position remains a challenge. The Bureau’s continued response to and management of these challenges will be important as it begins recruiting for its peak operation efforts later this fall. The Bureau has generally followed its training plans for 2020, but has opportunities to improve its ability to evaluate training efforts.
The Bureau notes that the blended training approach is intended to maximize trainee learning and on-the-job performance during the 2020 Census. However, 2020 Census documents do not contain performance goals or measures for determining the cost and benefits of the training when compared to 2010. Revising plans to include goals and measures will better position the Bureau to determine how its blended training approach will impact the cost, quality, and resources expended on the 2020 Census. Recommendation for Executive Action We recommend that the Secretary of Commerce direct the U.S. Census Bureau to revise plans to include goals and measures for assessing the cost and benefits of the Bureau’s new blended training approach. These measures might include, but are not limited to, measures of cost, quality, and resources associated with training when compared to 2010. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of Commerce. In its written comments, reproduced in appendix II, the Department of Commerce agreed with our findings and recommendation and said it would develop an action plan to address our recommendation. The Census Bureau also provided technical comments, which we incorporated. We are sending copies of this report to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Objective, Scope, and Methodology This report assesses the extent to which the Census Bureau (Bureau) is following its plans for space acquisition, recruiting and hiring, and training. For all of our objectives, we reviewed current Bureau planning documents and schedules, and interviewed Bureau officials, including officials at the Bureau’s six regional offices. To assess the Bureau’s progress in opening area census offices (ACO), we obtained and reviewed current Bureau leasing agreement information and construction (meaning renovations such as new electrical layouts, heating, ventilation, and air conditioning) and deployment information. We also gathered information on the General Services Administration’s role in obtaining office space. To determine whether the Bureau is on track, we compared the current status of opening, construction, and deployment of ACOs to the Bureau’s plans, schedules, and timelines, and identified differences for follow-up with Bureau officials. We also reviewed a randomly selected nongeneralizable sample of ACO files at the Philadelphia regional census center to determine whether justification was included when changes to ACO locations occurred. To determine the extent to which the Bureau is following its field hiring and recruiting strategy for the 2020 Census, we reviewed Bureau documentation regarding its strategy for recruiting and hiring temporary field staff for the 2020 Census. We also reviewed output and analysis from relevant Bureau human resources systems/databases, such as MOJO Recruit. We interviewed Bureau officials in both headquarters and the field who are knowledgeable about and responsible for recruiting and hiring temporary field staff to determine the extent to which the Bureau is meeting its recruiting and hiring goals, to describe their perspectives on any challenges facing the 2020 Census, and to understand the Bureau’s actions to mitigate any challenges.
To understand changes from 2010, we compared the 2010 Census recruiting and hiring plans to those of the 2020 Census to determine differences, and interviewed Bureau officials to discuss what drove these changes. Finally, to determine the extent to which the Bureau has followed its plans for training field staff, and whether this training approach is consistent with selected leading practices, we examined relevant documents and interviewed Bureau officials to determine the Bureau’s planned approach for training, lessons learned from prior census tests, the extent to which the Bureau is incorporating lessons learned as a result of its own testing, and what changes to training need to be made before the start of 2020 field operations. Additionally, we interviewed Bureau officials responsible for developing training curriculum to understand how training was developed (e.g., what courses to develop and challenges to using technology). We also reviewed federal guidance and our prior reports, and selected 11 leading practices for training from GAO’s Guide for Assessing Strategic Training and Development Efforts in the Federal Government (GAO-04-546G). Our strategic training guide identifies four phases of the training development process (planning/analysis, design/development, implementation, and evaluation). We assessed the approach against leading practices in two of these phases: design/development and evaluation. We did not assess the implementation phase because field staff training for the 2020 Census had not yet begun during our audit, and we did not assess the planning/analysis phase because practices in that phase are more applicable to agency-wide rather than program-specific training development, and focus on full-time permanent employees rather than temporary employees.
Moreover, within the design/development and evaluation phases, we did not assess all leading practices because some were more applicable to agency-wide rather than program-specific training development, or because we had already evaluated them elsewhere, such as those related to cost. In addition, this report primarily focuses on training for the address canvassing and nonresponse follow-up operations. We then compared the Bureau’s training approach to those leading practices and identified practices being followed and any differences. We conducted this performance audit from August 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Commerce Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments Other key contributors to this report include Lisa Pearson, Assistant Director; Timothy Wexler, Analyst-in-Charge; Mark Abraham; Michael Bechetti; Jessica Blackband; James Cook; Rob Gebhart; Kerstin Hudon; Kayla Robinson; and Cynthia Saunders.
Why GAO Did This Study The decennial census is a crucial, constitutionally mandated activity with immutable deadlines. To meet these statutory deadlines, the Bureau carries out thousands of activities that need to be successfully completed on schedule for an accurate, cost-effective head count. These activities include opening area census offices, recruiting and hiring a large temporary workforce, and training that workforce. GAO was asked to review the Bureau's plans for critical logistical support activities. This report (1) assesses the Bureau's progress in opening area census offices; (2) determines the extent to which the Bureau is following its field hiring and recruiting strategy for the 2020 Census; and (3) determines the extent to which the Bureau has followed its plans for training field staff, and whether this training approach is consistent with selected leading practices. To assess the extent to which the Bureau is following its plans for opening area census offices, recruiting and hiring, and training, GAO reviewed current Bureau planning documents and schedules, and interviewed Bureau officials, including officials at the Bureau's six regional offices. GAO used its guide to training (GAO-04-546G) as criteria for selected leading practices. What GAO Found To help control the cost of the 2020 Census while maintaining accuracy, the Census Bureau (Bureau) is making significant changes in three areas—office space, recruiting and hiring, and training—compared to prior decennials. The Bureau is reducing its use of office space, hiring fewer census field staff, and adopting a blended training approach of instructor-led, computer-based, and hands-on training (see figure). GAO found that the Bureau generally appears to be positioned to carry out these activities as planned, if implemented properly. Opening offices.
While the Bureau experienced early delays when regions were trying to find office space and acquire leases, Bureau officials said that the deadlines for the later phases of renovations will allow them to make up lost time. As of June 2019, there were signed leases for 247 of 248 offices. Recruiting and hiring. As of June 2019, the Bureau was exceeding its recruiting goals for early operations, but identified challenges in areas such as completing background checks and hiring during low unemployment, especially for partnership specialist positions. GAO will continue to monitor these challenges as recruiting and hiring for the census continues. Training. The Bureau generally followed its training plans for 2020 and generally followed selected leading practices for its training approach. However, GAO found that the Bureau does not have goals and performance measures for evaluating its new training approach. Without goals and performance measures, the Bureau will not be able to accurately assess the cost and benefits of its new training approach. What GAO Recommends The Secretary of Commerce should direct the U.S. Census Bureau to revise plans to include goals and performance measures for evaluating its new training approach. The Department of Commerce agreed with GAO's recommendation.
Background The federal government plans to invest over $90 billion for IT in fiscal year 2019. Nevertheless, we have previously reported that investments in federal IT too often resulted in failed projects that incurred cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example: The United States Coast Guard (Coast Guard) decided to terminate its Integrated Health Information System project in 2015. As reported by the agency in August 2017, the Coast Guard spent approximately $60 million over 7 years on this project, which resulted in no equipment or software that could be used for future efforts. The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011. The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds. The Department of Homeland Security’s (DHS) Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion for the program. Our past work has found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from CIOs. For example, we have reported that some CIOs’ roles were limited because they did not have the authority to review and approve the entire agency IT portfolio.
In addition to failures when acquiring IT, our cybersecurity work at federal agencies continues to highlight information security deficiencies. The following examples describe the types of risks we have found at federal agencies. In September 2018, we reported that the Department of Education’s Office of Federal Student Aid exercised minimal oversight of lenders’ protection of student data and lacked assurance that appropriate risk-based safeguards were being effectively implemented, tested, and monitored. In August 2017, we issued a report stating that, since the 2015 data breaches, the Office of Personnel Management (OPM) had taken actions to prevent, mitigate, and respond to data breaches involving sensitive personal and background investigation information. However, we noted that the agency had not fully implemented recommendations that DHS’s United States Computer Emergency Readiness Team made to OPM to help the agency improve its overall security posture and improve its ability to protect its systems and information from security breaches. We reported in July 2017 that information security at the Internal Revenue Service had weaknesses that limited its effectiveness in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer data. An underlying reason for these weaknesses was that the Internal Revenue Service had not effectively implemented elements of its information security program. We reported in August 2016 that the information security of the Food and Drug Administration had significant weaknesses that jeopardized the confidentiality, integrity, and availability of its information systems and industry and public health data. In May 2016, we found that the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veteran Affairs did not always control access to selected high-impact systems, patch known software vulnerabilities, or plan for contingencies.
An underlying reason for these weaknesses was that the agencies had not fully implemented key elements of their information security programs. FITARA Increases CIO Authorities and Responsibilities for Managing IT Congress and the President have enacted various key pieces of reform legislation to address IT management issues. These include the federal IT acquisition reform legislation commonly referred to as the Federal Information Technology Acquisition Reform Act (FITARA). This legislation was intended to improve covered agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. The law includes specific requirements related to seven areas: Agency CIO authority enhancements. CIOs at covered agencies have the authority to, among other things, (1) approve the IT budget requests of their respective agencies and (2) review and approve IT contracts. Federal data center consolidation initiative (FDCCI). Agencies covered by FITARA are required, among other things, to provide a strategy for consolidating and optimizing their data centers and issue quarterly updates on the progress made. Enhanced transparency and improved risk management. The Office of Management and Budget (OMB) and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their investments by level of risk. Portfolio review. Covered agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. Expansion of training and use of IT acquisition cadres. Covered agencies are to update their acquisition human capital plans to support timely and effective IT acquisitions. 
In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres (i.e., multi-functional groups of professionals to acquire and manage complex programs), or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Maximizing the benefit of the Federal Strategic Sourcing Initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing Initiative. In June 2015, OMB released guidance describing how agencies are to implement FITARA. This guidance was intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls to meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security. The guidance identifies a number of actions that agencies are to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials and, thus, to implement the authorities described in the law. For example, agencies are to conduct a self-assessment and submit a plan describing the changes they intend to make to ensure that common baseline responsibilities are implemented. 
In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance directed agencies to develop a data center consolidation and optimization strategic plan that defined the agency’s data center strategy for fiscal years 2016, 2017, and 2018. This strategy was to include, among other things, a statement from the agency CIO indicating whether the agency had complied with all data center reporting requirements in FITARA. Further, the guidance states that OMB is to maintain a public dashboard to display consolidation-related cost savings and optimization performance information for the agencies. In June 2019, OMB issued Memorandum M-19-19, which updated the data center optimization initiative and redefined a data center as a purpose-built, physically separate, dedicated space that meets certain criteria. It also revised the priorities for consolidating and optimizing federal data centers. Specifically, OMB directed agencies to report on spaces designed to be data centers (i.e., tiered data centers) as part of their inventories and to focus efforts on data centers that host business applications, rather than special purpose data centers. According to OMB’s August 2019 quarterly reporting instructions, non-tiered data centers may be flagged for removal in one reporting period and removed in the next unless OMB provides a written denial within a specified time frame. In addition, OMB described criteria for designating certain data centers as mission critical facilities, which would therefore not be taken into consideration when setting new agency-specific closure targets. Those mission critical designations are also assumed to be granted unless OMB specifically overturns them.
Regarding cost savings, OMB’s new memorandum, M-19-19, noted that agency-specific targets would be set in collaboration with each agency and appropriately aligned to that agency’s mission and budget. OMB’s new memorandum also replaced the previous optimization metrics with new measures that focus on reporting the numbers of agencies’ virtualized hosts, underutilized servers, data centers with advanced energy metering, and the percentage of time that data centers were expected to be available to provide services. In contrast to the previous guidance, the new memorandum does not specify government-wide performance targets for the optimization metrics. Instead, OMB worked with agencies to establish agency-specific targets. In addition, the guidance describes how agencies could apply for an optimization performance exemption for data centers where typical optimization activities (consolidation of data collection, storage, and processing to a central location) are technically possible but increase the response time for systems beyond a reasonable limit. Congress Has Undertaken Efforts to Continue Selected FITARA Provisions and Modernize Federal IT Congress has recognized the importance of agencies’ continued implementation of FITARA provisions, and has taken legislative action to extend selected provisions beyond their original dates of expiration. Specifically, Congress and the President enacted laws to: remove the expiration dates for the enhanced transparency and improved risk management provisions, which were set to expire in 2019; remove the expiration date for portfolio review, which was set to expire in 2019; and extend the expiration date for FDCCI from 2018 to 2020. In addition, Congress and the President enacted a law to authorize the availability of funding mechanisms to help further agencies’ efforts to modernize IT. 
The law, known as the Modernizing Government Technology (MGT) Act, authorizes agencies to establish working capital funds for use in transitioning away from legacy IT systems, as well as for addressing evolving threats to information security. The law also creates the Technology Modernization Fund within the Department of the Treasury, from which agencies can “borrow” money to retire and replace legacy systems, as well as to acquire or develop systems. Further, in February 2018, OMB issued guidance for agencies on implementing the MGT Act. The guidance was intended to provide agencies additional information regarding the Technology Modernization Fund, as well as the administration and funding of the related IT working capital funds. Specifically, the guidance encouraged agencies to begin submitting initial project proposals for modernization on February 27, 2018. In addition, in accordance with the MGT Act, the guidance provided details regarding a Technology Modernization Board, which is to consist of (1) the Federal CIO; (2) a senior IT official from the General Services Administration; (3) a member of DHS’s National Protection and Programs Directorate; and (4) four federal employees with technical expertise in IT development, financial management, cybersecurity and privacy, and acquisition who were appointed by the Director of OMB. FISMA Establishes Responsibilities for Agencies to Address Federal Cybersecurity Congress and the President enacted the Federal Information Security Modernization Act of 2014 (FISMA) to improve federal cybersecurity and clarify government-wide responsibilities. The act highlights the increasing sophistication of cybersecurity attacks, promotes the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provides for improved oversight of federal agencies’ information security programs.
To this end, the act clarifies and assigns specific responsibilities to entities such as OMB, DHS, and the federal agencies. Table 1 describes a selection of the OMB, DHS, and agency responsibilities. The Administration Has Undertaken Efforts to Improve and Modernize Federal IT and Strengthen Cybersecurity Beyond the implementation of FITARA, FISMA, and related actions, the administration has also initiated other efforts intended to improve federal IT and the nation’s cybersecurity. Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services. In doing so, the office is to consult with both OMB and the Office of Science and Technology Policy on policies and plans intended to improve government operations and services, improve the quality of life for Americans, and spur job creation. In May 2017, the Administration also established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service Administrator are among the members. In addition, in May 2017, the President signed Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. This executive order outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation’s cyber posture and capabilities against cybersecurity threats. 
Among other things, the order tasked the Director of the American Technology Council to coordinate a report to the President from the Secretary of DHS, the Director of OMB, and the Administrator of the General Services Administration, in consultation with the Secretary of Commerce, regarding the modernization of federal IT. In response, the Report to the President on Federal IT Modernization was issued in December 2017 and outlined the current and envisioned state of federal IT. The report focused on modernization efforts to improve the security posture of federal IT. Further, it recognized that agencies have attempted to modernize systems but have been stymied by a variety of factors, including resource prioritization, ability to procure services quickly, and technical issues. The report provided multiple recommendations intended to address these issues through the modernization and consolidation of networks and the use of shared services to enable future network architectures. Further, in March 2018, the Administration issued the President’s Management Agenda, which laid out a long-term vision for modernizing the federal government. The agenda identified three related drivers of transformation—IT modernization; data, accountability, and transparency; and the workforce of the future—that are intended to push change across the federal government. The Administration also established 14 related Cross-Agency Priority goals, many of which have elements that involve IT. In particular, the Cross-Agency Priority goal on IT modernization stated that modern IT must function as the backbone of how government serves the public in the digital age. 
This goal established three priorities that are to guide the Administration’s efforts to modernize federal IT: (1) enhancing mission effectiveness by improving the quality and efficiency of critical services, including the increased utilization of cloud-based solutions; (2) reducing cybersecurity risks to the federal mission by leveraging current commercial capabilities and implementing cutting edge cybersecurity capabilities; and (3) building a modern IT workforce by recruiting, reskilling, and retaining professionals able to help drive modernization with up-to-date technology. On May 15, 2018, the President signed Executive Order 13833: Enhancing the Effectiveness of Agency Chief Information Officers. Among other things, this executive order was intended to better position agencies to modernize their IT systems, execute IT programs more efficiently, and reduce cybersecurity risks. The order pertains to 22 of the 24 Chief Financial Officers (CFO) Act agencies; the Department of Defense and the Nuclear Regulatory Commission are exempt. For the covered agencies, the executive order strengthened the role of agency CIOs by, among other things, requiring them to report directly to their agency head; serve as their agency head’s primary IT strategic advisor; and have a significant role in all management, governance, and oversight processes related to IT. In addition, one of the cybersecurity requirements directed agencies to ensure that the CIO works closely with an integrated team of senior executives, including those with expertise in IT, security, and privacy, to implement appropriate risk management measures. Agencies Need to Address the IT Acquisitions and Operations High-Risk Area In the March 2019 update to our high-risk series, we reported that agencies still needed to complete significant work related to the management of IT acquisitions and operations. 
As government-wide spending on IT increases every year, the need for appropriate stewardship of that investment increases as well. However, we pointed out that OMB and federal agencies have not made significant progress since 2017 in taking the steps needed to improve how these financial resources are budgeted and realized. To address this issue, we highlighted the need for OMB and federal agencies to further implement the requirements of federal IT acquisition reforms, including the enhancement of CIO authority. Our update to the IT acquisitions and operations high-risk area also stressed that OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, since fiscal year 2010, we have made 1,320 recommendations and one matter for Congressional consideration to address shortcomings in IT acquisitions and operations. As stated in our 2019 high-risk update, OMB and agencies should demonstrate government-wide progress by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations. As of November 2019, OMB and agencies had fully implemented 807 (or 61 percent) of their 1,320 recommendations. In addition, the matter for Congressional consideration had also been implemented. Figure 1 summarizes the progress that OMB and agencies have made in addressing our recommendations compared to the 80 percent target. Overall, federal agencies would be better positioned to realize billions in cost savings and additional management improvements if they address these recommendations, including those aimed at implementing CIO responsibilities, reviewing IT acquisitions, improving data center consolidation, and managing software licenses. 
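The arithmetic behind this progress measure is straightforward; the short sketch below (the helper function is hypothetical, the figures are those reported above) computes the implementation rate and how many additional recommendation closures would be needed to reach the 80 percent target:

```python
def progress_toward_target(implemented, total, target_pct=80):
    """Return the rounded implementation rate (percent) and the number of
    additional closures needed to reach the target percentage."""
    rate = 100 * implemented / total
    # Ceiling division: smallest whole count of closures meeting the target.
    needed = max(0, -(-target_pct * total // 100) - implemented)
    return round(rate), int(needed)

# Figures reported as of November 2019: 807 of 1,320 recommendations closed.
rate, remaining = progress_toward_target(807, 1320)
print(rate, remaining)  # prints "61 249": 61 percent closed, 249 more needed for 80 percent
```

Applying the same calculation to the cybersecurity figures cited later (2,511 of 3,323 recommendations implemented) yields the reported 76 percent.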
Agencies Need to Address Shortcomings and Challenges in Implementing CIO Responsibilities In all, various laws, such as FITARA and related guidance, assign 35 IT management responsibilities to CIOs in six key areas. These areas are: leadership and accountability, budgeting, information security, investment management, workforce, and strategic planning. In August 2018, we reported that none of the 24 agencies we reviewed had policies that fully addressed the role of their CIO, as called for by federal laws and guidance. In this regard, a majority of the agencies had fully or substantially addressed the role of their CIOs for the area of leadership and accountability. In addition, a majority of the agencies had substantially or partially addressed the role of their CIOs for two areas: information security and IT budgeting. However, most agencies had partially or minimally addressed the role of their CIOs for two areas: investment management and strategic planning. Further, the majority of the agencies minimally addressed or did not address the role of their CIOs for the remaining area: IT workforce. Figure 2 depicts the extent to which the 24 agencies had policies that addressed the role of their CIOs for the six areas. Notwithstanding the shortfalls in agencies’ policies addressing the roles of their CIOs, most agency officials stated that their CIOs are implementing the responsibilities even if the agencies do not have policies requiring implementation. Nevertheless, in their responses to our survey, the CIOs of the 24 selected agencies acknowledged that they were not always very effective in implementing the six IT management areas. Specifically, at least 10 of the CIOs indicated that they were less than very effective for each of the six areas of responsibility. We believe that until agencies fully address the role of CIOs in their policies, they will be limited in addressing longstanding IT management challenges. 
Figure 3 depicts the extent to which the CIOs reported their effectiveness in implementing the six areas of responsibility. Beyond the actions of the agencies, however, shortcomings in agencies’ policies were also partially attributable to two weaknesses in OMB’s guidance. First, the guidance did not comprehensively address all CIO responsibilities, such as those related to assessing the extent to which personnel meet IT management knowledge and skill requirements and ensuring that personnel are held accountable for complying with the information security program. Correspondingly, the majority of the agencies’ policies did not fully address nearly all of the responsibilities that were not included in OMB’s guidance. Second, OMB’s guidance did not ensure that CIOs had a significant role in (1) IT planning, programming, and budgeting decisions; and (2) execution decisions and the management, governance, and oversight processes related to IT, as required by federal law and guidance. In the absence of comprehensive guidance, CIOs would not be positioned to effectively acquire, maintain, and secure their IT systems. In response to the survey conducted for our August 2018 report, the 24 agency CIOs also identified a number of factors that enabled and challenged their ability to effectively manage IT. Specifically, most agency CIOs cited five factors as being enablers to effectively carrying out their responsibilities: (1) NIST guidance, (2) the CIO’s position within the agency hierarchy, (3) OMB guidance, (4) coordination with the Chief Acquisition Officer (CAO), and (5) legal authority. Further, the CIOs cited three factors as major challenges to their ability to effectively carry out responsibilities: (1) processes for hiring, recruiting, and retaining IT personnel; (2) financial resources; and (3) the availability of personnel/staff resources. 
As shown in figure 4, each of the five enabling factors, and each of the three factors cited as major challenges, was identified by at least half of the 24 CIOs. Although OMB issued guidance aimed at addressing the three factors identified by a majority of the CIOs as major challenges, the guidance did not fully do so. Further, regarding the financial resources challenge, OMB recently required agencies to provide data on CIO authority over IT spending; however, its guidance did not provide a complete definition of that authority. In the absence of such guidance, agencies created varying definitions of CIO authority. Until OMB updates its guidance to include a complete definition of the authority that CIOs are to have over IT spending, it will be difficult for OMB to identify any deficiencies in this area and to help agencies make any needed improvements. In order to address challenges in implementing CIO responsibilities, we made three recommendations to OMB and one recommendation to each of the 24 selected federal agencies to fully address the role of the CIO across the six IT management areas. Most agencies agreed with or had no comments on the recommendations. However, as of November 2019, none of the 27 recommendations had been implemented. We will continue to monitor the implementation of these recommendations. Agencies Need to Ensure That IT Acquisitions Are Reviewed and Approved by CIOs FITARA includes a provision to enhance covered agency CIOs’ authority through, among other things, requiring agency heads to ensure that CIOs review and approve IT contracts. OMB’s FITARA implementation guidance expanded upon this aspect of the legislation in a number of ways.
Specifically, according to the guidance: CIOs may review and approve IT acquisition strategies and plans, rather than individual IT contracts; CIOs can designate other agency officials to act as their representatives, but the CIOs must retain accountability; CAOs are responsible for ensuring that all IT contract actions are consistent with CIO-approved acquisition strategies and plans; and CAOs are to indicate to the CIOs when acquisition strategies and acquisition plans include IT. In January 2018, we reported that most of the CIOs at 22 selected agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. For instance, most of the 22 agencies did not identify all of their IT contracts. In this regard, the agencies identified 78,249 IT-related contracts, to which they obligated $14.7 billion in fiscal year 2016. However, we identified 31,493 additional IT contracts with combined obligations totaling $4.5 billion, raising the total amount obligated to IT contracts by these agencies in fiscal year 2016 to at least $19.2 billion. Figure 5 reflects the obligations that the 22 selected agencies reported to us relative to the obligations we identified. The percentage of additional IT contract obligations we identified varied among the selected agencies. For example, the Department of State did not identify 1 percent of its IT contract obligations. Conversely, eight agencies did not identify over 40 percent of their IT contract obligations. Many of the selected agencies that did not identify these IT contract obligations also did not follow OMB guidance. Specifically, 14 of the 22 agencies did not involve the acquisition office in their process to identify IT acquisitions for CIO review, as required by OMB. In addition, seven agencies did not establish guidance to aid officials in recognizing IT.
We concluded that, until these agencies involve the acquisition office in their IT acquisition identification processes and establish supporting guidance, they cannot ensure that they will identify all such acquisitions. Without proper identification of IT acquisitions, these agencies and their CIOs cannot effectively provide oversight of the acquisitions. In addition to not identifying all IT contracts, 14 of the 22 selected agencies did not fully satisfy OMB’s requirement that the CIO review and approve IT acquisition plans or strategies. Further, only 11 of 96 randomly selected IT contracts at 10 of the 22 agencies were CIO-reviewed and approved as required by OMB’s guidance. The 85 contracts that were not reviewed had a total possible value of approximately $23.8 billion. Until agencies ensure that CIOs are able to review and approve all IT acquisitions, CIOs will continue to have limited visibility and input into their agencies’ planned IT expenditures and will not be able to effectively use the increased authority that FITARA’s contract approval provision is intended to provide. Further, agencies will likely miss an opportunity to strengthen their CIOs’ authority and the oversight of acquisitions. As a result, agencies may award IT contracts that are duplicative, wasteful, or poorly conceived. As a result of these findings, we made 39 recommendations in our January 2018 report. Among these, we recommended that agencies ensure that their acquisition offices are involved in identifying IT acquisitions and issuing related guidance, and that IT acquisitions are reviewed in accordance with OMB guidance. OMB and the majority of the agencies generally agreed with or did not comment on the recommendations. As of November 2019, 23 of the 39 recommendations had been implemented. We will continue to monitor the implementation of the remaining recommendations. 
Agencies Have Made Significant Progress in Consolidating Data Centers, but Need to Take Action to Achieve Planned Cost Savings Data center consolidation efforts are key to implementing FITARA. Specifically, OMB established the Federal Data Center Consolidation Initiative (FDCCI) in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities. The enactment of FITARA in 2014 codified and expanded the initiative. In addition, OMB’s August 2016 memorandum that established the Data Center Optimization Initiative (DCOI) included guidance on how to implement the data center consolidation and optimization provisions of FITARA. Among other things, the guidance required agencies to consolidate inefficient infrastructure, optimize existing facilities, improve their security posture, and achieve cost savings. According to the 24 agencies covered by the initiative, data center consolidation and optimization efforts had resulted in approximately $4.7 billion in cost savings through August 2018. Even so, additional work remains to fully carry out the initiative. Specifically, in a series of reports that we issued from July 2011 through April 2019, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans, data center optimization, and OMB’s tracking and reporting on related cost savings. In April 2019, we reported that agencies continued to report mixed progress toward achieving OMB’s goals for closing data centers and realizing the associated savings by September 2018. Specifically, as of August 2018, over half of the agencies reported that they had met, or planned to meet, all of their OMB-assigned closure goals for tiered data centers by the deadline. Six agencies reported that they did not plan to meet their goals for tiered data centers.
In addition, as of August 2018, 11 agencies reported that they had already met the goal for closing 60 percent of their non-tiered centers, three agencies reported that they planned to meet the goal by the end of fiscal year 2018, and nine agencies reported that they did not plan to meet the goal by the end of fiscal year 2018. In all, the 24 agencies reported a total of 6,250 data center closures as of August 2018, which represented about half of the total reported number of federal data centers. In addition, the agencies reported 1,009 planned closures by the end of fiscal year 2018, with an additional 191 closures planned through fiscal year 2023, for a total of 1,200 further closures. Further, in August 2018, 22 agencies reported that they had achieved $1.94 billion in cost savings for fiscal years 2016 through 2018, while two agencies reported that they had not achieved any savings. In addition to that amount, 21 agencies identified an additional $0.42 billion in planned savings through fiscal year 2018—for a total of $2.36 billion in planned cost savings from fiscal years 2016 through 2018. Nevertheless, this total is about $0.37 billion less than OMB’s goal of $2.7 billion for overall DCOI savings. From July 2011 through April 2019, we made a total of 196 recommendations to OMB and 24 agencies to improve the execution and oversight of the initiative. Most agencies and OMB agreed with our recommendations or had no comments. As of November 2019, 121 of these 196 recommendations had been implemented. We also have ongoing work to review and verify the quality and completeness of federal data center inventories and strategies for consolidation submitted by the agencies covered by the law. We expect to issue the report related to this work in early 2020. 
Agencies Have Improved Management of Software Licenses In our 2015 high-risk report’s discussion of IT acquisitions and operations, we identified the management of software licenses as a focus area, in part because of the potential for cost savings. Federal agencies engage in thousands of software licensing agreements annually. The objective of software license management is to manage, control, and protect an organization’s software assets. Effective management of these licenses can help avoid purchasing too many licenses, which can result in unused software, as well as too few licenses, which can result in noncompliance with license terms and cause the imposition of additional fees. As part of its PortfolioStat initiative, OMB has developed a policy that addresses software licenses. This policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses. In May 2014, we reported on federal agencies’ management of software licenses and determined that better management was needed to achieve significant savings government-wide. Of the 24 selected agencies we reviewed, only two had comprehensive policies that included the establishment of clear roles and central oversight authority for managing enterprise software license agreements, among other things. Of the remaining 22 agencies, 18 had policies that were not comprehensive, and four had not developed any policies. Further, we found that only two of the 24 selected agencies had established comprehensive software license inventories, a leading practice that would help them to adequately manage their software licenses. The inadequate implementation of this and other leading practices in software license management was partially due to weaknesses in agencies’ policies. 
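A comprehensive license inventory of the kind this leading practice calls for lends itself to simple automated analysis. The sketch below is a minimal, hypothetical illustration (the product names, counts, and helper function are invented for this example): it compares purchased license counts against observed installations to flag potential over-purchases, the unused-software scenario described above.

```python
from collections import Counter

def unused_licenses(purchased: dict, installs: list) -> dict:
    """Flag products where the number of purchased licenses exceeds the
    number of observed installations (candidates for cost reduction)."""
    in_use = Counter(installs)
    return {product: count - in_use.get(product, 0)
            for product, count in purchased.items()
            if count > in_use.get(product, 0)}

# Hypothetical inventory data for one agency.
purchased = {"OfficeSuite": 500, "CADTool": 50, "DBMS": 20}
installs = ["OfficeSuite"] * 420 + ["CADTool"] * 50 + ["DBMS"] * 25
print(unused_licenses(purchased, installs))  # prints "{'OfficeSuite': 80}"
```

Note that the converse condition, more installations than purchased licenses (as with "DBMS" above), signals the noncompliance risk also described in this section and would warrant a separate check.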
As a result, we concluded that agencies’ oversight of software license spending was limited or lacking, potentially leading to missed savings. However, the potential savings could be significant considering that, in fiscal year 2012, one major federal agency reported saving approximately $181 million by consolidating its enterprise license agreements, even when its oversight process was ad hoc. Accordingly, we recommended that OMB issue a directive to help guide agencies in managing software licenses. We also made 135 recommendations to the 24 agencies to improve their policies and practices for managing licenses. Among other things, we recommended that the agencies (1) regularly track and maintain a comprehensive inventory of software licenses and (2) analyze the inventory to identify opportunities to reduce costs and better inform investment decision making. Most agencies generally agreed with the recommendations or had no comments. As of November 2019, all but 19 of the 135 recommendations had been implemented. In particular, for our recommendations on maintaining and analyzing a comprehensive inventory of software licenses, agencies had fully implemented 42 out of 48 recommendations. Table 2 reflects the extent to which the 24 agencies implemented the recommendations in these two areas. Agencies Need to Address Shortcomings in Cybersecurity Area Safeguarding federal computer systems has been a longstanding concern. This year marks the 22nd anniversary of GAO’s first designation of information security as a government-wide high-risk area in 1997. We expanded this high-risk area to include safeguarding the systems supporting our nation’s critical infrastructure in 2003, protecting the privacy of personally identifiable information in 2015, and establishing a comprehensive cybersecurity strategy and performing effective oversight in 2018.
Most recently, we identified federal information security as a government-wide high-risk area in our March 2019 high-risk update. As we have previously noted, in order to strengthen the federal government’s cybersecurity posture, agencies should fully implement the information security programs required by FISMA. In this regard, FISMA provides a framework for ensuring the effectiveness of information security controls for federal information resources. The law requires each agency to develop, document, and implement an agency-wide information security program. Such a program should include risk assessments; the development and implementation of policies and procedures to cost-effectively reduce risks; plans for providing adequate information security for networks, facilities, and systems; security awareness and specialized training; the testing and evaluation of the effectiveness of controls; the planning, implementation, evaluation, and documentation of remedial actions to address information security deficiencies; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations. Since fiscal year 2010, we have made 3,323 recommendations to agencies aimed at addressing the four cybersecurity challenges. These recommendations have identified actions for agencies to take to strengthen technical security controls over their computer networks and systems. They also have included recommendations for agencies to fully implement aspects of their information security programs, as mandated by FISMA. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part, because many of these recommendations have not been implemented. Of the 3,323 recommendations made since 2010, 2,511 (or 76 percent) had been implemented as of November 2019, leaving 812 recommendations (or 24 percent) not implemented.
Agencies’ Inspectors General Are to Identify Information Security Program Weaknesses In order to determine the effectiveness of the agencies’ information security programs and practices, FISMA requires federal agencies’ inspectors general to conduct annual independent evaluations. The agencies are to report the results of these evaluations to OMB, and OMB is to summarize the results in annual reports to Congress. In these evaluations, the inspectors general are to frame the scope of their analyses, identify key findings, and detail recommendations to address the findings. The evaluations also are to capture maturity model ratings for their respective agencies. Toward this end, in fiscal year 2017, the inspector general community, in partnership with OMB and DHS, finalized a 3-year effort to create a maturity model for FISMA metrics. The maturity model aligns with the five function areas in the NIST Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework): identify, protect, detect, respond, and recover. This alignment is intended to help promote consistent and comparable metrics and criteria and provide agencies with a meaningful independent assessment of their information security programs. The maturity model is designed to summarize the status of agencies’ information security programs on a five-level capability maturity scale. The five maturity levels are defined as follows: Level 1 (Ad hoc): Policies, procedures, and strategy are not formalized; activities are performed in an ad-hoc, reactive manner. Level 2 (Defined): Policies, procedures, and strategy are formalized and documented but not consistently implemented. Level 3 (Consistently Implemented): Policies, procedures, and strategy are consistently implemented, but quantitative and qualitative effectiveness measures are lacking. 
Level 4 (Managed and Measurable): Quantitative and qualitative measures on the effectiveness of policies, procedures, and strategy are collected across the organizations and used to assess them and make necessary changes. Level 5 (Optimized): Policies, procedures, and strategy are fully institutionalized, repeatable, self-generating, consistently implemented and regularly updated based on a changing threat and technology landscape and business/mission needs. According to this maturity model, Level 4 (managed and measurable) represents an effective level of security. Therefore, if an inspector general rates an agency’s information security program at Level 4 or Level 5, then that agency is considered to have an effective information security program. For fiscal year 2018, most of the 23 civilian CFO Act agencies’ inspectors general reported that their agencies were at Level 2 (defined) for the detect function; Level 3 (consistently implemented) for the identify, protect, and recover functions; and at Level 4 (managed and measurable) for the respond function. Table 3 shows the individual maturity ratings for each covered agency. OMB Requires Agencies to Meet Targets for Cybersecurity Metrics In its efforts toward strengthening the federal government’s cybersecurity, OMB also requires agencies to submit related cybersecurity metrics as part of its Cross-Agency Priority goals. In particular, OMB developed a goal so that federal agencies will be able to build and maintain more modern, secure, and resilient IT. A key part of this goal is to reduce cybersecurity risks to the federal mission through three strategies: manage asset security, protect networks and data, and limit personnel access. The key targets supporting each of these strategies correspond to areas within the FISMA metrics. Table 4 outlines the strategies, their associated targets, and the 23 civilian CFO Act agencies’ progress in meeting those targets, as of June 2019. 
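The effectiveness threshold in the maturity model described above reduces to a simple classification rule: a program rated at Level 4 or Level 5 is considered effective. The following sketch encodes that rule (the agency names and ratings are hypothetical, used only for illustration):

```python
# Five-level IG maturity scale, as described above.
MATURITY_LEVELS = {
    1: "Ad hoc",
    2: "Defined",
    3: "Consistently Implemented",
    4: "Managed and Measurable",
    5: "Optimized",
}

def program_is_effective(level: int) -> bool:
    """Per the IG maturity model, Level 4 or 5 denotes an effective
    information security program."""
    return level >= 4

# Hypothetical inspector-general ratings keyed by agency.
ratings = {"Agency A": 2, "Agency B": 4, "Agency C": 3}
effective = sorted(a for a, lvl in ratings.items() if program_is_effective(lvl))
print(effective)  # prints "['Agency B']"
```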
In conclusion, by addressing the high-risk areas on improving the management of IT acquisitions and operations and ensuring the cybersecurity of the nation, the government has the opportunity to both save billions of dollars and advance the efficiency and effectiveness of government services. Most agencies have taken steps to execute key IT management and cybersecurity requirements and initiatives, including implementing CIO responsibilities, requiring CIO reviews of IT acquisitions, realizing data center consolidation cost savings, managing software assets, and complying with FISMA requirements. The agencies have also continued to address the recommendations that we have made over the past several years. Nevertheless, further efforts by OMB and federal agencies to implement our previous recommendations would better position them to improve the management and security of federal IT. To help ensure that these efforts succeed, we will continue to monitor agencies’ efforts toward implementing the recommendations. Chairman Connolly, Ranking Member Meadows, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Carol C. Harris, Director of Information Technology Acquisition Management Issues, at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kevin Walsh (Assistant Director), Jessica Waselkow (Assistant Director), Chris Businsky, and Rebecca Eyler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The federal government plans to spend over $90 billion in fiscal year 2019 on IT. Even so, IT investments have too often failed or contributed little to mission-related outcomes. Further, increasingly sophisticated threats and frequent cyber incidents underscore the need for effective information security. To focus attention on these concerns, GAO has included both the management of IT acquisitions and operations and cybersecurity on its high-risk list. For this statement, GAO summarized its key related reports and assessed agencies' progress in implementing the reports' recommendations. Specifically, GAO reviewed the implementation of recommendations on (1) CIO responsibilities, (2) IT acquisition review requirements, (3) data center consolidation, (4) the management of software licenses, and (5) cybersecurity. What GAO Found Federal agencies and the Office of Management and Budget (OMB) have taken steps to improve the management of information technology (IT) acquisitions and operations and ensure the nation's cybersecurity through a series of initiatives. As of November 2019, federal agencies had fully implemented 61 percent of the 1,320 IT management-related recommendations that GAO has made to them since fiscal year 2010. Likewise, agencies had implemented 76 percent of the 3,323 security-related recommendations that GAO has made since fiscal year 2010. Significant actions remain to be completed to build on this progress. Chief Information Officer (CIO) responsibilities . Laws such as the Federal Information Technology Acquisition Reform Act (FITARA) and related guidance assign 35 key responsibilities to agency CIOs to help address longstanding IT management challenges. In August 2018, GAO reported that none of the 24 selected agencies had established policies that fully addressed the role of their CIO. 
GAO recommended that OMB and the 24 agencies take actions to improve the effectiveness of CIOs' implementation of their responsibilities. Although most agencies agreed or did not comment, none of the 27 recommendations have yet been implemented. CIO IT acquisition review . According to FITARA, covered agencies' CIOs are required to review and approve IT contracts. Nevertheless, in January 2018, GAO reported that most of the CIOs at 22 covered agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. Consequently, GAO made 39 recommendations to improve CIO oversight for these acquisitions. Since then, 23 of the recommendations have been implemented. Consolidating data centers . OMB launched an initiative in 2010 to reduce data centers. In August 2018, 22 agencies reported that they had achieved $1.94 billion in cost savings for fiscal years 2016 through 2018, while two agencies reported that they had not achieved any savings. GAO has made 196 recommendations to OMB and agencies to improve the reporting of related cost savings and to achieve optimization targets. As of November 2019, 121 of the recommendations have been implemented. Managing software licenses . Effective management of software licenses can help avoid purchasing too many licenses that result in unused software. In May 2014, GAO reported that better management of licenses was needed to achieve savings, and made 135 recommendations to improve such management. As of November 2019, all but 19 of the recommendations had been implemented. Ensuring the nation's cybersecurity . While the government has acted to protect federal information systems, GAO has consistently identified shortcomings in the federal government's approach to cybersecurity. The 3,323 recommendations that GAO made to agencies since 2010 have been aimed at addressing cybersecurity challenges. 
These recommendations have identified actions for agencies to take to fully implement aspects of their information security programs and strengthen technical security controls over their computer networks and systems. As of November 2019, 76 percent of the recommendations had been implemented.

What GAO Recommends

Since fiscal year 2010, GAO has made about 1,300 recommendations to OMB and agencies to address shortcomings in IT acquisitions and operations, as well as approximately 3,300 recommendations to agencies to improve the security of federal systems. These recommendations addressed, among other things, implementation of CIO responsibilities, oversight of the data center consolidation initiative, management of software licenses, and the efficacy of security programs. Implementation of these recommendations is essential to strengthening federal agencies' acquisitions, operations, and cybersecurity efforts.
Background

History of NASA Human Spaceflight Plans

NASA's human spaceflight plans have changed focus three times over the last 15 years, shifting back and forth between a human lunar landing intended to inform the longer-term goal of human exploration of Mars and a mission that would send astronauts to an asteroid boulder in orbit around the Moon without a lunar landing. Figure 1 highlights key events in NASA's human spaceflight plans from 2005 to 2019. Over the past two decades, we have found that NASA has faced challenges developing systems capable of transporting humans to space. These include development efforts under NASA's prior human spaceflight program—the Constellation program—which was canceled in the face of acquisition problems and funding-related issues. More recently, we have found that NASA has struggled to complete its current human spaceflight programs—Orion, SLS, and Exploration Ground Systems—within their established cost and schedule baselines. Establishing a sound business case to ensure that resources align with requirements includes following best practices for product development and creating cost estimates and schedules. NASA's prior human spaceflight programs highlight the challenges created when programs do not establish a sound business case. For example: In 2009, we found that NASA had not developed a solid business case—including firm requirements, mature technologies, a realistic cost estimate, and sufficient funding and time—needed to justify moving the Constellation program forward into the implementation phase. We found that the program had not developed a solid business case because, among other reasons, the program had a poorly phased funding plan that increased the risk of funding shortfalls. As a result, the program did not complete planned work to support schedules and milestones, and ultimately the program was canceled.
Over the past 5 years, we have issued several reports assessing the progress of NASA's Orion, SLS, and Exploration Ground Systems programs relative to their agency baseline commitments and on technical challenges facing the programs. In 2018, we found that all three programs had been at risk of cost and schedule growth since NASA approved their baselines, and have since experienced cost growth and schedule delays. This was in part because NASA did not follow best practices for establishing cost and schedule baselines for these programs, including not updating cost and schedule analyses to reflect new risks. As a result, NASA overpromised what it could deliver from a cost and schedule perspective. Further, in 2019 we found that NASA should enhance contract management and oversight to improve SLS and Orion program outcomes. NASA's past approach in this area has left it ill-positioned to identify early warning signs of impending schedule delays and cost growth or to reap the benefits of competition. We have made 20 recommendations in prior reports to strengthen NASA's acquisition management of SLS, Orion, and Exploration Ground Systems. NASA generally agreed with our recommendations and has implemented eight of them. Further action is needed to fully implement the remaining recommendations. For example, in 2019, we recommended that NASA direct the SLS and Orion programs to reevaluate their strategies for incentivizing contractors and determine whether they could more effectively incentivize contractors to achieve the outcomes intended as part of ongoing and planned contract negotiations. NASA agreed with the intent of this recommendation and stated that the SLS and Orion program offices will reevaluate their strategies for incentivizing contract performance as part of contracting activities, including contract restructures, contract baseline adjustments, and new contract actions.
We will continue to follow up on the actions the agency is taking to address this recommendation.

NASA Acquisition Life Cycle

NASA initiates space flight programs and projects to accomplish its scientific or exploration goals. A NASA program has a dedicated funding profile and defined management structure, and may or may not include several projects. Projects are specific investments under a program that have defined requirements, life-cycle costs, schedules, and their own management structures. NASA uses the term "tightly coupled program" to refer to a program that is composed of multiple projects that work together to complete the program's mission. NASA policy states that programs and projects shall follow their appropriate life cycle. The life cycle for programs and projects consists of two phases:

1. formulation, which takes a program or project from concept to preliminary design; and

2. implementation, which includes building, launching, and operating the system, among other activities.

Senior NASA officials must approve programs and projects at milestone reviews, known as key decision points (KDP), before they can enter each new phase. The life cycle for a single program closely resembles the life cycle for a spaceflight project. For example, the SLS program follows the project acquisition life cycle because it is not composed of multiple projects. Figure 2 depicts a notional NASA life cycle for a tightly coupled program and for a project. The formulation phase culminates in a review at KDP I for tightly coupled programs and KDP C for projects. This decision point is also known as confirmation, at which cost and schedule baselines are established and documented in a decision memorandum. The decision memorandum outlines the management agreement and the agency baseline commitment. The management agreement can be viewed as a contract between the agency and the program or project manager.
The program or project manager has the authority to manage the program or project within the parameters outlined in the agreement. The agency baseline commitment includes the cost and schedule baselines against which the agency's performance on a program or project may be measured. To inform the management agreement and the agency baseline commitment, each program and project with a life-cycle cost estimated to be greater than $250 million must also develop a joint cost and schedule confidence level (JCL). A JCL produces a point-in-time estimate that, among other things, includes all cost and schedule elements from the start of formulation through launch, incorporates and quantifies known risks, assesses the effects of cost and schedule to date on the estimate, and addresses available annual resources. The results of a JCL indicate the probability that a program or project will meet its cost and schedule targets.

Key Elements of NASA's Planned Return to the Moon

NASA has initiated multiple programs to help the agency achieve its Artemis III mission and longer-term lunar exploration goals. These programs include a platform in lunar orbit, a landing system to put humans on the surface of the Moon, and robotic lunar landing services.

Gateway. The Gateway program aims to build a sustainable platform in lunar orbit to support human lunar exploration and scientific experiments by NASA and its commercial and international partners. NASA is planning for Gateway to maneuver to different orbits around the Moon, which will allow access to a variety of locations on the lunar surface. The Gateway program is the first program NASA has designated as a tightly coupled program. The program is composed of multiple projects, which are responsible for executing portions of the Gateway mission. Individual teams manage the projects, and each project will have its own cost estimate and launch readiness date.
Gateway program management is responsible for ensuring the overall integration of all the individual projects. See figure 3 for a description of the three Gateway projects that NASA has initiated. In addition to Gateway, NASA initiated several other programs:

Human Landing System. The Human Landing System, or lunar landers, is to provide crew transportation from Gateway to the lunar surface and back and to demonstrate capabilities required for deep space missions. NASA anticipates that there will be three stages to the landers—a descent, ascent, and transfer stage—but the number of stages may vary depending on the contractors that NASA selects to develop the system. NASA is planning for the descent stage to serve as a crew and cargo lander; the ascent stage to bring crew back to Gateway from the lunar surface; and the transfer stage to move the ascent and descent stages from Gateway orbit to a lower lunar orbit for the landing.

Space Suits. NASA plans to update the design of its space suits, which supply life support—including oxygen and water, among other things—to astronauts. The updates include additional protection from extreme temperatures and hazards in the lunar environment, such as dust; increased mobility; and extended service life for lunar surface operations.

Commercial Lunar Payload Services. Under Commercial Lunar Payload Services, commercial partners provide end-to-end payload delivery services to the surface of the Moon. The services include integrating payloads onto a robotic lander, launching the lander, and operating the lander and payloads. The payloads include science instruments and technology demonstrations that will characterize the lunar environment and inform the development of future landers and other exploration systems needed for humans to return to the lunar surface.

Volatiles Investigating Polar Exploration Rover.
NASA plans to develop a robotic lunar rover for long-duration operations to investigate volatiles—which include water, carbon dioxide, and other chemicals that boil at low temperatures—at the lunar South Pole that could be used to support a sustained human presence on the lunar surface. NASA plans to use landers procured through Commercial Lunar Payload Services to deliver the rover to the lunar surface.

Orion and SLS. Orion is the crew capsule to transport humans from the Earth to Gateway and beyond. SLS is the vehicle NASA will use to launch the Orion crew capsule and cargo beyond low-Earth orbit, including to Gateway. Figure 4 shows a notional configuration of Gateway, the first integrated Human Landing System, and the Orion crew capsule. In this configuration, the Human Landing System ascent stage, Gateway Logistics and Power and Propulsion Element (PPE), and Orion crew capsule are designed to dock with the Gateway Habitation and Logistics Outpost. The Advanced Exploration Systems organization is responsible for overseeing the Gateway and Human Landing System programs and reports to NASA's Associate Administrator for the Human Exploration and Operations Mission Directorate (HEOMD). Another organization within HEOMD—Exploration Systems Development—is responsible for the development of the Orion crew capsule. The Office of the Chief Engineer and the Office of the Chief Financial Officer are responsible for NASA policies and guidance related to the development of these systems.

NASA Adjusted Its Acquisition Plans to Support 2024 Lunar Landing

After the March 2019 announcement to accelerate the human lunar landing to 2024, NASA acknowledged that it could not complete all of its original plans under the new time frame. The original plans for a human lunar landing in 2028 included an expanded Gateway and uncrewed demonstrations of components of the Human Landing System. In response to the new direction, NASA decided to execute its lunar plans in two phases.
Phase 1 focuses on systems NASA identified to support the Artemis III mission in 2024. Phase 2 builds on Phase 1 efforts and aims to establish a long-term presence on the lunar surface through future Artemis missions; it is not currently the focus of NASA's efforts (see figure 5). NASA made several changes to its prior lunar plans to increase the speed of developing the systems needed to meet the aggressive timeline for the Artemis III mission. For example:

NASA reduced the scope of the Gateway program for Phase 1 by deferring or eliminating components and changing its configuration. NASA removed a component that an international partner had planned to contribute and deferred work on a habitation component and other potential international contributions to Phase 2. Acknowledging that some elements of Gateway had to be deferred or eliminated for the first phase is a positive step NASA has taken to try to achieve an aggressive schedule.

In some cases, NASA changed the acquisition strategy to increase the speed of development work. For example, NASA had planned to build the Habitation and Logistics Outpost in-house but, following the 2024 acceleration announcement, now plans to award a contract for its development. In addition, NASA changed its plans to acquire the Human Landing System as an integrated system instead of by stage to meet the accelerated timeline. NASA developed a broad agency announcement for the Human Landing System with the goal of awarding contracts by the end of January 2020. NASA released a draft broad agency announcement for the integrated system in July 2019, about 4 months after receiving direction to land humans on the Moon by 2024. Human Landing System program officials raised concerns about the program's ability to meet the 2024 timeline, but said they are trying to mitigate this risk by incorporating input from prior studies and feedback from industry into the program's draft broad agency announcement.
See table 1 for the status of NASA's lunar programs, including changes NASA made to prior plans and timelines to meet the 2024 lunar landing goal. NASA is still considering the extent to which competition will be part of its acquisition plans to meet the accelerated 2024 landing. Competition may be a critical tool for achieving the best possible return on investment for taxpayers and can help improve contractor performance. In addition, in 2014, we found there were competition opportunities for future SLS development work that may promote long-term affordability. We recommended that NASA assess the extent to which the agency could competitively procure development and production of future elements of the SLS to promote affordability. NASA agreed with this recommendation. However, NASA's progress in implementing it has been limited. For example, NASA awarded a sole-source contract for the upper stage engine, which further limits an opportunity for competition in the program. For Gateway Logistics Services and the Human Landing System, NASA officials stated that they are considering awarding multiple initial contracts. If NASA does award multiple contracts, officials stated they would then be able to have the contractors compete for further development of the components and possibly for specific missions. Conversely, NASA does not plan to competitively award a contract for the Gateway Habitation and Logistics Outpost, citing the aggressive Artemis III schedule as a factor in this decision.

NASA Risks Integration Challenges Because Lunar Mission Requirements Have Not Yet Been Established

NASA has identified the components of its lunar architecture—such as Gateway and lunar landers—but it has not fully defined a system architecture or established requirements for its lunar mission. A system architecture, among other things, defines the dependencies and interfaces between the components.
The NASA systems engineering handbook states that defining the system architecture early enables NASA to develop components separately from each other while ensuring that they work together effectively to achieve top-level requirements. For example, a system architecture for the Artemis III mission would describe the relationships and interfaces between Gateway and the Human Landing System, ensuring that after the two programs are completed, they will work together properly to execute the mission. Figure 6 is an illustration of how specific program and project requirements flow down from NASA's strategic goals and objectives. NASA officials told us they started by defining individual program and project requirements, and then plan to define the system architecture in an architecture definition document and the lunar system requirements in six separate HEOMD documents. These documents are in various stages of completion. HEOMD officials said they expect to finalize the overall architecture definition document at the end of 2019. They plan for this document to include a description of the integrated architecture, including the architecture's components and the high-level interfaces required for initial human lunar surface missions. In addition, HEOMD has six other documents that establish requirements for human space exploration missions, among other things. Three of these documents are currently outdated because they do not address lunar landings. HEOMD officials stated that they do not expect the documents to be updated before the end of 2019. NASA officials told us that they did not start with the higher-level architecture and requirements documents because they thought it was important to first establish requirements for individual programs, review what contractors proposed for Gateway and the Human Landing System, and incorporate industry input on what requirements are feasible.
The Human Landing System draft request for proposals contained a notional architecture that has three stages, but the agency is open to selecting contractors that do not follow this notional architecture. In our work to develop a framework for assessing and improving enterprise architecture management, we found that a mature architecture should ensure that components of the architecture align their plans with enterprise-level plans. Establishing such alignment is essential to achieving goals and supporting solutions that are appropriately integrated and compatible. NASA's approach of defining the lunar architecture and associated requirements concurrently with programs setting their own requirements presents the risk of mismatches in requirements across and within programs. Such mismatches increase the risk of technical problems, schedule delays, and cost overruns. For example, the Gateway program is tracking the potential misalignment of requirements as a risk because the PPE project finalized its requirements before the Gateway program finalized corresponding requirements at the program level. PPE officials stated they finalized their requirements first because they had started work under a prior project and, as a result, moved quickly through early development activities. Gateway program and PPE project officials said that when they compared the PPE requirements with Gateway's requirements, they found two possible gaps. For example, NASA officials explained that there is a difference between the amount of power the PPE contractor is required to deliver and Gateway's program-level power requirements. The program is working with the PPE project office and contractor to study the potential gaps and determine how to resolve them if needed. Gateway program officials said they would continue to assess gaps and risks related to requirements alignment for all projects.
HEOMD officials agreed that there is a risk of discovering integration challenges across programs. NASA officials have taken action on one strategy to minimize this risk and are considering two other potential mitigation strategies. To help ensure that the components of the lunar architecture can work together, NASA included international interoperability standards in its requests for proposals for the lunar programs. For example, there are standards for how the components will dock with each other. NASA officials said that including these standards would help mitigate integration challenges. The two other potential mitigation strategies are the following:

Establish a Lunar Exploration Control Board. NASA is in the process of establishing a board that would act as an architecture configuration management body. Configuration management is a process used to control changes to top-level requirements. In our prior work on developing and maintaining systems or networks, we found that effective configuration management is a key means of ensuring that additions, deletions, or other changes to a system do not compromise the system's ability to perform as intended. The board could serve as a body to make decisions that affect multiple lunar programs and ensure that changes to components of the lunar architecture do not affect NASA's ability to accomplish a successful lunar landing.

Hold cross-program synchronization or integration reviews. To help ensure that requirements are aligned across programs, a senior HEOMD official said NASA plans to hold cross-program synchronization or integration reviews. However, the official said NASA has not defined at what level those reviews would occur, when they would occur, or what specific contractor data would be reviewed. Ensuring the Lunar Exploration Control Board is involved in these reviews would help the board in its role as a configuration management body and inform decisions that affect multiple lunar programs.
NASA’s system engineering handbook states that activities to integrate systems throughout a system life cycle help to make sure that integrated system functions properly. These activities include conducting analysis to define and understand integration between systems. NASA is moving quickly to develop individual programs and projects that must work together as part of the broader lunar architecture. Delaying decisions about how and when NASA plans to hold synchronization or integration reviews risks discovery of changes late in the acquisition process. As stated in NASA’s system engineering guidance, the later in the development process changes occur, the more expensive they become. NASA’s Initial Decisions for Cost and Schedule Estimating Include Benefits, but Limit Some Information for Decision Makers NASA has taken positive steps to increase the visibility into the cost and schedule performance of the Gateway program’s projects, but decisions on analyses to support program-level cost and schedule are still pending. In addition, the NASA Administrator has stated that Artemis III may cost between $20 billion to $30 billion, but NASA officials stated that the agency does not plan to establish an official cost estimate. Gateway Structure Provides Increased Visibility for Project Cost and Schedule Performance, but Decisions on Program Reviews and Analysis Are Pending Gateway Costs As of October 2019, NASA was still defining its approach for developing cost and schedule estimates for all programs and projects in the lunar architecture, but we found NASA has made some decisions related to the structure of the Gateway program that will provide visibility into cost and schedule performance. 
In particular, NASA’s decision to structure the Gateway program as a tightly coupled program means that the projects that compose the Gateway—Power and Propulsion, Habitation and Logistics Outpost, and Logistics—are to develop individual project cost and schedule baselines by which performance will be measured. NASA officials stated that they expect this will provide accountability for each project to adhere to its cost and schedule baseline. This structure is a positive step for NASA to improve management of large, complex programs, and could have been beneficial to previous human spaceflight programs. For example, cost and schedule baselines for key hardware elements of the Space Launch System program—such as the core stage—might have provided earlier warning signs of development challenges affecting cost and schedule performance. NASA policy requires tightly coupled programs with a life cycle cost estimate greater than $250 million to conduct a program-level joint cost and schedule confidence level (JCL) to inform an agency baseline commitment. A JCL is a calculation that NASA uses to estimate the probability of success of a program or project meeting its cost and schedule baselines. However, NASA decided to remove the requirement for the Gateway program to establish an agency baseline commitment, and instead, require the program to document its cost and schedule estimates for phase 1 in a program commitment agreement. NASA officials explained that the agency viewed requiring the Gateway program to conduct a JCL to inform cost and schedule baselines as duplicative of analysis the projects are required to conduct to inform their project level baselines. In October 2019, Gateway program officials stated they have reconsidered this direction and now plan to conduct a program-level JCL. 
However, given that NASA officials previously determined they would not require the Gateway program to establish a baseline informed by a program-level JCL, the decision to conduct a JCL is subject to change again. NASA's commitment to the program's October 2019 decision to conduct a program-level JCL would enhance oversight and management of Gateway. NASA's cost estimating handbook states that a JCL can serve as a valuable management tool that helps enforce best practices of program planning and control and can enhance vital communication with various stakeholders. Having a program-level JCL could help the program identify additional cost and schedule risks associated with integration of, or dependencies across, Gateway components that individual projects may not identify. As a tightly coupled program, Gateway has project schedules that are dependent on one another. For example, PPE provides power to subsequent Gateway components, such as the Habitation and Logistics Outpost, and must be launched and in lunar orbit for the outpost to dock with it. A program-level JCL would be able to quantify the risk of delay across all dependent activities, regardless of which individual project experiences the delay. It would also provide NASA decision-makers and external stakeholders, such as Congress, with the probability of the program meeting both its cost and schedule commitments to support the Artemis III mission.

Gateway Schedule

The Gateway program is also the program in the lunar architecture that is the furthest along in developing a schedule, aside from the SLS, Orion, and Exploration Ground Systems programs. The program expects to have an integrated master schedule in late 2019, but in the meantime has developed a high-level notional schedule. We identified two challenges with the Gateway program's schedule that stem from decisions to meet the program's rapid pace of development.

Program and project technical reviews do not align.
The NASA program management handbook states that lower-level technical reviews, such as project preliminary design reviews, are typically conducted prior to the program-level reviews. In addition, GAO's Schedule Assessment Guide states that lower-level project schedules should be consistent with upper-level program review milestones. This creates consistency between program and project schedules, which enables different teams to work to the same schedule expectations and ensures the proper sequencing of activities. The Gateway program obtained approval from the NASA Associate Administrator to tailor its review schedule. This includes the replacement of traditional reviews with program sync reviews informed by project-level technical reviews. However, some of the project-level technical reviews for the program's projects—PPE, Habitation and Logistics Outpost, and Logistics—occur after the equivalent Gateway program-level reviews. The Gateway program-level reviews are referred to as sync reviews, during which information is assessed across all projects. For example, the Logistics project plans to hold its preliminary design review after the Gateway program's preliminary design-informed sync review. Figure 7 shows the preliminary Gateway program schedule and identifies reviews that differ from the notional tightly coupled program schedule found in NASA guidance. Without the results of project-level reviews, program officials may have limited information to assess progress at program-level reviews. This opens up the possibility of costly redesigns at later stages of the program life cycle. Gateway program officials said that, as the program progresses, they plan to weigh the risk of holding a project-level review after a program-level review against the risk of delaying a program-level review to hold all the project-level reviews first. Officials added that they are still reviewing their approach for the timing of the reviews.
We will continue to follow up through future work on the Gateway program's risk assessments related to the timing of the technical reviews.

Scheduling of key program milestone reviews after 2021 deferred. The Gateway program does not yet have key milestone reviews—known as key decision points (KDP)—scheduled after 2021 (see figure 7). Currently, the final key decision point scheduled for the program is KDP I in 2021, which evaluates the completeness of the preliminary design, including for projects, and determines the program's readiness to begin the detailed design phase. However, NASA policy requires the program to conduct two other key decision points that the Gateway program has not yet scheduled. Program officials told us that they want to determine the need for subsequent decision points after the systems have matured further in their development. During the period between 2021 and 2024, the Gateway program plans to launch and assemble its three components—PPE, Habitation and Logistics Outpost, and the first logistics vehicle—and integrate with the Human Landing System and Orion. It may be appropriate not to schedule a KDP III—a decision point that evaluates the readiness of the program, including its projects, for launch and early operations—for the Gateway program, since the projects will launch separately and conduct operations on different timelines. However, not having a KDP II—a decision point that evaluates the program's readiness for assembly, integration, and testing, prior to a system integration review—will limit the information available to senior leaders for decision-making. Without scheduling a KDP II, NASA risks not having a formal mechanism to ensure that it has identified and sufficiently addressed any integration issues across the three projects.
NASA Does Not Plan to Develop a Lunar Mission Cost Estimate

The NASA Administrator made a public statement that the Artemis III mission may cost between $20 billion and $30 billion, but NASA officials told us they do not plan to develop an official cost estimate for the Artemis III mission. A senior HEOMD official said that the agency developed a cost estimate that included costs for the lunar mission to 2028 to support budget submissions. However, the official said this life-cycle cost estimate included costs outside of the Artemis III mission, such as costs for missions later than Artemis III, and may not include integration and overall management costs. NASA officials told us that it is complicated to separate out costs for each mission and that, as a result, NASA does not plan to develop an Artemis III cost estimate. In addition, senior NASA officials stated that many of the programs needed to execute the mission are currently in the early stages of acquisition, and therefore NASA has limited cost information. Meanwhile, NASA requested an additional $1.6 billion in fiscal year 2020 above its initial budget request to support the Artemis III mission. Cost estimates provide management with critical cost-risk information to improve the control of resources now and in the future. GAO's Cost Estimating and Assessment Guide states that a life-cycle cost estimate enhances decision-making, especially in the early planning of an acquisition. Individual program cost estimates would not capture the integration costs across programs. Without an Artemis III cost estimate, NASA will not be able to effectively monitor total mission costs, and Congress will have limited insight into mission or program affordability when making decisions about each year's budget request.
NASA Conducted Studies to Inform Lunar Plans, but Did Not Fully Assess Alternatives Given the breadth of activity and funding required for NASA to achieve a human lunar landing, a number of stakeholders have advocated for NASA to carry out this mission in a different way than NASA is pursuing. For example, one advocate proposed alternative lunar architectures that do not include the use of Orion, SLS, or Gateway, and instead rely on the use of commercial vehicles, and a former NASA associate administrator has promoted increased use of NASA’s current programs, including SLS. Agencies can use the process of assessing alternatives to justify their decisions and demonstrate careful planning. While NASA policy does not require programs to analyze alternatives before starting work, GAO best practices state that analyzing alternatives provides a framework to help ensure that entities consistently and reliably select the alternative that best meets the mission need based on selection criteria, such as safety, cost, or schedule. Similarly, the Department of Defense, an agency that also invests billions of dollars in acquisitions, considers an analysis of alternatives a key input to defining a system’s capabilities and assessing affordability. We previously found that analyzing alternatives is a key element in establishing a sound business case for a new architecture or program. Having a strong business case, including a formal assessment of alternatives, would help NASA effectively communicate its decisions to various stakeholders and facilitate a better understanding of its current lunar plans. NASA officials told us that they arrived at the current architecture and the designs of its lunar programs by conducting numerous studies and analyses over multiple decades. These studies looked at aspects of the various lunar missions NASA has planned over time, including the prior Constellation program and Journey to Mars effort. 
A HEOMD official responsible for mission directorate analyses said that the studies ranged from quick turn-around analyses to long-term, thorough studies. NASA officials identified 12 studies completed since the conclusion of the Constellation program in 2010 that informed their decision to build Gateway and other aspects of the lunar architecture. The studies varied in focus, ranging from a study on the overall framework for a mission to Mars to a study exclusively on the human lunar landers. We reviewed these 12 studies to determine the extent to which NASA analyzed alternatives to inform its current lunar architecture. We found that some of the studies contained detailed analyses, but had a narrow scope. For example: NASA conducted a study on the design of its human lunar landers that identified several alternative designs for the lander configuration, including two- and three-stage landers. The study analyzed each alternative to enable comparison, given the physical constraints of SLS and commercial launch vehicles. HEOMD reviewed prior studies on a cislunar habitation facility conducted by internal and external partners that informed an Assessment of Alternatives for the Gateway program. At the time the mission directorate conducted this assessment, the concept was focused on the Journey to Mars effort, and mentioned lunar landers only as a potential secondary mission. The assessment analyzed various alternative configurations that Gateway might use and selected one of them based on feasibility and schedule. NASA conducted studies on the best orbit in which to place Gateway. While these studies were robust, they did not more broadly analyze whether Gateway was the best solution to meet the mission need based on selection criteria. 
The following are examples of topics that NASA could have addressed if they had analyzed alternatives with a broader scope: Assessing commercial alternatives to SLS and Orion for a human landing on the Moon. Each of the studies assumes the use of SLS and the Orion capsule in order to conduct the required mission. A HEOMD official told us that they did not assess commercial alternatives to SLS and Orion because commercial alternatives are not available. If commercial technology to replace SLS and Orion becomes available, the official said NASA can on-ramp those options if SLS and Orion are not delivered on time. Assessing how a more capable SLS could have affected the lunar architecture. NASA did not assess whether refocusing investment on more capable versions of its current programs, including SLS, might affect risk, cost, and schedule for a lunar landing mission. For example, developing a more capable SLS earlier may have enabled NASA to propose a different lunar lander design or to launch components of the architecture in fewer launches. In the study on the design of its human lunar landers, NASA assumed that a more capable version of SLS would not be available until at least 2028, and therefore did not assess using it as a part of its architecture. Further, at the time of the study in 2018, NASA was unsure it would have enough SLS core stages available to utilize them for any components of the architecture other than to transport crew. Identifying alternatives to a lunar landing without using Gateway. All of the studies assumed the use of Gateway or similar capability as opposed to a capability that would take astronauts directly to the lunar surface. A HEOMD official told us that NASA did not assess architectures without Gateway because they planned to utilize SLS and Orion, and NASA did not design the Orion capsule for a direct-to-moon landing. 
However, a HEOMD official provided us with a quick turn-around analysis that NASA conducted in 2019, after NASA initiated the Gateway program, in response to questions about alternative lunar architectures. This analysis compared a lunar landing from Gateway to a landing without Gateway and found that NASA would have to upgrade the Orion capsule to enable a direct-to-moon landing, which would increase the cost and development time of the program. As a result, the analysis concluded that a lunar landing using Gateway was the superior option. Additionally, officials said Gateway helped develop an architecture that was sustainable and could contribute to a mission to Mars. In addition, only one of the studies focused on a lunar landing mission because NASA completed most of the studies prior to the December 2017 Space Policy Directive-1. NASA officials stated that this is because they were told not to analyze a lunar landing during the previous administration. As a result, none of these studies represents a comprehensive assessment of NASA’s current plans to return to the Moon, and collectively they are missing information on potential alternatives. While conducting a formal analysis of alternatives for the lunar architecture is no longer viable given NASA’s schedule, by not having such an analysis NASA is ill-equipped to consider other alternatives as off-ramps if the current lunar architecture plans run into delays. Further, none of the studies contained a life-cycle cost estimate; without one, NASA does not know the costs of its architecture or of potential alternatives. In October 2019, NASA officials stated they had begun to develop an Architecture Campaign Document, which would provide a summary of the studies and analyses that have informed NASA’s lunar architecture. However, this document was still in draft form at the time of our review and officials did not commit to a completion date. 
Until NASA completes this summary, it will not have a cohesive document outlining the rationale for how it selected its current lunar architecture and lunar programs. Lastly, formally assessing alternatives is a beneficial practice for future architectures and programs. However, NASA policy and guidance describe an analysis of alternatives as a tool but do not require officials to analyze alternatives prior to starting work to develop a system architecture or initiating directed missions. NASA may analyze alternatives for an architecture, program, project, or specific design or capability, but conducting a formal analysis of alternatives is optional. Without a requirement to conduct an analysis of alternatives prior to NASA authorizing the initial planning of a program, NASA could miss opportunities to move forward with a more viable architecture or program to meet mission needs in the future. For a new architecture or large programs that require substantial investment, such as future exploration efforts including Mars, conducting an analysis of alternatives would better position NASA to build a sound business case, justify and document its decisions, and advocate for its plans. Conclusions Effectively executing the Artemis III mission will require extensive coordination within NASA and with its commercial partners, and for each individual program to meet aggressive development time frames. As NASA continues to develop its architecture and program schedules, it will be important that the agency use program management tools and practices to set these new programs up for success. Ensuring that NASA identifies points in time to conduct synchronization reviews, that the role of the proposed Lunar Exploration Control Board in these reviews is understood, and that programs are prepared with the necessary information to make the reviews successful will help NASA mitigate the risk of discovering integration challenges across the lunar programs. 
The reviews could be a helpful checkpoint on the agency’s progress towards meeting the aggressive timeline of the Artemis III mission. Further, ensuring that the Gateway program has an integrated schedule early on will help the program plan work to meet critical deadlines and avoid unnecessary rework due to the misalignment of requirements or design changes. To date, NASA has provided decision makers with limited cost information to inform decisions on the overall lunar investment. Without an overall cost estimate for the Artemis III mission, NASA is asking Congress to appropriate additional funding to meet a 2024 lunar deadline without having information available on how much it will cost in total to support such plans. Further, NASA senior leadership made a decision that resulted in limiting information regarding the probability of the Gateway program meeting cost and schedule estimates to support the 2024 lunar landing. Requiring the program to conduct a joint cost and schedule confidence level analysis would help to determine whether NASA can meet its lunar goal and whether it has the resources to do so. NASA will continue to have many stakeholders interested in its human space exploration plans, which requires NASA to establish a lunar architecture and programs that the agency can defend over time and to demonstrate that it has a solid business case. However, NASA is ill-positioned to explain how it arrived at its current lunar architecture without a comprehensive assessment that documents how NASA decided that its current plans are the best way to meet the agency’s long-term lunar exploration goals. NASA has taken a positive step by planning to create a summary of the studies and analyses that informed its lunar architecture, but has not committed to a date to finalize it. 
Finally, ensuring that NASA conducts a formal analysis of alternatives for future strategic missions and architectures, including as it further develops its plans for a human mission to Mars, will better position the agency to consistently and reliably select alternatives that best meet the mission need. Recommendations for Executive Action We are making the following six recommendations to NASA. The NASA Administrator should ensure that the NASA Associate Administrator for Human Exploration and Operations directs the Advanced Exploration Systems division to define and determine a schedule for synchronization reviews, including the role of the proposed Lunar Exploration Control Board, to help ensure that requirements between mission and program levels are reconciled. (Recommendation 1) The NASA Administrator should ensure that the NASA Associate Administrator for Human Exploration and Operations directs the Gateway program to conduct a joint cost and schedule confidence level analysis at the program level for the Artemis III mission. (Recommendation 2) The NASA Administrator should ensure that the NASA Associate Administrator for Human Exploration and Operations directs the Gateway program to update its overall schedule for 2024 to add a KDP II to occur before system integration. (Recommendation 3) The NASA Administrator should ensure that the NASA Associate Administrator for Human Exploration and Operations creates a life-cycle cost estimate for the Artemis III mission. (Recommendation 4) The NASA Administrator should ensure that the NASA Associate Administrator for Human Exploration and Operations directs the Advanced Exploration Systems division to commit to a completion date and finalize a cohesive document outlining the rationale for selecting its current lunar architecture and lunar programs. 
(Recommendation 5) The NASA Administrator should ensure that the Office of the Chief Engineer determines under what conditions it is appropriate to complete an analysis of alternatives, particularly when there are multiple pathways—including architectures or programs—that NASA could pursue in the future, and document the justification for not completing an analysis. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of this report to NASA for comment. In written comments, NASA agreed with our six recommendations. NASA provided estimated dates of completion for all of the recommendations ranging from April 2020 to September 2021. The comments are reprinted in appendix I. NASA also provided technical comments, which have been addressed in the report, as appropriate. We are sending copies of this report to the NASA Administrator and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the National Aeronautics and Space Administration Appendix II: GAO Contact and Staff Acknowledgments Cristina T. Chaplain, (202) 512-4841 or chaplainc@gao.gov. Staff Acknowledgments In addition to the contact named above, Molly Traci, Assistant Director; Katie Bassion; Lorraine Ettaro; Laura Greifner; Anna Irvine; Erin Kennedy; Jason Lee, Assistant Director; Jennifer Leotta, Assistant Director; Ryan Lester; Dennis Mayo; Sylvia Schatz; Roxanna Sun; Jay Tallon, Assistant Director; Alyssa Weir; and Tonya Woodbury made significant contributions to this report.
Why GAO Did This Study In March 2019, the White House directed NASA to accelerate its plans to return humans to the moon by 4 years, to 2024. To accomplish a lunar landing, NASA is developing programs including a small platform in lunar orbit, known as Gateway, and a lunar lander. NASA plans to use the Space Launch System and Orion crew capsule—two programs with a history of cost growth and schedule delays—to launch and transport crew to Gateway. The House Committee on Appropriations included a provision in its 2018 report for GAO to review NASA's proposed lunar-focused programs, including the Gateway program. GAO's report assesses (1) how NASA updated its lunar plans to support the accelerated 2024 landing timeline; (2) the extent to which NASA has made initial decisions about requirements, cost, and schedule for its lunar mission and programs; and (3) the extent to which NASA analyzed alternatives for its lunar plans, including the Gateway program. GAO analyzed NASA lunar mission and program documents, assessed NASA studies that informed NASA's lunar plans, and interviewed NASA officials. What GAO Found To support accelerated plans to land astronauts on the moon by 2024—four years earlier than planned—the National Aeronautics and Space Administration (NASA) quickly refocused its acquisition plans. In particular, NASA separated its lunar plans into two phases, with the first phase focused on the systems NASA identified to support the new timeline (see figure). One system, Gateway, includes three components—power and propulsion, habitation, and logistics—to form a small platform in lunar orbit. 
NASA has begun making decisions related to requirements, cost, and schedule for programs, but is behind in taking these steps for the whole lunar mission: NASA risks the discovery of integration challenges and needed changes late in the development process because it established some requirements for individual lunar programs before finalizing requirements for the overall lunar mission. NASA plans to take steps to mitigate this risk, such as by holding reviews to ensure that requirements align across programs, but has not yet defined these reviews or determined when they would occur. NASA has made some decisions that will increase visibility into the costs and schedules for individual lunar programs, but does not plan to develop a cost estimate for the first mission. Cost estimates provide management with critical cost-risk information to improve control of resources. Without a cost estimate for this mission, Congress will not have insight into affordability and NASA will not have insight into monitoring total mission costs. NASA conducted studies to inform its lunar plans, but did not fully assess a range of alternatives to these plans. GAO best practices state that analyzing alternatives provides a framework to help ensure that entities consistently and reliably select the alternative that best meets the mission need and justify agency decisions. Given NASA's schedule, conducting this analysis is no longer viable. Instead, NASA intends to create a summary of the studies that informed its lunar plans. However, it has not committed to a completion date. Without a documented rationale, NASA is ill-positioned to effectively communicate its decisions to stakeholders and facilitate a better understanding of its plans. 
What GAO Recommends GAO is making a total of 6 recommendations to NASA, including to define and schedule reviews that align requirements across lunar programs; create a cost estimate for the first lunar mission; and commit to a completion date and finalize a cohesive document outlining the rationale for selecting its current lunar plans. NASA concurred with the recommendations made in this report.
Background This section describes tribal energy resources, EPACT05, and federal agencies’ authority and processes to purchase energy. Tribal Energy Resources Tribal lands have untapped energy resources that, if developed, could help to alleviate economic hardships among tribal populations. According to DOE, while tribal lands account for only 2 percent of all U.S. land, tribal lands contain an estimated 50 percent of potential uranium reserves, 30 percent of coal reserves west of the Mississippi, and 20 percent of known oil and gas reserves. Furthermore, DOE’s National Renewable Energy Laboratory reports that these tribal lands contain about 6.5 percent of all utility-scale potential U.S. renewable energy resources. Ninety percent of this potential renewable energy capacity is for solar energy. According to the laboratory, tribal lands have the potential for over 6,000 gigawatts of utility-scale solar photovoltaic capacity. To put this in perspective, a single gigawatt of capacity running continuously generates enough electricity to power over 800,000 homes. However, 86 percent of tribal lands with energy potential are undeveloped, according to DOE. In July 2018, DOE announced the release of its Tribal Energy Atlas and an accompanying report on the renewable energy potential on tribal lands. The atlas, developed by the National Renewable Energy Laboratory, is an interactive, web-based geospatial application that provides information about energy resource potential on tribal lands (see fig. 1). According to DOE, the atlas is the first of its kind; further, it is available to tribal energy project planners, technicians, and investors to assist with analyzing energy options on tribal lands. Previously, DOE had identified multiple tribal lands with undeveloped energy resources that could potentially meet DOD energy needs. 
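The homes-per-gigawatt comparison above can be sanity-checked with rough arithmetic. The average U.S. household consumption figure used below is an assumed value for illustration, not a number from the report.

```python
# Rough check of the "1 gigawatt powers over 800,000 homes" comparison.
# The average-household consumption figure is an assumption, not from the report.

capacity_gw = 1.0
hours_per_year = 8760

# Convert 1 GW to kW (1e6 kW), then multiply by hours to get kWh per year.
annual_generation_kwh = capacity_gw * 1e6 * hours_per_year  # 8.76 billion kWh

avg_home_kwh_per_year = 10_700  # assumed average U.S. household usage
homes_powered = annual_generation_kwh / avg_home_kwh_per_year

print(f"{homes_powered:,.0f} homes")  # roughly 819,000, consistent with over 800,000
```

Under this assumption, one gigawatt running at full capacity all year works out to roughly 819,000 homes, consistent with the report’s figure of over 800,000.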
Specifically, in a May 2013 report, DOE identified 15 reservations that had, among other things, the potential to meet DOD energy needs and were near existing transmission lines that could be used to transport the energy from the reservation to the installation. The report was based on a DOE survey of DOD installations that could have an interest in purchasing energy from tribal sources based on the tribes’ proximity to the installations and the tribal energy sources’ potential to meet installation energy needs, among other factors. Energy Policy Act of 2005 EPACT05 has several provisions related to tribal energy resource development. As previously noted, one of these provisions authorizes federal agencies to give preference to majority tribally owned energy suppliers over other potential energy suppliers when purchasing energy. More specifically, EPACT05 specifies that federal agencies “may” give such a preference as long as the agencies do not pay more than prevailing market prices or obtain less-than-prevailing-market terms and conditions. In addition, EPACT05 doubles the credit that agencies receive toward their mandated renewable energy goals if the renewable energy that agencies contract for is produced on tribal lands. GSA, DOD, and DOE Authorities and Processes for Entering into Federal Energy Contracts As noted earlier, GSA has primary authority to enter into energy contracts for federal agencies, and it has delegated this authority to DOD and DOE as well, by regulation. In addition to these statutory and regulatory authorities, the acquisition and supply of energy for federal agencies is governed by the Federal Acquisition Regulation (FAR), which is issued and maintained by the Federal Acquisition Regulatory Council (FAR Council). The process GSA has prescribed for entering into federal energy contracts varies by location, depending on market conditions and state law. 
In traditional energy markets, retail customers such as GSA, DOD, and DOE are typically required to contract with the local utility operating in the area for energy. In deregulated markets, these agencies publicly issue requests for proposals for energy, and energy providers engage in a competitive bidding process for federal energy contracts. Federal officials seeking to enter into energy contracts may specify energy of certain types (for example, renewable sources may be given priority) in the requests for proposals, and the energy contracts are typically awarded to the best-value provider who meets the requirements of the request for proposal. Federal Officials, Tribal Representatives, and Stakeholders We Interviewed Identified Potential Limiting Factors and Suggestions for Federal Energy Purchases from Tribes Federal officials, tribal representatives, and stakeholders we interviewed identified a number of factors that have the potential to limit federal government energy purchases from tribal sources, and they offered suggestions to address some of these factors. The factors, which sometimes overlapped, included requirements to purchase from monopoly utilities, difficulty entering the market at the prevailing rate, access to transmission infrastructure, access to capital, and technical capacity. Requirements to purchase from monopoly utilities. In traditional regulated energy markets, retail customers, including federal agencies, generally can only purchase energy from the local monopoly utility in that region. According to officials from GSA, DOD, and DOE, this requirement prevents agencies from purchasing from tribes. A representative we interviewed from one tribal energy corporation concurred with the agency officials’ assessment. 
That tribal energy corporation currently sells energy in wholesale markets and is interested in selling energy to federal agencies, but it has not succeeded in doing so because agencies typically make purchases as retail, not wholesale, customers, according to the tribal representative. Nonetheless, according to the tribal representative, retail customers, including federal agencies, may have the option to purchase electricity as wholesale customers in traditional markets if they are large enough, which would allow them to purchase from sources such as the tribal corporation. GSA officials told us that purchasing energy as a wholesale customer may not be in the best interest of the federal government, given the associated technical requirements, including connecting to the grid in ways GSA is not currently equipped for, and regulatory risk, such as managing power in a way not required of retail customers. In particular, GSA officials expressed concern about the regulatory requirements associated with reselling any potential excess energy that may come with a wholesale purchase. Additionally, DOE officials said there might be cost considerations related to achieving and maintaining status as a wholesale customer, as well as risks in giving up retail customer status, including the loss of the utility’s obligation to service the agency’s facilities because it is no longer a retail customer. In addition, according to DOE officials, switching from retail to wholesale purchasing has historically presented significant litigation risk, such as the utility challenging the legal and technical basis for the government’s change from retail to wholesale customer. Difficulty entering the market at the prevailing rate. 
According to GSA procurement guidance, the contracting process for public utility services should obtain the best-value product for the government. GSA officials said this typically means awarding the contract to the lowest-cost provider that also meets technical requirements, potentially limiting federal agency opportunities to purchase energy from tribes. In particular, tribes may find it difficult to enter the energy market at competitive rates, according to four federal officials and one stakeholder. For example, two DOE officials provided examples of DOE receiving bids from tribes for federal energy contracts but stated that both bids were unsuitable because their price was higher than the market rate. One DOE official said that tribes developing renewable energy projects would have to compete with lower-cost natural gas and hydroelectric energy, which could prevent tribes from meeting the prevailing market rate. To help foster the success of such tribal projects, one DOE official and one stakeholder suggested allowing federal agencies to purchase energy from tribes at rates that exceed the prevailing market rates. However, some tribes have successfully entered the energy market and have sold energy at competitive rates. For example, representatives we interviewed from one tribe and a renewable energy development corporation owned by several tribes said they anticipate that their current or future projects will allow them to sell energy at competitive rates, and at least two tribes have entered into contracts with the federal government. Moreover, since the beginning of 2017, DOE has seen an increased interest from tribes in renewable energy projects because the price of renewable energy has become more competitive with other, lower-cost forms of energy, according to DOE officials. 
As tribes develop more renewable energy projects, there may be additional opportunities for federal agencies to purchase from tribes, which will also help these agencies meet federal renewable energy goals. Access to transmission infrastructure. Lack of access to energy transmission infrastructure may prevent tribes from transmitting their energy off tribal lands, according to 10 federal officials, tribal representatives, and stakeholders whom we interviewed. One DOE official said the biggest challenge in contracting for energy with tribes can be getting a physical connection to transmit power between the tribal energy providers and a federal building. Federal officials from GSA and DOE noted that there are few federal buildings close to tribal lands, making transmission from those lands to federal buildings more complex and expensive. A 2013 Edison Electric Institute report said that the cost of new construction of overhead transmission lines can range from $174,000 to $11 million per mile. DOE’s Tribal Energy Atlas may assist tribes in overcoming this factor because it provides information on existing infrastructure, including transmission lines, giving tribes access to data they need to make informed decisions about their energy development options. Further, tribes are not limited to developing energy projects on their own lands, which can eliminate issues with proximity to federal purchasers. For example, one tribe near San Diego partnered with a private developer to build a wind farm in Illinois to sell energy to GSA. The purchase was the largest wind energy purchase from a single source in federal contracting history, according to GSA officials. Access to capital. Tribal energy development may be hindered because of difficulty obtaining access to capital, potentially limiting federal energy purchases from tribes. 
For example, one industry official we interviewed who worked with a tribe in the process of developing a wind farm on its reservation said the tribe does not have the necessary capital to connect to the local transmission infrastructure. As a result, it cannot provide power beyond the reservation. Likewise, nine federal officials and stakeholders whom we interviewed said securing financing for energy development could be difficult for some tribes. To overcome this potential limitation, one group of tribes combined their resources and formed a multi-tribal power authority, which allowed them to raise the necessary capital to take on a larger-scale project while maintaining tribal ownership and creating jobs in the tribal communities. The multi-tribal authority plans to develop one of the largest wind farms in the country and sell the energy at a competitive price, according to representatives from a renewable energy development corporation owned by several tribes. Another option for addressing this limitation is for tribes to lease their land to private developers who operate and maintain energy projects; this allows tribes to benefit from their energy resources without having to raise development capital while still securing additional benefits for the tribal community. For example, one tribal representative and one stakeholder mentioned that training and educational programs for tribal members could be part of these agreements between private developers and tribes. Technical capacity. Some tribes may not have the technical capacity to develop their energy resources, which can also limit federal energy purchases from tribal sources, according to tribal representatives and stakeholders we interviewed. For example, one tribal representative and four stakeholders we interviewed said that some tribes lack experience with energy development, which potentially limits their ability to take on large-scale projects that could meet federal energy needs. 
Two stakeholders noted the importance of tribes having access to professionals with experience in running energy development projects to help overcome this potential limitation. Federal agencies offer programs that could assist tribes with building technical capacity. For example, the Department of the Interior provides technical and financial assistance to tribes for the exploration, development, and management of tribal energy resources. In addition, DOE offers grants and education through webinars, forums, and workshops. For example, DOE in August 2018 selected 15 tribal projects to receive funding for developing their energy resources to reduce or stabilize energy costs, as well as to increase energy security and resilience on tribal lands. DOE has also provided technical assistance, technology and market analysis, and capacity building for tribes, as well as webinars on utility-scale energy development, fundamentals of energy markets for tribes, and effective tribal project partnerships. However, DOE’s efforts have focused primarily on reducing tribal energy costs and assisting tribes in developing energy for use on reservations, rather than on selling energy to outside sources, according to DOE officials. GSA, DOD, and DOE Have Not Used the Tribal Energy Preference Since Its Establishment, and EPACT05 Is Not Specific about Its Use Since the establishment of the tribal energy preference, GSA, DOD, and DOE have not entered into an energy contract with a tribe using the preference. The preference and other tribal energy resource development provisions added in the tribal energy section of EPACT05 provide federal agencies with mechanisms to support tribal energy development and use. 
As noted previously, the section provided for grants to assist tribes in developing their energy resources, authorization for federal agencies to give preference to tribal energy sources when contracting for energy, and double credit towards mandated renewable energy goals when federal agencies contract for energy produced on tribal lands. GSA, DOD, and DOE officials we interviewed identified five instances in the past when a tribe bid on a federal energy contract, and agencies did not use the tribal energy preference in any of these instances. Two of the instances led to contracts with GSA: in one, officials said that the tribe submitted the best bid, and in the other, GSA used the small business preference authority instead, as discussed further below. The other three instances were bids to DOD and DOE; these instances did not lead to contracts because either the cost was too high or the proposal was unsolicited and not needed by the agency, according to agency officials. Officials from GSA and DOD noted that EPACT05 makes use of the preference discretionary because it says that federal agencies “may give preference” to a majority tribally owned energy source. Officials from DOD said they cannot authorize agency officials to use the preference without a policy or a FAR provision requiring its use. DOD officials said they follow the FAR and related guidance when implementing policy and guidance for the agency, but the FAR has no provisions specifically addressing the preference. Similarly, GSA officials told us they would be hesitant to use the preference because they believe it limits competition solely to tribal sources, which may not be in the best interest of the federal government. 
GSA officials attempted to use the preference to limit an energy contract solicitation solely to tribal sources in 2014, according to a stakeholder that worked on the project, but the GSA Administrator expressed concern about limiting competition in that manner. The stakeholder noted that GSA instead decided to open the solicitation to small businesses, and the tribe ultimately won the contract through the small business preference authority. When we reported on implementation of the tribal energy preference in November 2016, we found that federal agencies had not used the preference because of uncertainty about how to do so and lack of guidance. Because GSA has primary energy purchasing authority for the federal government, we recommended that GSA develop implementing guidance to clarify how contracting officials across the federal government should use the preference. GSA partially agreed with the recommendation, stating that guidance would be beneficial, but GSA officials stated that government-wide rulemaking from the FAR Council, of which GSA is a member, is necessary to clarify how agencies should use the preference. Subsequently, GSA officials told us that in April 2017, GSA presented the FAR Council with a business case that included an analysis of the problem we identified. After reviewing the business case, the FAR Council determined the preference has limited application government-wide. GSA officials told us that the FAR Council declined to pursue regulatory changes to the FAR because, according to the council, the preference only impacts agencies responsible for entering into federal energy contracts, mainly GSA, DOD, and DOE. Further, the FAR Council recommended that GSA consider nonregulatory paths, in keeping with Executive Order 13771, which aims to reduce costs associated with regulatory compliance. 
In response to the FAR Council’s recommendation, GSA added the preference language from EPACT05 to the form it uses to delegate purchasing authority to other federal agencies that may seek this authority in the future. As we reported in November 2016, DOE in 2013 issued agency-specific guidance on use of the preference, such as for limiting competition to qualified majority tribally owned suppliers for the purchase of renewable energy and energy by-products. DOE distributed the tribal energy preference guidance through a February 2013 acquisition letter. However, in our interviews with officials responsible for purchasing energy in nine DOE offices, we found that officials in five of these nine offices were unaware of the DOE guidance or unaware of the preference. DOE headquarters officials stated that the agency did not take further action to communicate the guidance or ensure relevant officials were aware of it after its initial distribution. Under federal standards for internal control, management should internally communicate the necessary quality information to achieve the entity’s objectives. According to DOE documentation, the objectives of the guidance are to promote tribal renewable energy development, reaffirm the federal government’s trust responsibility to tribes, and reinforce key national policy objectives such as the acquisition and use of clean energy products. DOE officials agreed that officials responsible for purchasing energy should be aware of the agency’s guidance and the preference. For example, officials from one DOE office stated that its contracting officials are aware of the preference because it has included the preference language in its requests for proposals. By taking steps to communicate the guidance to all DOE officials responsible for purchasing energy, DOE will be better positioned to ensure that these officials are aware of the preference, which may increase its use. 
In addition, officials from GSA, DOD, and DOE who are responsible for purchasing energy told us they are still uncertain about how they would use the preference. For example, officials from GSA stated that they would use the preference as a tiebreaker at a minimum, but they also noted that ties are unlikely and they had not seen any ties in bids to provide energy in the last 4 years. They also noted that they would rely on GSA’s legal and acquisition policy offices for any instruction on using the preference. Similarly, DOD officials responsible for energy purchases stated they would have to consult with DOD’s acquisition policy office, which stated that DOD does not have guidance and could not authorize use of the preference without a requirement in the FAR, as discussed previously. Likewise, DOE officials were unclear about how they would use the preference, stating that they would use the preference by awarding the energy contract to a tribe if the tribe had the lowest bid. However, the agency would not need to use the preference in such situations because the agency generally awards contracts to the lowest bidder, according to DOE officials. According to officials from GSA and DOD, other statutes that authorize agencies to apply preferences for acquisition of goods and services from specific sources include more specific requirements in their statutory language, making the requirements easier to apply. For example, GSA officials explained that the specific requirements and measurable goals set under the Small Business Act, as amended, increase contracts awarded to small businesses. In contrast, EPACT05’s tribal energy provision does not contain analogous specific requirements for how agencies should use the preference. 
DOD officials stated that the agency would potentially pursue using the tribal energy preference if EPACT05 required a certain amount of energy contracts to go to tribes, similar to the Small Business Act’s requirements for small businesses. Conclusions Energy resources on tribal lands present an opportunity for individual tribes that pursue development of these resources to improve their socioeconomic status by generating income, jobs, and associated economic development. The federal government, as a significant energy consumer, is in a position to support energy development on tribal lands. Through EPACT05’s tribal energy resource development provisions, including the tribal energy preference, Congress has provided federal agencies with mechanisms for such support. GSA and DOE have taken steps intended to promote use of the preference—GSA by adding the preference language when delegating energy contracting authority in the future, and DOE by issuing guidance. However, no federal agency has used the preference since its establishment in 2005, in part because EPACT05 does not require its use or include goals specifying how agencies should use it. Further, officials we interviewed at GSA, DOD, and DOE told us they were uncertain about how to use the preference. Specific incentives or requirements for the use of the tribal energy preference could help create additional opportunities for federal energy purchases from tribes as they develop more renewable energy projects. DOE’s issuance of agency-specific guidance for implementing the preference is an important positive step. However, some DOE officials responsible for purchasing energy were unaware of the DOE guidance. By taking steps to communicate the guidance to all DOE officials responsible for purchasing energy, DOE will be better positioned to ensure that these officials are aware of the preference, which may increase its use. 
Matter for Congressional Consideration To the extent that Congress wants to further encourage federal agencies to use tribal energy sources, it should consider amending the Energy Policy Act of 2005 to provide more specific direction to federal agencies for implementing the tribal energy preference, to include consideration of additional incentives or requirements to use these energy sources. (Matter for Consideration 1) Recommendation for Executive Action The Secretary of Energy should communicate DOE’s tribal energy preference guidance to all DOE officials responsible for purchasing energy. (Recommendation 1) Agency Comments We provided a draft of this report for review and comment to DOE, DOD, and GSA. In its written comments, reproduced in appendix I, DOE concurred with our recommendation and outlined planned action to implement it. Specifically, DOE plans to issue and disseminate a new policy flash to its acquisition personnel to draw renewed attention to its tribal energy preference guidance. DOD and GSA stated that they did not have any comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of General Services, the Secretary of Defense, the Secretary of Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Appendix I: Comments from the Department of Energy Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Karla Springer (Assistant Director), Andrew Moore (Analyst in Charge), Justin Bolivar, William Gerard, Cindy Gilbert, Cynthia Norris, and Caroline Prado made key contributions to this report.
Why GAO Did This Study Tribal lands hold considerable energy resources—oil, gas, coal, wind, solar, geothermal, and biomass. Tribal energy projects can help tribes fund programs and services that improve tribal members' quality of life. Federal agencies are large consumers of energy in the United States, spending about $6 billion in 2017 on energy for their facilities. Congress has provided a mechanism for agencies to support development and use of tribal energy by authorizing agencies to give preference to majority tribally owned suppliers when purchasing energy. GAO was asked to review federal efforts to use the preference. This report examines, among other objectives, the extent to which GSA, DOD, and DOE have used the tribal energy preference. GAO reviewed available agency information on use of the preference and interviewed federal agency officials to understand how agencies would use the preference when entering into contracts with tribal suppliers. What GAO Found None of the three primary federal agencies with authority to enter into energy contracts—the General Services Administration (GSA) and the Departments of Defense (DOD) and Energy (DOE)—have used the tribal energy preference since it was established in the Energy Policy Act of 2005 (EPACT05). The section of the act that includes the preference provides federal agencies with mechanisms that can support development and use of tribal energy resources. The mechanisms include grants to assist tribes in developing their energy resources and authorization for agencies to give preference to majority tribally owned sources in federal energy purchases, so long as they pay no more than prevailing market prices and obtain no less than prevailing market rate terms and conditions. According to DOE, tribal lands account for 2 percent of U.S. land but contain about 6.5 percent of all utility-scale U.S. renewable energy potential. 
GSA, DOD, and DOE officials identified five instances in the past when a tribe bid for a federal energy contract, and the agencies did not use the preference in any of those instances. GSA awarded a contract to tribes in two of the instances. In the first instance, the tribe submitted the best bid. In the second, GSA officials attempted to use the preference by limiting the energy contract solicitation solely to tribal sources, according to a stakeholder that worked on the project, but the GSA Administrator expressed concern about limiting competition in that manner. GSA instead used the small business preference authority, through which the tribe ultimately won the contract. DOD and DOE received the other three bids, which did not lead to contracts because either the cost was too high or the bid was not needed by the agency, according to agency officials. Federal officials noted that use of the preference is discretionary. EPACT05, which says agencies “may give preference,” does not require use of the preference, and the Federal Acquisition Regulation does not specifically address the preference. In November 2016, GAO reported that one reason federal agency officials cited for not using the preference was uncertainty about how to do so. GAO recommended that GSA develop guidance to clarify use of the preference across the federal government. GSA agreed that such guidance would be beneficial but stated that the Federal Acquisition Regulatory Council is the regulatory body empowered to address this issue. In April 2017, GSA presented the council with a business case on the issue. However, GSA officials told GAO that the council determined that the preference has limited application government-wide because it mainly affects GSA, DOD, and DOE, and that, accordingly, the council declined to issue regulations and recommended GSA consider nonregulatory paths. GSA then added the preference language to the form it will use if it delegates purchasing authority in the future. 
In 2018, federal agency officials told GAO they were uncertain how to use the preference. According to GSA and DOD officials, other statutes that authorize agencies to apply preferences for acquisition of goods and services from specific sources include more specific requirements in their statutory language, making them easier to apply. GSA officials noted that the Small Business Act, as amended, contains specific requirements and measurable goals that increase contracts awarded to small businesses. DOD officials stated that the agency might use the tribal energy preference if EPACT05 had similar requirements. What GAO Recommends To the extent that Congress wants to further encourage use of tribally owned energy sources, it should consider amending EPACT05 to provide more specific direction to federal agencies for implementing the tribal energy preference, to include consideration of additional incentives or requirements.
gao_GAO-20-467
Background American Samoa’s Geography and Demographics American Samoa consists of five volcanic islands and two coral atolls in the South Pacific, about 2,600 miles southwest of Hawaii (see fig. 1). American Samoa has a combined land area of 76 square miles, slightly larger than Washington, D.C. Approximately 98 percent of the population of American Samoa lives on the main island of Tutuila, and most economic activity (including tuna canning) and government operations take place in and around the harbor of the capital city, Pago Pago, on Tutuila (see fig. 2). Most of Tutuila consists of rugged terrain with little level land. With a significant portion of its population and infrastructure located in low-lying coastal areas, American Samoa faces the risk of tsunamis and other coastal hazards. In September 2009, a tsunami following a magnitude 8.1 earthquake left 34 people dead in American Samoa, and caused severe damage to homes, businesses, and water and electrical infrastructure. In February 2018, Tropical Storm Gita struck the territory, causing widespread damage; at least 50 percent of American Samoan residents faced some level of property loss, according to American Samoa Department of Commerce estimates. The American Samoa government estimates that the disaster caused nearly $200 million in damages to public and private property. In response to both natural disasters, the federal government issued major disaster declarations and assisted with recovery efforts. The 2010 U.S. Census found American Samoa’s population to be 55,519, a decrease of 3 percent from its 2000 population. Individuals who are neither U.S. citizens nor U.S. nationals, most of them from the Independent State of Samoa, constituted approximately 35 percent of the territory’s population in that year. BEA most recently estimated American Samoa’s 2018 population to be approximately 58,000. 
The 2010 census also reported that American Samoa’s median household income remained well below, and its poverty rate well above, that of the United States. In 2009, American Samoa’s median household income was $23,892, 47 percent of the U.S. median household income, while its poverty rate was 57.8 percent, nearly four times the U.S. rate of 15.1 percent. American Samoa’s Relations with the United States U.S. interest in the Samoan islands began in 1872 with efforts by the U.S. Navy to establish a naval station in Pago Pago Harbor. A U.S.-British-German protectorate over all Samoan islands ended in 1899, when the islands that constitute American Samoa were placed under U.S. control. The U.S. Naval Station in the territory was established in 1900. From 1900 through 1904, the U.S. government negotiated control over American Samoa, and the U.S. Navy subsequently took responsibility for federal governance of the territory. In 1951, governance was transferred to the Secretary of the Interior. In 1960, American Samoa residents adopted their own constitution, but amendments to the constitution may be made only by an act of Congress. Persons born to non-U.S. citizen parents in American Samoa are U.S. nationals but may apply to become naturalized U.S. citizens. In addition, U.S. non-citizen nationals from American Samoa have the right to travel freely, live, and work throughout the United States. American Samoa exercises authority over its immigration system and customs through locally adopted laws. While American Samoans may serve in the U.S. military, they do not have voting representation on legislation before the full U.S. Congress, including legislation setting the minimum wage in American Samoa. The United States provides assistance to the American Samoa government, with federal funding making up the majority of its revenue. In fiscal year 2018, the American Samoa government’s financial audit reported that U.S. 
federal grants provided approximately $150 million of $246 million in total American Samoa government revenue. Ranked by approximate grant expenditures, the largest federal grantors were the Departments of Health and Human Services ($43 million), Agriculture ($33 million), Interior ($30 million), Education ($28 million), Transportation ($18 million), and Homeland Security ($5 million). Minimum Wage Law in American Samoa The federal minimum wage was first enacted as part of the Fair Labor Standards Act of 1938 (FLSA). The FLSA specified that for industries engaged in commerce or in the production of goods for commerce, its policy was to correct and, as rapidly as practicable, to eliminate labor conditions detrimental to the maintenance of the minimum standard of living necessary for health, efficiency, and general well-being of workers without substantially curtailing employment or earning power. Since 1938, there have been nine amendments to the FLSA establishing new minimum wages and usually raising the rate through a series of steps over 2 to 4 years. The FLSA was amended in 1956 to provide for American Samoa minimum wages to be established through a special industry committee (SIC) process similar to that used in Puerto Rico and the U.S. Virgin Islands. Federal policy called for the minimum wage rates for industries in American Samoa to reach the federal level as rapidly as was economically feasible without substantially curtailing employment. The final SIC, which recommended minimum wages to be applied in 2005 and 2006, recommended minimum wages for 18 industry categories. These 18 industry categories remain in existence for present-day minimum wages in American Samoa. Since 2007, U.S. federal law has determined minimum wages in American Samoa. In 2007, Congress passed the Fair Minimum Wage Act of 2007, which eliminated the SICs and created a schedule of increases to American Samoa minimum wages that has since been revised and applied over a number of years. 
The Fair Minimum Wage Act of 2007 amended the FLSA, raising the federal minimum wage in a series of three steps from $5.15 to, effective July 2009, $7.25 per hour. The amended provision also eliminated the SICs in American Samoa and introduced a schedule for raising the minimum wages, by equal amounts, until all 18 minimum wage categories in American Samoa reached the federal level. According to the U.S. Department of Labor, when the law was enacted, nearly 80 percent of eligible American Samoa workers earned less than $7.25 per hour. The initial Fair Minimum Wage Act of 2007 schedule, which called for $0.50 annual increases, would have increased all American Samoa minimum wages to the current federal level by May 2016. After the initial (2007) schedule, each subsequent law revising the schedule of minimum wage increases for American Samoa extended the projected dates for American Samoa minimum wages to reach the federal level. Measures adopted in 2009 and 2010 retained the $0.50 increases but delayed their application, so that convergence between the American Samoa minimum wages and the federal level would have occurred in 2018 rather than 2016. Subsequent measures—applying increases every third year and reducing each increase from $0.50 to $0.40—delayed convergence of American Samoa minimum wages with the federal level by more substantial intervals. The current schedule establishes increases of $0.40 every 3 years for all 18 industry categories in American Samoa, with the most recent increase in September 2018 and the next increase scheduled for September 2021. If American Samoa minimum wages continue to increase by $0.40 every 3 years as scheduled, and if the current federal level does not increase, the highest minimum wage in American Samoa, for the stevedoring industry, will reach the federal level in 2027, while the lowest minimum wage, for the garment manufacturing industry, will reach the federal level in 2036. 
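The convergence arithmetic under the current schedule can be sketched as follows. This is a minimal illustration, not an official calculation: the 2018 starting rates used below are hypothetical values chosen so that the results match the convergence years stated above (the actual rates by industry appear in appendix III), and the sketch assumes the federal minimum stays at $7.25.

```python
FEDERAL_MINIMUM = 7.25
STEP = 0.40  # increase applied every third year under the current schedule

def convergence_year(wage_2018: float) -> int:
    """Return the year an American Samoa minimum wage reaches the federal
    level, assuming $0.40 increases every 3 years starting after the
    September 2018 step and a constant federal minimum of $7.25."""
    wage, year = wage_2018, 2018
    while wage < FEDERAL_MINIMUM:
        year += 3
        wage = round(wage + STEP, 2)  # round to avoid float drift
    return year

# Hypothetical 2018 rates chosen for illustration:
print(convergence_year(6.05))  # 2027, consistent with the highest rate (stevedoring)
print(convergence_year(4.85))  # 2036, consistent with the lowest rate (garment manufacturing)
```

The key design point is that convergence depends only on the gap to $7.25 and the $0.40-per-3-years pace, which is why later schedule revisions that shrank the step and stretched the interval pushed convergence out by decades.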
Minimum wages for the largest employer overall, government, and the largest private-sector employer, the fish canning and processing industry, will reach the federal level by 2036 and 2033, respectively. Table 1 shows past and projected minimum wages in American Samoa for these industries. (App. III shows the current federal minimum wage in American Samoa by industry.) Since 1957, American Samoa minimum wages have risen, first as recommended by SICs and then in accordance with schedules set by legislation. However, with the exception of 1986, when the highest American Samoa minimum wages—for fish canning and processing, petroleum marketing, and stevedoring—converged with the federal level of $3.35, American Samoa minimum wages have remained below the federal level (see fig. 3). From 2007 to 2018, American Samoa’s Economy Contracted and American Samoa Employment Varied, While Workers’ Earnings Generally Declined American Samoa’s economy largely contracted during the past decade. Adjusted for inflation, gross domestic product declined by 18.2 percent from 2007 to 2017, though it increased by 2.2 percent in 2018. According to the American Samoa Department of Commerce, the 2018 uptick is likely to be temporary, partly reflecting reconstruction activity for Tropical Storm Gita. Changes in government spending and the tuna canning industry, including disaster-related federal funding and cannery closures, have impacted American Samoa’s economy. From 2007 to 2018, American Samoa employment varied by year without a clear trend, while workers’ inflation-adjusted earnings generally declined. American Samoa continues to depend on the territorial government and tuna canning industry as key sectors. The American Samoa government continues efforts to diversify the economy, and in recent years, these efforts have centered on the development of a new industry, telecommunications. 
American Samoa Employment Varied by Year from 2007 to 2018 While Workers’ Inflation-Adjusted Earnings Generally Declined Employment Employment did not exhibit a clear trend, but varied from year to year from 2007 to 2018. Specifically, it ranged from about 16,000 to about 20,000 with a peak year in 2009. In 2018, employment was at the same level as it was in 2007, at about 17,000. Figure 5 shows the trend in employment in American Samoa over this period. In addition, we analyzed data from alternative sources, which also showed that employment lacked a clear trend from year to year. According to American Samoa Statistical Year Book data, employment ranged from about 14,000 to 19,000 from 2007 to 2017 with a peak in 2010. According to the U.S. Census Bureau’s County Business Pattern data, which mostly excludes certain groups such as the public sector, private sector employment ranged from about 7,000 to 10,000 from 2008 to 2017, with a peak in 2009. For more information, see appendix IV. Inflation-Adjusted Earnings Average earnings of employed workers contracted from 2007 to 2018 when adjusted for inflation. For the overall period from 2007 to 2018, average inflation-adjusted earnings fell by about 11 percent (from about $11,000 to about $10,000), reflecting an increase in average annual earnings of about 29 percent and an increase in prices of about 44 percent. For the most recent year available, 2017 to 2018, average inflation-adjusted earnings were almost unchanged, growing by about 1 percent. Figure 6 shows the trend in earnings in American Samoa from 2007 to 2018. For more information, see appendix IV. Government and Tuna Canning Remain Key Sectors of American Samoa’s Economy The territorial government and tuna canning industry are important sectors of American Samoa’s economy, contributing almost half of American Samoa’s employment and GDP. 
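As a rough check of the inflation adjustment described above (nominal earnings up about 29 percent and prices up about 44 percent from 2007 to 2018), the real change in earnings can be computed by deflating nominal growth by price growth. Because the inputs here are the rounded figures from the text, the result is about a 10 percent decline, slightly different from the report's approximately 11 percent, which is based on unrounded data.

```python
def real_change(nominal_growth: float, inflation: float) -> float:
    """Percent change in inflation-adjusted earnings: deflate nominal
    earnings growth by price growth over the same period."""
    return (1 + nominal_growth) / (1 + inflation) - 1

# Rounded figures from the report, 2007 to 2018:
change = real_change(0.29, 0.44)
print(f"{change:.1%}")  # -10.4% with these rounded inputs
```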
The American Samoa government and the tuna canning industry have historically employed the largest numbers of workers in American Samoa. In 2018, the government sector employed about 42 percent of American Samoa’s workforce and the tuna cannery employed about 14 percent (see fig. 7). The territorial government continues to be the largest employer, while the tuna canning industry continues to be the largest private sector employer. The government and the tuna canning industry also remain large contributors to GDP in American Samoa. In 2017, government and manufacturing (primarily composed of tuna canning) contributed 42 percent of American Samoa’s total GDP (see fig. 7). The tuna canning industry plays a key role contributing to the territory’s trade, primarily through exports. According to U.S. Census Bureau data, processed tuna annually accounted for over 88 percent of exports from American Samoa to the United States from 1995 to 2018. According to American Samoa government officials, government and the tuna canning industry are the two main pillars of the economy and sustain other industries across the territory. The territory’s component units, including the Lyndon B. Johnson Tropical Medical Center, American Samoa Community College, American Samoa Power Authority, and American Samoa Telecommunications Authority, provide healthcare, higher education, utility, and telecommunications services, respectively. The tuna canning industry provides direct and indirect benefits to other industries. American Samoa Department of Commerce officials stated that the remaining cannery generates demand for support industries such as transportation and warehousing, retail and wholesale, and construction. American Samoa government officials also noted that the cannery’s large demand for shipping, transportation, and energy might reduce the cost of these services for the entire territory. 
In 2017, canned tuna constituted over 90 percent of American Samoa’s exports, and fish for processing constituted over 35 percent of American Samoa’s imports (see fig. 8). The American Samoa Government Continues Efforts to Diversify the Economy To reduce the territory’s dependence on its government and the tuna canning industry, the American Samoa government continues its efforts to diversify the economy. According to the American Samoa government, the territory’s dependence on the government and the tuna canning industry has exposed the economy to external risks, including changes in federal grant funding and global competition in the tuna canning industry. To reduce this dependence, the government has developed plans to diversify the economy. American Samoa’s economic development implementation plan for fiscal years 2014 to 2017 and economic development strategy for 2018 to 2022 outline economic development goals for sectors such as transportation and tourism, as well as action items to achieve these goals. The American Samoa government has identified ecotourism as an economic opportunity because the island’s mountains, tropical rainforests, coral reefs, and National Park may be attractive to tourists (see fig. 9). However, the American Samoa government has cited the federal restrictions on competition in passenger air carrier service to American Samoa as an impediment to developing the tourism sector. The United States restricts foreign airlines from carrying U.S. domestic passengers or cargo between U.S. locations, other than as part of a through trip involving a foreign location (cabotage), unless authorized by the U.S. Department of Transportation on the basis of specific criteria. According to the American Samoa government, as of August 2019, there are two passenger air flights per week between American Samoa and the United States (via Hawaii), with a third weekly flight added during peak travel seasons. 
American Samoa’s 2016 Workforce Innovation and Opportunity Act Unified Plan targets the development of five industries: fisheries and agriculture, telecommunications and information technology, manufacturing, visitors, and handicrafts. The plan notes that American Samoa is experiencing emigration of workers to the United States, countered in part by immigration of tuna cannery workers from neighboring islands to American Samoa. The plan cites low wages as a reason that high-skilled members of the labor force leave the territory. In recent years, the American Samoa government’s efforts to diversify the economy have centered on the development of the telecommunications industry. The government has made major investments in telecommunications infrastructure over the past 5 years. American Samoa Telecommunications Authority officials told us that they have managed the development of the territory’s telecommunications infrastructure projects. Completed in 2015, the Broadband Linking the American Samoa Territory (BLAST) project replaced the territory’s copper infrastructure with a fiber optic network capable of delivering high-speed data, voice, and cellular backhaul services. The U.S. Department of Agriculture’s Rural Utility Service funded the over $90 million project with an approximately $81 million grant and $10 million loan. According to American Samoa Telecommunications Authority officials, the Hawaiki cable project, completed and activated in 2018, added bandwidth to the BLAST network by connecting the territory via an underwater cable branch to the main Hawaiki cable trunk in Hawaii. The officials stated that the Hawaiki cable is a 15,000 kilometer, high-capacity underwater cable connecting Australia and New Zealand to the mainland United States, American Samoa, and Hawaii. The American Samoa government invested approximately $30 million to acquire its connection to the Hawaiki cable, using funding from American Samoa’s 2018 general revenue bond series. 
According to American Samoa Telecommunications Authority officials, other ongoing, multi-million dollar projects to enhance the territory’s telecommunications infrastructure include projects to upgrade BLAST bandwidth distribution and replace the territory’s 2G network with LTE technology. The American Samoa government believes that the newly activated Hawaiki cable and BLAST fiber optic network have raised the territory’s potential to develop new industries tied to telecommunications, including information communication technology and business process outsourcing. According to an American Samoa Department of Commerce survey of over 50 public and private stakeholders, 64 percent of respondents—the largest share—identified information communication technology as one of the most promising economic development opportunities for the territory. The next four most promising opportunities, with the approximate share of respondents in parentheses, were “Attracting investors for capital investment projects” (58 percent), “General Tourism” (47 percent), “Ecotourism” (47 percent), and “Federal Programs” (47 percent). American Samoa government officials acknowledge that despite progress made, American Samoa’s telecommunications industry is still at an early stage of development. The American Samoa government seeks to attract new telecommunications businesses, including a proposed call center, by identifying various competitive advantages for locating in American Samoa. American Samoa Department of Commerce officials stated that these advantages include an English-speaking (American) workforce with the lowest labor costs in the United States, and the territory’s qualification as an on-shoring location for call centers and other business process outsourcing operators.
American Samoa Department of Commerce and American Samoa Telecommunications Authority officials stated that they are currently developing a territorial broadband strategy and proof of concept for a call center industry, expected to be released in mid-2020. Additionally, American Samoa Telecommunications Authority officials expect the Territorial Bank of American Samoa, opened in October 2016, to support the efforts to develop the telecommunications industry by encouraging investment in financial technology businesses. American Samoa Telecommunications Authority officials stated that the bank is partnering with the authority to develop internet banking services, which are expected to be offered in the next 2-3 years. American Samoa’s Tuna Canning Industry Faces Multiple Challenges, Including Increased Competition and Minimum Wage Increases American Samoa’s tuna canning industry faces multiple challenges, including increased competition and minimum wage increases, which led to cannery closures from 2007 to 2018. The companies that experienced the closures explained that minimum wage increases were a factor in the closures but not a main factor. With the closures, employment of cannery workers decreased, but inflation-adjusted earnings of cannery workers who maintained their jobs increased. StarKist Co. now operates the single remaining cannery in American Samoa, StarKist Samoa, but faces financial challenges. In addition to increased competition and labor market challenges, the industry faces other challenges, such as lower wages relative to those in American Samoa for cannery workers in other tuna-exporting countries. However, American Samoa offers the tuna canning industry advantages relative to the U.S. mainland and other countries, including lower wages compared to those in the U.S. mainland as well as duty-free access to the U.S. canned tuna market, according to StarKist Samoa officials.
American Samoa’s Tuna Canning Industry Experienced Cannery Closures from 2007 to 2018 American Samoa’s tuna canning industry experienced cannery closures from 2007 to 2018 that adversely impacted the economy in that time period, as mentioned earlier. (For a timeline of selected events related to American Samoa’s tuna canning industry, see app. V.) StarKist Co., Chicken of the Sea, and Samoa Tuna Processors, which is owned by Tri Marine International (Tri Marine), have each operated or closed canneries in American Samoa over the years, as follows. StarKist Co. StarKist Co. (headquarters in Pittsburgh, Pennsylvania) has operated a cannery, StarKist Samoa, in American Samoa since 1963. StarKist Samoa is the one remaining cannery on the island, as mentioned earlier (see fig. 10). As of June 2018, StarKist Samoa employed 2,439 hourly wage workers. Chicken of the Sea. According to CRS, in the 1950s, the Department of the Interior contracted with Van Camp Seafood Company to move onto the island and develop a fish processing plant. Chicken of the Sea (headquarters in El Segundo, California) later operated this cannery, the Samoa Packing plant, which its parent company, Thai Union, closed in September 2009. According to Chicken of the Sea officials, limited tuna supply was a key factor in the decision to close the cannery. The American Samoa minimum wage increases were a minor factor in the closure, less significant than factors related to tuna supply, labor availability, logistics, and utility costs. The company relocated its canning operations to the U.S. state of Georgia while outsourcing the more labor-intensive processes, including cleaning and cooking tuna loins (a low-tariff U.S. import), to countries with lower labor costs.
By relocating to Georgia, Chicken of the Sea noted that it improved flexibility in sourcing and processing fish from multiple locations depending on where supply was readily available. Tri Marine International (Tri Marine). Tri Marine (headquarters in Bellevue, Washington) acquired the former Chicken of the Sea cannery in American Samoa in October 2010, undertook a multi-million dollar investment to renovate and expand it, and opened the new facility under the name Samoa Tuna Processors in January 2015. However, Tri Marine suspended its canning operations in American Samoa indefinitely in December 2016, primarily in response to highly competitive price setting across the global tuna canning industry, according to Tri Marine. Tri Marine explained that the American Samoa minimum wage increases were a minor factor—not as significant as rising price competition and high production costs, such as for utilities—in contributing to Samoa Tuna Processors’ closure. The company subsequently transferred its canned tuna sourcing operations from American Samoa to Thailand, Peru, and the Solomon Islands to take advantage of decreased production costs. According to a report by the Pacific Islands Forum Fisheries Agency, in 2018, StarKist Co. signed a 10-year lease agreement to use Tri Marine’s Samoa Tuna Processors facility for StarKist Samoa operations. According to a Tri Marine official, in 2019, the Bolton Group (Italy) completed its acquisition of Tri Marine. The acquisition did not include Samoa Tuna Processors, and the Tri Marine ownership change did not affect the 10-year lease agreement between StarKist Co. and Samoa Tuna Processors, according to the official. Cannery Employment Level Decreased and Worker Inflation-Adjusted Earnings Increased from 2007 to 2018 From 2007 to 2018, cannery employment in American Samoa fell from about 4,500 in 2007 to 2,469 in 2018, a decline of 45 percent.
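The employment decline just cited, and the minimum wage and price trends discussed in this section, can be cross-checked with a back-of-envelope calculation. The sketch below is illustrative only; every input comes from figures cited in this report (approximate 2007 and 2018 cannery employment, the minimum wage of $3.26 in the first half of 2007 rising to $5.56, and a cumulative price increase of 44 percent).

```python
# Back-of-envelope check of the cannery employment and minimum wage
# figures cited in this section of the report.
employment_2007 = 4500   # approximate cannery employment, 2007
employment_2018 = 2469   # cannery employment, 2018

decline = 1 - employment_2018 / employment_2007   # ~0.45, i.e., about 45 percent

wage_2007 = 3.26         # hourly minimum wage, first half of 2007
wage_2018 = 5.56         # hourly minimum wage as of 2018
price_growth = 0.44      # cumulative price increase over the same period

nominal_growth = wage_2018 / wage_2007 - 1                    # ~0.71, the "70 percent" cited
real_growth = (1 + nominal_growth) / (1 + price_growth) - 1   # ~0.18 after inflation

print(f"employment decline:  {decline:.1%}")
print(f"nominal wage growth: {nominal_growth:.1%}")
print(f"real wage growth:    {real_growth:.1%}")
```

Consistent with the report's discussion, the nominal minimum wage roughly kept a 70 percent rise against 44 percent price growth, leaving real minimum wage growth of roughly 18 percent for workers who kept their jobs.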
Most of the decline occurred in the period between 2007 and 2010, with the closure of the Chicken of the Sea cannery. Figure 11 shows the trend in cannery employment in American Samoa over this period. The inflation-adjusted earnings of cannery workers in American Samoa who have maintained their jobs during this period have increased. In American Samoa, the vast majority of cannery workers earn close to the minimum wage. Moreover, the hourly wage of minimum wage cannery workers has increased by more than inflation since 2007. Specifically, during this period, the minimum wage has risen by 70 percent (from $3.26 in the first half of 2007 to $5.56), while prices have increased by 44 percent. However, this analysis does not include those workers who have lost employment or have had hours cut. StarKist Co. Faces Continuing Financial Challenges Because of Legal Issues StarKist Co. faces continuing financial challenges because of legal issues, as follows. In 2019, StarKist Co. was sentenced to pay a criminal fine of $100 million, the statutory maximum, for its role in a conspiracy to fix prices for canned tuna sold in the United States. This fine amounts to almost three times StarKist Samoa’s cost of labor in 2018. According to StarKist Co.’s General Counsel, the company will potentially have to close the cannery in American Samoa and move operations to a foreign country to afford to pay the fine for price-fixing. For its role in price-fixing, StarKist Co. has faced—and may continue to face—lawsuits from wholesalers, food service companies and retailers, and customers. For example, in January 2019, StarKist Co. announced that its portion of a settlement with Walmart was $20.5 million, based on a combination of cash payment and certain commercial terms. In addition, in September 2017, StarKist Co. agreed to pay a $6.3 million penalty resulting from violations of federal environmental laws, according to the U.S. Department of Justice. The U.S.
Department of Justice and the U.S. Environmental Protection Agency reached an agreement with StarKist Co. and StarKist Samoa, requiring a series of upgrades to reduce pollution, improve safety measures, and comply with important federal environmental laws at their tuna processing facility in American Samoa, the department reported. American Samoa’s Tuna Canning Industry Faces Continuing Challenges in Addition to Minimum Wage Increases American Samoa’s tuna canning industry faces multiple challenges in addition to scheduled minimum wage increases. One challenge is rising competition in the global tuna canning industry, as the value of foreign processed tuna exports to the United States has increasingly exceeded the value of American Samoa processed tuna exports to the United States (see fig. 12). Specifically, tuna industry officials stated that firms in the U.S. canned tuna market are highly competitive in price setting as opposed to differentiating their product lines. A tuna canning industry official stated that price competition and the financial pressures of the recent antitrust judgments have forced the U.S. canned tuna market into a cost-cutting environment. According to the same tuna canning industry official, firms must look to lower costs related to labor, energy usage, and shipping to remain competitive in the U.S. market. The official stated that firms implicated in the price-fixing scheme have agreed as part of a legal settlement resulting from a lawsuit to supply their product at lower prices. This puts more pressure on firms to implement cost-saving measures to maintain their U.S. market shares. For example, StarKist Samoa has implemented cost-saving measures to reduce labor and energy costs and has also raised prices and relocated business off the territory. American Samoa’s tuna canning industry also faces other challenges, as described below. Competitors’ canning production strategies. According to StarKist Co.
officials, StarKist Co.’s main competitors implement a supply chain production process that spans more than one country. Conversely, StarKist Samoa’s full production process still occurs in American Samoa (see fig. 13). According to StarKist Samoa officials, the cost difference between a fully U.S.-based manufacturing process and an outsourced manufacturing process is substantial and places American Samoa at a distinct disadvantage. According to StarKist Samoa officials, StarKist Co.’s main competitors use a model that outsources the workforce-intensive process to extremely low-wage countries. They explained that StarKist Co.’s competitors clean, cook, and freeze the tuna before importing it—subject to an average tariff of $11 per metric ton—into the mainland United States, where it is then thawed and packaged. Furthermore, our analysis of the global tuna industry suggests that, under certain assumptions, this model can improve cost savings and competitiveness. See appendix VI for the results of our analysis of the global tuna industry and more details about the assumptions we used. Tuna canning industry officials also stated that a new production process combined with lower labor costs for packaging tuna in foreign countries decreases American Samoa’s competitiveness as a location of operation. Lower wages for cannery workers in other countries, relative to those in American Samoa. According to a tuna canning industry official, tuna canneries have moved operations from American Samoa to Thailand, Peru, and the Solomon Islands, in part because of the lower labor costs. According to an industry official, one prominent tuna exporting country offers wages as low as $10 per day, whereas a full-time worker in 2020 at the cannery in American Samoa would earn over $44 per day. Upcoming minimum wage increases in American Samoa. Upcoming minimum wage increases in American Samoa will increase labor costs for the tuna canning industry.
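To illustrate the scale involved, one scheduled step of 40 cents per hour (the step size cited elsewhere in this report), applied to the roughly 2,200 StarKist Samoa workers the report identifies as paid near the minimum wage, translates into added annual labor costs as follows. This is a back-of-envelope sketch: the 2,080 annual full-time hours per worker is an assumption for illustration, and actual hours per worker vary.

```python
# Illustrative estimate of the annual labor-cost impact of one scheduled
# minimum wage step in American Samoa. Worker count and step size are
# taken from figures in this report; full-time hours are assumed.
affected_workers = 2200      # workers paid near the minimum wage (report figure)
wage_step = 0.40             # scheduled increase, dollars per hour (report figure)
full_time_hours = 2080       # 40 hours/week x 52 weeks (assumption)

added_cost = affected_workers * wage_step * full_time_hours
print(f"approximate added annual labor cost: ${added_cost:,.0f}")
```

This yields roughly $1.8 million per year, in line with the approximately $2 million labor-cost increase that StarKist Samoa's data imply for the 2021 minimum wage levels.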
According to data provided by StarKist Samoa, most cannery workers in American Samoa would be impacted by a minimum wage increase. Specifically, over 90 percent of StarKist Samoa’s employment (roughly 2,200 workers) could be affected by the next minimum wage increase scheduled for September 30, 2021. At 2018 levels of employment, labor costs could increase by about $2 million at 2021 minimum wage levels. Decreased direct access to tuna supply. A number of factors have decreased direct access to tuna supply. The Pacific Remote Islands Marine National Monument regulations have had the biggest impact on tuna supply to the cannery, according to StarKist Co. officials. Also according to StarKist Co. officials, marine monuments in the region have closed fishing grounds to U.S. purse seine vessels that historically delivered tuna to local canneries in American Samoa, and the Rose Atoll Marine National Monument reduced fishing grounds in U.S. waters around American Samoa that were very important to the American Samoa longline fleet. In 2017, the National Marine Fisheries Service removed a regulatory exemption that had allowed certain large U.S. longline vessels to fish in portions of the American Samoa Large Vessel Prohibited Area. Delivery volume from a Chinese tuna supplier that used to send fishing boats to supply canneries in American Samoa directly has decreased significantly as a result of China paying subsidies to Chinese fishing vessels in the Pacific, according to StarKist Co. officials. The subsidy draws potential tuna suppliers from the American Samoa market to the Chinese market, the officials stated. American Samoa Offers the Tuna Canning Industry Advantages Relative to the U.S. Mainland and Other Countries American Samoa offers the tuna canning industry certain competitive advantages relative to the U.S. mainland and other countries, as follows. Lower wages for cannery workers in American Samoa relative to those on the U.S. mainland.
American Samoa offers lower labor costs relative to the U.S. mainland. For example, while the 2020 minimum wage for fish canning and processing in American Samoa is $5.56 per hour, the manufacturing wage in Georgia is $15 per hour. Tariff-free access to the U.S. canned tuna market. According to StarKist Co. officials, U.S. trade policies provide tariff-free access to the U.S. market for processed tuna from American Samoa, while foreign suppliers generally are subject to tariffs for these goods. Foreign suppliers’ canned or pouched tuna is subject to an average tariff rate of 12 percent. However, U.S. trade agreements with certain countries may provide those countries tariff-free or reduced-tariff access to the United States. Tax credits provided by the federal and local governments. The American Samoa tuna canning industry receives both federal and local tax benefits. U.S. tax policies have reduced federal taxes on income earned by qualifying U.S. corporations investing in American Samoa. Under the Internal Revenue Code, qualifying American Samoa tuna canneries have received an economic development credit for U.S. corporate income taxes. StarKist Samoa reported saving $5.9 million in 2016 through this tax credit. Canneries in American Samoa have also benefited from exemptions from local taxes. According to American Samoa government officials, the local tax exemption has allowed StarKist Samoa to reduce its corporate tax liability to the American Samoa government to 20–25 percent of the amount owed. According to American Samoa government officials, the total corporate and excise tax revenue loss to the American Samoa government is estimated to be $15–20 million annually. Federal procurement opportunities related to canned tuna. According to StarKist Samoa officials, operating in American Samoa offers access to certain U.S. government contracts that require U.S.-sourced and -processed fish, and allows them to comply with Buy American requirements.
However, according to the officials, most school districts that enter into such contracts waive the Buy American requirements because StarKist Co. is the only tuna company that qualifies, and as a result, competitive bids reveal that the cost of domestic product is significantly higher than the cost of non-domestic product. StarKist Samoa reported that $15.8 million or 4 percent of its revenue in 2018 was from federal procurement that included contracts with the U.S. Department of Agriculture and the U.S. military. The American Samoa Government and Chamber of Commerce View the Minimum Wage Increases as Conflicting with Sustainable Economic Development; Employers and Workers Noted Benefits and Challenges The American Samoa Government and Chamber of Commerce View Minimum Wage Increases as Conflicting with Sustainable Economic Development The American Samoa government and Chamber of Commerce both view the minimum wage increases as conflicting with sustainable economic development. Both expressed concerns about the reliance of American Samoa’s economy on the tuna canning industry and the potential negative impact of minimum wage increases on the remaining cannery in American Samoa. The American Samoa government stated that it supports a minimum wage that its economy can support. While the American Samoa government noted that it is committed to ensuring that the people of American Samoa can meet the basic cost of living, it stated that the impact of upcoming minimum wage increases on StarKist Co. would be extensive. The American Samoa government predicts that it would take years for the economy to recover if StarKist Co. should cease operations in American Samoa, and suggested that the burden of any economic impact would be on the federal government. The American Samoa government specified challenges that it believes StarKist Co. 
currently faces, including recent federal fines, decreasing supply of tuna, higher infrastructure costs in American Samoa compared to those of other countries, and increased regulation costs by the U.S. Coast Guard and U.S. Environment Protection Agency. In October 2019, the American Samoa Minimum Wage Task Force, commissioned by the Governor of American Samoa, provided us with its findings and recommendations. It reported that American Samoa's economy is unique and starkly different from the economies of all U.S. states and territories, and that, aside from the American Samoa government, the remaining and only pervasive economic driving force in the territory is StarKist Samoa. It also noted that its main objective is to express to the U.S. Congress the importance of involving the territory in the process of determining the applicable minimum wage for American Samoa. The task force identified various policy options and recommended that a combination of a moratorium on minimum wage increases and special industry classification or a special industry committee would increase and maximize the opportunity for local stakeholder participation. These have been long-standing positions of the American Samoa government. In response to a prior report, the American Samoa government requested we convey its position to the U.S. Congress to postpone the minimum wage increases. In response to another prior report, the American Samoa government recommended the pursuit of a U.S. Department of Labor-constituted committee similar to a special industry committee. While the American Samoa Chamber of Commerce noted that its employers support fair minimum wages for their workers, it stated that it supports any delay in minimum wage increases for the cannery until another economic option is feasible. 
The American Samoa Chamber of Commerce explained that while data show a shift in employment away from the cannery, StarKist Samoa still provides significant financial benefits to American Samoa in the form of decreasing fuel and shipping costs. The American Samoa Chamber of Commerce predicts that any increase in minimum wage could force the closure of StarKist Samoa and drive American Samoa into a recession. Selected Employers and Workers Noted Benefits and Challenges Presented by Minimum Wage Increases Public and private sector employers and workers we interviewed commented on the impact of minimum wage increases, including potential benefits and challenges. Potential positive impact on the livelihood of workers. Multiple employers and workers we met with stated that increasing the minimum wage would have a positive impact on the livelihood of workers. For example, a worker said that minimum wage increases have helped offset the increasing prices of imported products and excise tax products. Another worker stated that minimum wage increases help people to meet their community and church financial obligations. Some employers and workers noted that minimum wage increases improve customers’ ability to pay bills and their likelihood of using necessary services. Potential negative impact on the remaining cannery. Multiple workers and employers we met with generally stated that minimum wage increases could have a negative impact on StarKist Samoa. Multiple workers stated that such an impact could result in a loss of jobs and increases in shipping costs, among other things. Some public employers were concerned that minimum wage increases could lead to the closure of the remaining cannery, and one of them stated that the potential closure was the main factor in the minimum wage increase discussion. One public worker stated that StarKist Samoa closing the remaining cannery is a major concern because the company is the main source of tax revenue.
Another public worker added that having already seen a cannery close after minimum wage increases has raised concerns that it might happen again with StarKist Samoa. In addition, a private employer stated that after the Samoa Tuna Processors cannery closed in 2016, the employer’s retail sales decreased sharply and the economy now relies on the remaining cannery, StarKist Samoa. Increased recruitment and retention of workers. Some employers and workers we met with noted that a higher minimum wage could lead to increased recruitment and retention. For example, multiple employers noted the challenges of recruiting and retaining skilled workers given the low wages on the island, which often compel such workers to leave the island for better opportunities. One employer said that it could not recruit without minimum wage increases. Another employer stated that even low-paid workers often leave the island to obtain better pay in higher-paying countries. Some employers and workers noted that the lack of staff, especially nurses and teachers, has led to challenges, such as a negative impact on healthcare and education on the island. One of these employers stated that the minimum wage is too low and there is a lack of good teachers on the island. This employer was upset that the local school did not have a math teacher, noting that teachers leave or simply do not come to work because the pay is too low. One of these workers stated that nurses have moved off-island because their pay is too low and because overwork has contributed to potential health hazards. Keeping American Samoan youth on the island. Multiple employers and workers we met with were concerned that the current minimum wage was insufficient to keep younger American Samoans on the island, especially those who are college-educated. For example, an employer stated that there is a lack of young talent because there are no jobs on the island and pay is low. 
Another employer stated that some American Samoans earn degrees abroad and come back to American Samoa, but find that they cannot advance their careers on the island and leave again after 1-2 years. Some workers we met with spoke as parents about their children leaving the island, and became emotional upon sharing that they did not anticipate their children returning. Wage stagnation versus wage compression. While some workers we met with said they believed that a lack of an increase would lead to wage stagnation, other workers, as well as employers, we met with said they believed that an increase would lead to wage compression. Some workers noted not receiving pay increases despite working for an employer for many years and obtaining certifications. For example, a worker stated that if it were not for minimum wage increases, the worker would not receive any pay raises. However, another worker was upset that colleagues who had just started working were receiving more money than those who had been with the employer for many years. Funding for minimum wage increases. Multiple employers and workers said they were concerned about how future minimum wage increases could be funded. For example, a public employer noted that it did not oppose the minimum wage increases because the current minimum wage was barely enough to survive on, but was concerned about where the funding and revenue to sustain the increases would come from. Another employer stated that identifying additional funds for minimum wage increases is a major challenge. This employer noted that the company had not yet laid off workers to fund minimum wage increases, but might have to consider it. Public and private sector employers and workers we interviewed also commented on the following topics related to minimum wage increases: Proposed alternatives. Multiple employers and workers suggested alternative ways of increasing minimum wages in American Samoa. 
For example, an employer stated that minimum wages should be set based on the actual conditions of American Samoa rather than on what it believed to be an arbitrary federal schedule, and a worker stated that the U.S. Department of Labor reviewing the minimum wage and making changes is preferred to scheduled changes. Minimum wage levels. In addition, while cannery workers we met with generally agreed that the current minimum wage was sufficient, other workers, as well as some employers, stated that the current minimum wage and the scheduled minimum wage increases were insufficient. While cannery workers generally noted that they were happy to have previous minimum wage increases, they were fearful that future increases could lead to a loss of hours or complete job loss should the cannery close. However, other workers disagreed. For example, one worker explained that minimum wage increases did not keep pace with the cost of living. Another worker stated that 40 cents every 3 years is only about 13 cents per year, which the worker considered insufficient. Some employers and workers became emotional when speaking about their own financial situations or those of their workers, relatives, or fellow American Samoans. Varying degrees of impact on the livelihood of workers. While public employers generally stated that the impact of the minimum wage increases on their workers was minimal, private employers noted varying degrees of impact on their workers. Some public employers stated that the majority of American Samoa government workers are paid above the minimum wage, and noted that the minimal impact was a result of the local government raising the minimum wage to $5 ahead of the 2018 minimum wage increase to $5.21. Potential positive impact on the economy if the remaining cannery closes. Some private employers stated that there could be a positive impact on the economy if the remaining cannery closes. 
For example, a private employer stated that the American Samoa economy is so used to having the cannery as its pillar that it has not truly tried to diversify the economy. This employer added that American Samoa needed to continue shifting away from the cannery and toward the rest of the private sector. Another private employer stated that the American Samoa economy is better off without the remaining cannery and that an economist’s analysis of the cannery’s true impact on the rest of the economy is needed. Cost of living. Multiple workers, as well as some employers, were concerned that minimum wage increases could lead to increases in the cost of living, with some noting that the cost of living in American Samoa is already high because the remote island depends heavily on imported goods. While a public employer stated that business owners pass the cost of minimum wage increases to customers, a private employer stated that business owners are unable to do so because of competition. Another employer noted concerns about inflation, stating that minimum wage increases might drive up prices and rent. Cultural considerations. Multiple employers and workers cited the importance of considering American Samoa’s unique culture when setting minimum wage increases. While some workers and an employer noted that the cost of living in American Samoa is unique in that communal land and living off the land through fishing and gardening could minimize housing and food costs, others noted that community and church financial obligations are significant costs. One employer stated that these costs could amount to up to a quarter of a worker’s paycheck. A worker stated that nonresidents, like many of the cannery workers, pay much higher medical costs; and an employer stated that foreign workers pay much higher housing costs. Other economic challenges.
Multiple public and private employers and workers cited an array of economic challenges other than minimum wage increases, including the high cost of living on the island and increased taxes and fees. For example, one employer stated that American Samoa government taxes and fee increases are more of a challenge than minimum wage increases. Agency Comments, Third Party Views, and Our Evaluation We provided a draft of this report to the U.S. Departments of Commerce, the Interior, and Labor, and the American Samoa government for comment. The Department of Commerce provided technical comments, which we incorporated as appropriate. The Department of Labor informed us that it had no comments. In its comments, reproduced in appendix VII, the Department of the Interior said it would be beneficial to the American Samoa government if we provided information on all potential alternatives to setting minimum wages, including the once-utilized special industry committees. Such a study was beyond the scope of this report, which focused on (1) economic trends including changes in employment and earnings since the minimum wage increases in American Samoa began in 2007, (2) the status of the tuna canning industry, and (3) stakeholder views on the minimum wage increases. In its comments, reproduced in appendix VIII, the American Samoa government noted that the draft report did not reference findings and recommendations of the American Samoa Minimum Wage Task Force, commissioned by the Governor of American Samoa. The task force recommended the establishment of a special industry committee and a moratorium on minimum wage increases to allow ample time for such a special industry committee to form. We have added information on these findings and recommendations. We are sending copies of this report to the appropriate congressional committees, the U.S. Secretaries of Commerce, the Interior, and Labor, and the Governor of American Samoa. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149, or gootnickd@gao.gov; or Oliver Richard at (202) 512-8424, or richardo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology This report updates our previous reports on the impact of minimum wage increases in American Samoa and examines (1) economic trends including changes in employment and earnings since the minimum wage increases in American Samoa began in 2007, (2) the status of the tuna canning industry, and (3) stakeholder views on the minimum wage increases. To examine economic trends including changes in employment and earnings, we analyzed gross domestic product data from the U.S. Bureau of Economic Analysis; tax and administrative data from the American Samoa government; and employment, earnings, and wage data gathered through an employer questionnaire that we submitted to American Samoa’s tuna canning industry. To examine the status of the tuna canning industry, we estimated changes in employment and earnings using the employer questionnaire, analyzed tuna trade data from the U.S. Census Bureau, and interviewed tuna cannery industry representatives and stakeholders. To examine stakeholder views on the minimum wage increases, we conducted interviews with officials from the American Samoa government and American Samoa Chamber of Commerce, and employers and workers from the public and private sectors. The federal sources generally used to generate data on employment and earnings in the United States, including the Current Population Survey and the Current Employment Statistics program, do not cover American Samoa. 
Therefore, we collected our own data on employment and earnings in American Samoa. Employment and Earnings Data Consistent with our prior reports, we reported on employment and earnings from 2007 to the most recent year available. Employment and earnings figures are based on our analysis of combined worker data from various sources. We used employer-level data that we obtained from the American Samoa Department of Commerce and Department of Treasury to measure the annual employment of the American Samoa government and its component units: (1) American Samoa Community College, (2) LBJ Tropical Medical Center Authority, (3) American Samoa Power Authority, and (4) American Samoa Telecommunications Authority. We used tuna canning industry employers’ responses to our employer questionnaire to estimate cannery employment and earnings. We used individual-level tax records that we received from the American Samoa Department of Treasury to measure annual employment and earnings in American Samoa’s private sector excluding the canneries. To adjust earnings for inflation, we relied on the Consumer Price Index (CPI) as provided by the American Samoa government. To estimate employment and earnings for non-cannery workers in the private sector, we relied on individual-level tax data that we obtained from the American Samoa Departments of Commerce and Treasury. We restricted the sample to tax records received for tax years 2005 through 2018. We excluded tax records that contained invalid values in the variables that uniquely identify employers and workers. We also excluded records that contained non-numeric values in Social Security withholdings. Together, these records accounted for less than 2 percent of all tax records in the sample between 2005 and 2018, and accounted for less than 1 percent of total gross wages during this period. 
In addition, we excluded a small number of tax records—26 out of over 130,000 total records during this period—that reported zero annual earnings under gross wages, Social Security, and Medicare wages. In addition, in less than 100 cases, we adjusted the reported gross wages of workers if the worker had reported Medicare or Social Security wages but had reported gross wages that were very extreme in value (for example, zero or over $300,000), under the assumption that these were data errors. We estimated annual employment by summing the number of workers reported by each employer, for employers for which there was at least one tax record reporting positive wages for a given year. Under this approach, it is important to note that if a worker had multiple employers, the worker was counted more than once. Because of data limitations, we did not include data for tax year 2015 in our analysis of employment and earnings in American Samoa. We excluded this year because, according to the American Samoa government, individual-level tax records for that year are incomplete. Consistent with this observation, we found that the data contained lower counts of employers and workers in the private sector excluding the canneries in tax year 2015 than in any other tax year between 2005 and 2018. With the exception of tax year 2015 data, we found the data on employment and earnings sufficiently reliable for the purposes of our reporting objectives. For more details on our methodology for estimating employment and earnings in comparison to our methodology used in previous reports, see appendix IV. Employer Questionnaire To examine the status of the territory’s key private sector industry—tuna canning—we estimated changes in employment and earnings by submitting an employer questionnaire to American Samoa’s tuna canning industry. 
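The employment-count method described above can be sketched as follows. This is an illustrative sketch only: the field names and record layout (employer_id, worker_id, gross_wages) are assumptions for illustration, not the actual American Samoa tax-record schema.

```python
# Illustrative sketch of the annual employment count described above.
# Field names are assumed for illustration; they are not the actual
# tax-record schema.

def count_annual_employment(records):
    """Sum the number of workers reported by each employer, for employers
    with at least one tax record reporting positive wages in the year.
    A worker with multiple employers is counted once per employer."""
    workers_by_employer = {}
    employers_with_positive_wages = set()
    for r in records:
        workers_by_employer.setdefault(r["employer_id"], set()).add(r["worker_id"])
        if r["gross_wages"] > 0:
            employers_with_positive_wages.add(r["employer_id"])
    return sum(
        len(workers)
        for employer, workers in workers_by_employer.items()
        if employer in employers_with_positive_wages
    )

# A worker holding jobs with two employers is counted twice, as the
# report notes.
records = [
    {"employer_id": "A", "worker_id": 1, "gross_wages": 6000},
    {"employer_id": "A", "worker_id": 2, "gross_wages": 7500},
    {"employer_id": "B", "worker_id": 2, "gross_wages": 1200},
]
print(count_annual_employment(records))  # 3
```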
In accordance with other federal employment surveys and with our employer questionnaires for our 2010, 2011, 2014, and 2016 reports on the impact of minimum wage increases in American Samoa, our employer questionnaire requested employment and wage data for mid-June pay periods in 2016, 2017, and 2018 from American Samoa’s tuna canning industry—in this case, the territory’s one remaining cannery, StarKist Samoa. In our 2016 report, we asked for employment in the mid-January 2016 pay period. We used the 2016, 2017, and 2018 data to update and extend the time series of employment and earnings data received from our prior employer questionnaires provided to American Samoa’s tuna canning industry. We found the data collected through the employer questionnaire for prior reports and this report sufficiently reliable for the purposes of our reporting on changes in American Samoa employment and earnings from 2007 to 2018. Data based on employers’ questionnaire responses include the reported numbers of hourly workers as well as their annual earnings at the canneries as of June in the given year. The questionnaire asked separately for data regarding workers paid an hourly wage and workers paid an annual salary. For hourly wage workers, respondents were asked to provide the number of workers paid at each wage rate. For salaried workers, respondents were asked the number of full-time and part-time workers paid at each salary level. In compiling the questionnaire-based earnings data for a given year, we assumed that all hourly cannery workers earned the minimum wage for that year and worked all year. When the minimum wage changed midyear, we assumed that the original wage applied for the first half of the year and the revised wage for the second half of the year. To adjust earnings for inflation, we relied on the American Samoa CPI. 
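Under these assumptions, an hourly cannery worker’s annual earnings reduce to simple arithmetic. The sketch below assumes a 2,080-hour work year (the full-time, all-year assumption stated elsewhere in this appendix); the 2015 rates used in the example are the cannery minimum wages noted in appendix V.

```python
FULL_YEAR_HOURS = 2080  # full-time, all year: 40 hours/week x 52 weeks (assumption)

def annual_minimum_wage_earnings(first_half_wage, second_half_wage=None):
    """Annual earnings under the report's assumption: hourly cannery
    workers earn the minimum wage all year; if the wage changed midyear,
    the original rate applies to the first half of the year and the
    revised rate to the second half."""
    if second_half_wage is None:
        second_half_wage = first_half_wage
    return (FULL_YEAR_HOURS / 2) * (first_half_wage + second_half_wage)

# 2015: the cannery minimum wage rose from $4.76 to $5.16 during the year.
print(annual_minimum_wage_earnings(4.76, 5.16))  # ≈ 10316.80
# A year with a flat $5.56 rate throughout.
print(annual_minimum_wage_earnings(5.56))        # ≈ 11564.80
```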
Using employer questionnaire data, we determined the number of workers that would be affected by future minimum wage increases because their wages were at or below future scheduled minimum wage levels. We estimated the cost of future scheduled minimum wage increases by calculating the cost to the cannery of increasing each worker’s wages to scheduled levels. This estimate assumed that workers worked full-time and all year (i.e., 2,080 hours) and that the minimum-wage increase would not affect the wages of workers currently earning more than the minimum wage. In addition, we interviewed cannery representatives and industry experts to obtain their views on competitive challenges facing the industry, including changes in minimum wage rates, access to fishing grounds, and preferential trade status. To illustrate other potential tuna production scenarios, we developed a model where tuna production relocates from the current status quo in American Samoa to one of two alternative scenarios of loining or canning tuna, or both, in other locations. Changes in labor and tariff costs are compared to the status quo scenario in American Samoa. The model uses assumptions based on the tuna canning industry employment count from the employer questionnaire responses and information obtained during interviews with tuna cannery employers. (See app. VI for the results of our analysis and more details about the assumptions we used.) This model is an update of the model we used for our December 2016 report. Stakeholder Interviews To examine stakeholder views on the minimum wage increases, we conducted interviews with officials from the American Samoa government and American Samoa Chamber of Commerce, and employers and workers from the public and private sectors. 
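The cost calculation described in this appendix can be sketched as follows. The wage distribution and the $5.96 scheduled rate used here are hypothetical placeholders for illustration, not the actual questionnaire data or a confirmed future rate.

```python
FULL_TIME_HOURS = 2080  # full-time, all year, per the report's assumption

def cost_of_scheduled_increase(workers_by_wage, scheduled_minimum):
    """Estimated annual cost to the employer of raising every worker now
    paid at or below the scheduled minimum up to that minimum. Workers
    already earning more are assumed to be unaffected."""
    return sum(
        count * (scheduled_minimum - wage) * FULL_TIME_HOURS
        for wage, count in workers_by_wage.items()
        if wage <= scheduled_minimum
    )

# Hypothetical distribution: hourly wage rate -> number of workers.
workers_by_wage = {5.56: 1500, 5.75: 300, 6.50: 200}
print(round(cost_of_scheduled_increase(workers_by_wage, 5.96)))  # ≈ 1,379,000 under these assumed counts
```

Workers at the $6.50 rate are excluded because their wage already exceeds the hypothetical scheduled minimum.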
During our fieldwork trip to American Samoa in October 2019, we conducted interviews with government officials, employers, other private sector representatives, and workers to obtain views and information on the minimum wage increases. In total, we conducted 15 interviews: five employer interviews (the American Samoa government and three of its component units, and StarKist Samoa), two employer group interviews (private employers that are American Samoa Chamber members and ethnic business employers), and eight worker group interviews. For the primary American Samoa government and StarKist Samoa, we conducted two worker group interviews for each. In the group interviews, we followed a standard protocol that asked for participants’ views on the impact of the minimum wage increases. We interviewed a nongeneralizable sample of employers and workers selected on the basis of key industry information from prior GAO reports and employment data from the American Samoa government. Specifically, we selected the following employers and their workers: (1) the American Samoa government, (2) StarKist Samoa, (3) American Samoa Medical Center, (4) American Samoa Community College, and (5) American Samoa Power Authority. To supplement these employers and workers, we requested that the American Samoa Chamber of Commerce identify additional employers and their workers on the basis of criteria related to the tuna canning, construction, and retail industries, among other things. The American Samoa Chamber of Commerce arranged a group of 15 employers and their workers belonging to its membership and related to the tuna canning, construction, and retail industries, as well as a group of eight employers related to the territory’s ethnic (including Filipino, Chinese, Korean) business community. Overall, the number of participants in each group interview ranged from four to 20, for a total of over 100 participants. 
This range in the number of participants applied across all of the group interviews, regardless of their composition. In addition, we reviewed data and interviewed officials from the U.S. Departments of the Interior, Commerce, and Labor. We also reviewed U.S. minimum wage laws and other relevant laws and regulations. We did not review the extent to which laws were properly enforced or implemented. The scope of our study also does not include workers in the underground economy, which would include employers that may not comply with laws, including tax, minimum wage, immigration, and other laws. We did not review compliance with laws as part of this study. We conducted this performance audit from June 2019 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected Federal Laws Related to Minimum Wages in American Samoa, 1938–2018 Table 2 summarizes key federal laws regarding minimum wages in American Samoa. Appendix III: American Samoa Minimum Wage Poster Figure 14 shows a U.S. Department of Labor poster outlining federal minimum wage requirements for American Samoa employers subject to the Fair Labor Standards Act (FLSA). According to the department, all such employers are required to post this information in conspicuous places in every establishment where employees subject to the FLSA’s minimum wage provisions are employed to permit them to readily observe it. 
Appendix IV: Alternative Estimates of American Samoa Employment and Earnings Employment Table 3 compares American Samoa employment data from 2007 to 2018 obtained for this review, data reported in GAO-17-83, and data from the American Samoa Statistical Yearbook 2017, an annual report produced by the American Samoa Department of Commerce. Data obtained for this review were composed of American Samoa tax records, including individual-level data, and responses from a questionnaire submitted to StarKist. Data reported in GAO-17-83 relied on aggregate data and did not include individual-level tax records. The data in the tables derived from these different sources are broadly consistent, but there are differences in certain years. The largest gaps between the alternate sources are in 2009 and 2012. According to the American Samoa Department of Commerce, some temporary government workers are not reflected in the data reported in GAO-17-83. We also compared American Samoa private sector employment data that we obtained and analyzed to County Business Patterns private sector employment data, collected by the U.S. Census Bureau. Private sector employment data that we analyzed indicated 2,000 to 3,000 more workers employed than County Business Patterns private sector employment data, depending on the year. According to the U.S. Census Bureau, this may largely be because the County Business Patterns data capture employment during the week of March 12, while the tax data include employment throughout the year. In addition, given that the American Samoa manufacturing sector is largely composed of the tuna canning industry, we also compared cannery data that we obtained for this review to County Business Patterns manufacturing data, which County Business Patterns reports in selected years. As table 4 shows, cannery employment data are in a similar range to County Business Patterns manufacturing data. 
Earnings Table 5 compares American Samoa workers’ earnings data from 2007 to 2018 obtained for this review and data reported in GAO-17-83. As shown, the data are broadly consistent. We also compared American Samoa workers’ earnings data from 2007 to 2018 obtained for this review to County Business Patterns data. In general, average earnings estimates in the County Business Patterns data are somewhat higher than in the American Samoa tax data, as shown in table 6. However, both series show growth in earnings over the period of 2008 to 2017 of approximately 20 to 30 percent. Additional Data Reliability Analysis of Individual-Level Earnings As an additional test of the reliability of the individual-level tax data, we examined trends in the distribution of worker-level earnings. A prior GAO report found that the minimum wage increases narrowed the gap between lower- and higher-paid workers in American Samoa from 2007 to 2009. We first examined whether the tax data also show that the gap narrowed during this period and then examined trends through 2018. One limitation of this analysis was that it was restricted to the private sector excluding the canneries. Therefore, any patterns that we documented in this sector may not reflect changes to the American Samoa workforce as a whole. We began our analysis in 2006 to provide information before the Fair Minimum Wage Act of 2007. To measure a worker’s annual earnings, we summed all of the worker’s gross wages from his or her tax records in a given tax year. According to the tax data, in 2006, workers at the 50th percentile of annual earnings earned $6,031. This amount is only 8 percent higher than what full-time workers would have earned if they were continuously employed at the lowest minimum wage that was in effect in American Samoa in 2006 ($2.68 per hour). In comparison, in the same year workers at the 90th percentile earned $18,747, or 3.1 times the earnings at the 50th percentile. 
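The 2006 benchmarks quoted above can be reproduced arithmetically. The sketch below takes the percentile figures as given in the tax data and assumes a 2,080-hour full-time work year, consistent with the rest of this appendix.

```python
# Reproducing the 2006 benchmarks from the tax data quoted above.
FULL_YEAR_HOURS = 2080             # assumed full-time work year
lowest_minimum_wage_2006 = 2.68    # lowest rate in effect in American Samoa in 2006
median_earnings_2006 = 6031        # 50th percentile of annual earnings, per the tax data
p90_earnings_2006 = 18747          # 90th percentile of annual earnings, per the tax data

full_year_at_minimum = lowest_minimum_wage_2006 * FULL_YEAR_HOURS
print(round(full_year_at_minimum, 2))                             # 5574.4
print(round(median_earnings_2006 / full_year_at_minimum - 1, 2))  # 0.08 -> "8 percent higher"
print(round(p90_earnings_2006 / median_earnings_2006, 1))         # 3.1 -> the 90th/50th ratio
```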
We found that earnings at the 50th percentile experienced a larger increase than earnings at the 90th percentile from 2006 through 2009, and this ratio fell to 2.7. Figure 15 depicts trends in the gap between lower- and higher-paid workers in American Samoa from 2006 through 2018, as measured using the tax data by the ratio between the 90th and 50th percentiles of earnings. Overall, from 2006 through 2018, this gap fell by 17 percent, from 3.1 to 2.6. The decline is attributable to a 48 percent increase in earnings for workers at the 50th percentile, compared to only a 23 percent increase for workers at the 90th percentile. Appendix V: Key Selected Events, 2007– 2019 The following events highlight changes in American Samoa’s minimum wages and the status of the tuna canning industry from 2007 to 2019: 2007. Fair Minimum Wage Act includes a provision to incrementally increase American Samoa minimum wages to the federal level ($7.25 per hour). Special industry committees that previously set minimum wages in American Samoa are disbanded. Minimum wages in American Samoa rise by $0.50 as federally mandated. Minimum wage for fish canning and processing workers becomes $3.76. May 2008. Minimum wages in American Samoa rise by $0.50 as federally mandated. Minimum wage for fish canning and processing workers becomes $4.26. May 2009. Minimum wages in American Samoa rise by $0.50 as federally mandated. Minimum wage for fish canning and processing workers becomes $4.76. September 2009. Chicken of the Sea closes its cannery in American Samoa. The company relocates its canning facilities to the U.S. state of Georgia while outsourcing labor-intensive processes to countries with lower labor costs. The Samoa earthquake and tsunami cause severe damage and leave 34 people dead in American Samoa. The federal government issues a disaster declaration and assists with tsunami recovery efforts. October 2010. 
Tri Marine International acquires former Chicken of the Sea facility in American Samoa, located adjacent to the StarKist Samoa cannery. American Samoa government grants Tri Marine International exemption from local taxes for 10 years. December 2012. American Samoa government grants StarKist Samoa exemption from local taxes for 10 years. January 2015. Tri Marine International opens $70 million Samoa Tuna Processors cannery after large capital investments in prior years to renovate and expand the former Chicken of the Sea cannery. September 2015. Minimum wages in American Samoa rise by $0.40 as federally mandated. Minimum wage for fish canning and processing workers becomes $5.16. December 2016. Tri Marine International indefinitely suspends operations at its Samoa Tuna Processors cannery in American Samoa. September 2017. StarKist Co. agrees to pay a $6.3 million penalty resulting from violations of federal environmental laws. October 2017. StarKist Samoa temporarily halts operations for 5 weeks because of fish supply setbacks and equipment upgrades. February 2018. According to American Samoa government estimates, Tropical Storm Gita causes nearly $200 million in damages to public and private property. The federal government issues a disaster declaration and assists with disaster recovery efforts. May 2018. According to a report by the Pacific Islands Forum Fisheries Agency, StarKist Co. signs 10-year lease agreement with Tri Marine International to sub-lease its Samoa Tuna Processors facility for use in StarKist Samoa operations. September 2018. Minimum wages in American Samoa rise by $0.40 as federally mandated. Minimum wage for fish canning and processing workers becomes $5.56. September 2019. StarKist Co. is sentenced to pay a criminal fine of $100 million for its role in price fixing. 
Appendix VI: Comparison of Labor and Tariff Costs for Three Potential Tuna Canning Business Models Although American Samoa’s tuna canning industry faces multiple challenges in addition to scheduled minimum wage increases, American Samoa offers the tuna canning industry certain competitive advantages relative to the U.S. mainland and other countries. To illustrate tuna canning costs for other business models, we compared the labor and tariff costs associated with three potential business models for the cannery operations currently used by firms in the global tuna industry. The following analysis provides cost estimates for the three possible models, assuming constant total production under each model. Our analysis considers only labor costs and tariffs to show the effect of variation across different models. Our analysis excludes other associated costs, including transportation and refrigeration, as well as costs associated with establishing multiple production locations. Therefore, we assume that shipping costs and other conversion costs, such as for electricity usage, are identical. We also assume that fixed costs for starting operations in a new location (i.e., search costs) are equal to zero. We assume the alternative country is Thailand, on the basis of prior related reports and interviews with relevant officials and stakeholders. All of the tariff and tax assumptions used in our analysis are based on input from tuna canning industry officials. Model A (maintaining all loining and canning in American Samoa): This is the current production process for the remaining cannery operating in American Samoa. Tuna processing currently performed in American Samoa remains entirely in American Samoa. The cannery located in American Samoa hires local and foreign workers to loin—clean, cook, and cut—and can the fish. With an estimated workforce of 2,000 employees in American Samoa, the associated labor cost was an estimated $23 million in 2019. 
The canned tuna from American Samoa is exported directly to the United States and, according to cannery officials who utilize this model, such canned tuna is eligible for tariff-free access to the U.S. market. The cannery, which is a qualified domestic corporation, according to cannery officials, receives an estimated $5 million as a federal tax credit. Model B (relocating loining to Thailand or another country with low labor costs and canning processed loins in the U.S. states): This is the current production process for a firm operating a cannery outside of American Samoa. The loining operation—the most labor-intensive part of the operation—would move to a country with low labor costs, such as Thailand, where the fish would be loined, sealed in pouches, and frozen. The loined, frozen fish would then be exported to the U.S. mainland, where it would be canned. With an estimated workforce of 1,700 employees in a country with low labor costs making $1.25 per hour, the associated labor cost would be $4.4 million; and with an estimated workforce of 300 employees in the U.S. mainland at $15 per hour, the associated labor cost would be $9.4 million. Therefore, the total associated labor cost in 2019 for this model would be $14 million. No workers would remain in American Samoa, and 300 workers would be employed on the U.S. mainland. The imported fish would carry an average tariff of $11 per metric ton. This model assumes that the firm operating outside of American Samoa would not qualify for the American Samoa economic development credit. Model C (relocating all loining and canning to Thailand or another country with low labor costs): This is an alternative production process for operating canneries outside of American Samoa. Tuna processing currently performed in American Samoa would relocate to a foreign country with low labor costs. All operations, including loining and canning the fish, would take place in this foreign country. 
With an estimated workforce of 2,000 employees in a country with low labor costs making $1.25 per hour, the associated labor cost in 2019 would be $5 million. No workers would remain in American Samoa and no workers would be employed in the U.S. mainland. The imported fish would carry an average tariff of 12 percent of export revenue. This model assumes that a firm operating outside of American Samoa would not qualify for the American Samoa economic development credit. Figure 16 shows that, considering labor and tariff costs along with tax credits, Model A has higher costs than Model B. Model B presents cost savings; however, importing processed loins to the United States would incur tariffs, and wages for canning in any of the 50 U.S. states would be higher than in competing tuna processing countries. Model C presents the highest combined labor and tariff costs and would result in an estimated 2,000 fewer jobs in American Samoa. Table 7 below shows how revenue and labor and trade costs are computed for each model. Appendix VII: Comments from the Department of the Interior The following are GAO’s comments to the Department of the Interior’s letter. GAO Comments With respect to paragraph 4 of the U.S. Department of the Interior’s letter above, the suggested further study was beyond the scope of this report, which focused on (1) economic trends including changes in employment and earnings since the minimum wage increases in American Samoa began in 2007, (2) the status of the tuna canning industry, and (3) stakeholder views on the minimum wage increases. 
Appendix VIII: Comments from the American Samoa Government Appendix IX: GAO Contact and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, Emil Friberg (Assistant Director), Benjamin Bolitzer (Assistant Director), Justine Lazaro (Analyst in Charge), Samuel Huang, James Boohaker, Carl Nadler, Debbie Chung, Christopher Keblitis, Sara Daleski, Martin De Alteriis, and Alex Welsh made key contributions to this report. Related GAO Reports American Samoa: Alternatives for Raising Minimum Wages to Keep Pace with the Cost of Living and Reach the Federal Level. GAO-17-83. Washington, D.C.: December 2, 2016. American Samoa and the Commonwealth of the Northern Mariana Islands: Economic Indicators Since Minimum Wage Increases Began. GAO-14-381. Washington, D.C.: March 31, 2014. American Samoa and Commonwealth of the Northern Mariana Islands: Employment, Earnings, and Status of Key Industries Since Minimum Wage Increases Began. GAO-11-956T. Washington, D.C.: September 23, 2011. American Samoa and Commonwealth of the Northern Mariana Islands: Employment, Earnings, and Status of Key Industries Since Minimum Wage Increases Began. GAO-11-427. Washington, D.C.: June 23, 2011. American Samoa and Commonwealth of the Northern Mariana Islands: Wages, Employment, Employer Actions, Earnings, and Worker Views Since Minimum Wage Increases Began. GAO-10-333. Washington, D.C.: April 8, 2010.
Why GAO Did This Study In 2007, Congress passed legislation that established a schedule of periodic increases that would have raised all minimum wages in American Samoa to the current federal level ($7.25 per hour) by 2016. However, subsequent legislation has postponed or reduced scheduled minimum wage increases. The most recent minimum wage increase in American Samoa occurred on September 30, 2018, but all minimum wages in American Samoa are not scheduled to converge with the current federal level until 2036. Pub. L. No. 111-5, enacted in February 2009, included a provision for GAO to report periodically on the economic impact of minimum wage increases in American Samoa. This report examines (1) economic trends including changes in employment and earnings since the minimum wage increases in American Samoa began in 2007, (2) the status of the tuna canning industry, and (3) stakeholder views on the minimum wage increases. GAO analyzed federal and American Samoa data for 2016 through 2018, and interviewed employers and workers in American Samoa selected on the basis of employment levels, among other criteria. Commenting on a draft of this report, the American Samoa government suggested creating a committee to set minimum wages in the territory and a moratorium on minimum wage increases until the committee is formed. The Department of the Interior suggested GAO conduct further study, including on the use of a committee to set minimum wages. The suggested further study was beyond the scope of this report. What GAO Found American Samoa's economy largely contracted during the past decade. Adjusted for inflation, gross domestic product declined by 18.2 percent from 2007 to 2017, and increased by 2.2 percent in 2018 (see fig.). While American Samoa employment varied by year from 2007 to 2018, workers' inflation-adjusted earnings generally declined. American Samoa's economy continues to depend on the territorial government and tuna canning industry as key sectors. 
Changes in government spending and the tuna canning industry, including cannery closures, have impacted American Samoa's economy. To reduce the territory's dependence on the government and the tuna canning industry, the American Samoa government continues its efforts to diversify the economy. American Samoa's tuna canning industry faces multiple challenges, including increased competition and minimum wage increases, which led to cannery closures from 2007 to 2018. The companies that experienced the closures explained that minimum wage increases were a factor in the closures, but not a main factor. With the closures, employment of cannery workers decreased but inflation-adjusted earnings of cannery workers who maintained their jobs increased. StarKist Co. now operates the single remaining cannery in American Samoa, StarKist Samoa, but faces financial challenges. In addition to increased competition and labor market challenges, the industry faces other challenges, such as lower wages relative to those in American Samoa for cannery workers in other countries. However, American Samoa offers the tuna canning industry advantages relative to the U.S. mainland and other countries, including lower wages compared to those in the U.S. mainland as well as duty-free access to the U.S. canned tuna market, according to StarKist Samoa officials. The American Samoa government and the American Samoa Chamber of Commerce (the Chamber) view the minimum wage increases as conflicting with sustainable economic development, but employers and workers GAO interviewed noted benefits and challenges presented by minimum wage increases. The government supports setting a minimum wage that the economy can support, while the Chamber supports delaying minimum wage increases for the cannery. Employers and workers GAO interviewed noted a potential positive impact on the livelihood of workers but a potential negative impact on the remaining cannery, among other things.
Background In 1994, Executive Order 12898 directed each federal agency to develop an environmental justice strategy that identifies and addresses disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations. Together, the 1994 executive order and the 2011 MOU include eight areas that agencies’ environmental justice efforts should address, as appropriate, such as NEPA implementation and public participation. Working group members have documented their environmental justice strategies using environmental justice strategic plans. We have previously reported on the importance of certain leading practices in developing or updating strategic plans and developing periodic progress reports, including in our October 2011 review of EPA’s environmental justice efforts. We reported that a multi-year strategic plan articulates the fundamental mission of an organization and lays out its long-term general goals for implementing that mission, including resources needed to achieve the goals. To that end, during strategic planning, which should occur at least every 4 years, an agency should review its mission statement, review its strategic goals, align strategic goals and strategies, and align strategic and annual performance goals. In addition, a strategic plan should contain a description of how the goals will be achieved, including human capital, information, and other resources needed. Finally, agencies should develop annual performance plans with annual performance goals—linked to the overall strategic goals—and describe how the goals will be measured to assess progress in achieving them. As one method for assessing such progress, we identified key attributes of successful performance measures, such as having measurable targets. 
Interagency Collaboration on Environmental Justice

The 1994 executive order also created an interagency working group to coordinate federal environmental justice efforts by serving the following seven functions:

- Provide guidance to federal agencies on criteria for identifying disproportionately high and adverse human health or environmental effects on minority populations and low-income populations.
- Coordinate with, provide guidance to, and serve as a clearinghouse for each federal agency as it develops an environmental justice strategy, in order to ensure consistent administration, interpretation, and enforcement of programs, activities, and policies.
- Assist in coordinating research by, and stimulating cooperation among, EPA; the Department of Health and Human Services (HHS); Department of Housing and Urban Development (HUD); and other agencies conducting certain research, data collection, or analysis.
- Assist in coordinating data collection.
- Examine existing data and studies on environmental justice.
- Hold public meetings.
- Develop interagency model projects on environmental justice that demonstrate cooperation among federal agencies.

After a period of relative inactivity, 16 agencies and CEQ recommitted to collaborating on environmental justice efforts through a revitalized interagency working group when they signed the 2011 MOU. We have previously found that federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, including working groups, and that interagency collaboration mechanisms benefit from key features, which raise issues to consider when implementing such mechanisms.
These features include defining and articulating a common outcome; reinforcing agency accountability for collaborative efforts through agency plans and reports; developing mechanisms to monitor, evaluate, and report on results; agreeing on or clarifying roles and responsibilities; including all relevant participants and determining their ability to commit resources; identifying and addressing resource needs; and documenting written guidance and agreements.

Federal Framework for Addressing Environmental Justice

The 1994 executive order did not create new authorities or programs to carry out federal environmental justice efforts. As a result, federal environmental justice efforts seek to use existing federal laws, programs, and funding to address environmental and health problems that disproportionately burden minority and low-income communities, such as exposure to environmental pollutants.

Example of Capacity Building Funded by an EPA Environmental Justice Grant in Spartanburg, South Carolina

EPA provided a $20,000 environmental justice grant to a community organization in Spartanburg, South Carolina, in 2000 to support three research projects on the health of residents and former employees at a fertilizer plant and landfill sites. The target area, on the south side of Spartanburg, had a 96 percent African-American population according to EPA’s 2002 IWG Status Report. EPA’s initial $20,000 grant paid for research to help confirm health issues related to nearby hazardous waste sites. According to EPA officials, this initial investment has helped Spartanburg secure investments in the community. As a result, Spartanburg now has community health centers, affordable housing, and a recreation center.

Several environmental laws regulate pollutants in the air, water, or soil and generally require a regulated facility to obtain permits from EPA or a state.
These laws also authorize the issuance of administrative orders, among other things, to require cleanup of contamination. For example:

- Under the Clean Air Act, EPA, along with state and local government units and other entities, regulates air emissions of various substances that harm human health.
- The Clean Water Act regulates discharges of pollutants into waters of the United States, including lakes, streams, and other water bodies.
- The Resource Conservation and Recovery Act prohibits the treatment, storage, and disposal of hazardous waste without a permit.
- The Comprehensive Environmental Response, Compensation, and Liability Act authorizes EPA to compel the responsible parties to clean up contaminated sites and also allows EPA to conduct cleanups and then seek reimbursement from the responsible parties.

Federal enforcement actions include administrative orders issued by EPA and civil or criminal judicial actions brought by the Department of Justice (DOJ). Under NEPA, federal agencies must evaluate the environmental impacts of their proposed major federal actions using an environmental assessment or a more detailed environmental impact statement, with some exceptions. CEQ is responsible for overseeing federal agencies’ implementation of NEPA. In 1997, the council issued guidance stating that agencies should consider environmental justice issues at several stages of the NEPA process, as appropriate. This guidance provides principles for considering whether particular agency actions raise environmental justice issues, such as looking at the demographic composition of the affected area and seeking public participation. HHS has programs and initiatives that address environmental health issues.
Such efforts include the Centers for Disease Control and Prevention’s National Environmental Public Health Tracking Network—a data initiative which brings together health and environmental data from national, state, and city sources—and the Centers for Disease Control and Prevention’s National Report on Human Exposure to Environmental Chemicals—a series of reports that uses biomonitoring to assess the U.S. population’s exposure to environmental chemicals. Title VI of the Civil Rights Act of 1964, as amended, prohibits discrimination based on race, color, or national origin in programs or activities that receive federal financial assistance. To carry out and enforce the provisions of the act, federal agencies have developed programs to receive and investigate allegations of discriminatory actions taken by recipients of federal funding. In addition to these laws and programs, EPA also established a National Environmental Justice Advisory Council (NEJAC) in 1993 to provide advice and recommendations to EPA’s Administrator about issues related to environmental justice. NEJAC provides a forum for diverse perspectives, with representatives from various sectors, including academia, community groups, industry and business, non-governmental and environmental organizations, state and local governments, and tribal governments and indigenous groups. In recent years, NEJAC has issued reports on key environmental justice issues, including one on industrial waterfront areas (ports) and another on water and wastewater infrastructure.

Most Agencies Have Developed Environmental Justice Strategic Plans but Have Not Shown Clear Progress toward Environmental Justice Goals

Most of the agencies that signed the 2011 MOU have developed environmental justice strategic plans that contain strategic goals, but most have not shown clear progress toward these goals.
Specifically, 14 of the 16 agencies have developed environmental justice strategic plans, and 12 also established strategic goals in these plans, but several agencies have not updated their plans in recent years. In addition, most agencies have not issued annual progress reports or established methods to assess progress.

Most Agencies Have Developed Environmental Justice Strategic Plans and Established Goals, but Several Agencies Have Not Updated These Plans Recently

Most of the 16 agencies have developed environmental justice strategic plans, and most of these plans included strategic goals to help direct the agencies’ environmental justice efforts. As shown in table 1, 14 of the 16 agencies issued environmental justice strategic plans after 2011, when they agreed to develop or update such plans under the 2011 MOU. Of the 14 agencies that developed environmental justice strategic plans, 12 also established strategic goals in these plans, as shown in table 1. Many of the agencies had multiple goals with common themes. For example, eight agencies included goals that involved providing assistance, such as grants, technical assistance, or direct services, to environmental justice communities. Eight agencies also included goals that involved promoting public participation; seven agencies included goals that involved identifying and addressing environmental justice issues; four agencies included goals related to training or educating agency staff on environmental justice; four agencies included goals related to promoting enforcement of Title VI; three agencies included goals related to conducting research on environmental justice issues; and three agencies included goals related to incorporating environmental justice considerations into policies or guidance.
Two agencies—the Department of Defense (DOD) and Small Business Administration (SBA)—did not issue environmental justice strategic plans after 2011 even though by signing the MOU they agreed, as appropriate, to develop or update their environmental justice strategies by early 2012. DOD issued such a plan in 1995, shortly after the executive order was signed, but has not updated its plan since. We have previously reported that strategic planning serves as the starting point and foundation for defining what the agency seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in achieving goals and objectives. DOD officials said that the agency has not prioritized environmental justice efforts. By updating its environmental justice strategic plan, DOD would have a foundation for such efforts. SBA has never issued an environmental justice strategic plan. SBA officials said that the agency is uncertain whether it has a role in implementing environmental justice and that they were in the process of reviewing whether SBA should continue its membership in the working group. By assessing whether to participate in the 2011 MOU, SBA could clarify its role. Of the 14 agencies that developed environmental justice strategic plans after 2011, six agencies have updated those plans and one has updated its priority areas on its website. The 2011 MOU directs agencies to update their strategic plans periodically, and GAO’s leading practices for strategic planning suggest that strategic plans should be updated every 4 years. Five of the six agencies—the U.S. Department of Agriculture (USDA), Department of the Interior (DOI), DOT, EPA, and General Services Administration (GSA)—issued updated strategic plans in 2016 in response to a request from the working group that all agencies update their strategic plans. The sixth agency, the Department of Energy (DOE), issued an updated strategic plan in 2017.
HHS posted a list of “priority areas of focus” for environmental justice for 2015 through 2016 on its website. Agency officials noted that this was less resource-intensive than conducting a full review and update of the strategic plan. The remaining seven agencies—Commerce, Education, DHS, HUD, DOJ, Department of Labor (DOL), and Department of Veterans Affairs (VA)—have not updated their plans since issuing them after 2011. Six of these agencies issued their environmental justice strategic plans in 2012, and one of these agencies, DOJ, issued its revised strategic plan and a companion guidance document in 2014. As a result, as of 2019, these plans are more than 4 years old and may not reflect the agencies’ current approach. Some of these agencies have taken preliminary steps to update their plans, but with the exception of DHS, they do not have a time frame for developing an update, according to agency officials. DHS officials stated that the agency was developing an updated environmental justice strategic plan, which is scheduled for formal internal review during calendar year 2019 and for release in 2020. DOJ officials stated that they plan to meet in 2019 to review and discuss possible updates to their strategic plan, but the agency does not intend to update it unless significant changes have taken place since they reissued it in 2014. According to HUD officials, HUD prepared a draft of an updated environmental justice strategic plan for 2016 through 2020 and posted it online for public comment in November 2016, but the agency has not worked on the draft plan since then. According to agency officials, the draft plan has not been finalized because of staff losses and because HUD leadership prioritized other issues, such as long-term disaster recovery, over environmental justice issues. Officials from Commerce stated that the agency has not updated its environmental justice strategic plan because of the time and resources that this would require.
Officials from Education, DOJ, DOL, and VA said that they do not believe it is necessary to update their agency plans because they are continuing to implement their existing plans or because their approach to environmental justice work has not changed since their plans were issued. However, even if their approach has not changed, updating their plans, which are no longer current, would allow the agencies to confirm that no significant changes were needed. By updating their strategic plans or by reaffirming the validity of their current plans, these agencies (Commerce, Education, DHS, HUD, DOJ, DOL, VA) would have a current plan to guide their environmental justice activities, as they committed to do in the 2011 MOU.

Most Agencies Have Not Shown Clear Progress toward Environmental Justice Goals

While 12 agencies have developed an environmental justice strategic plan with strategic goals, most of them have not shown clear progress toward achieving their environmental justice goals and the purpose of the executive order. Specifically, the agencies have not comprehensively assessed how environmental justice fits with their overall missions or their progress toward the implementation of their strategic goals by issuing annual progress reports or by establishing methods to gauge their progress, such as performance measures. Furthermore, officials from most agencies said that they are unable to determine how much progress they have made toward achieving the major requirement from the executive order because they do not have a way to assess progress.

Seven Agencies Assessed Environmental Justice within Their Agency Missions, and Seven Agencies Did Not Clearly Do So

Of the 14 agencies that developed environmental justice strategic plans after 2011, we found that seven of the agencies—Commerce, DHS, DOE, DOL, EPA, GSA, and HUD—assessed and discussed how their environmental justice efforts aligned with their overall missions.
For example, HUD’s environmental justice strategic plan contains a section that describes HUD’s mission to create strong, sustainable, inclusive communities and quality, affordable homes for all. The section then discusses its overall strategic goals and their relationship to environmental justice. For example, HUD’s goal to build inclusive and sustainable communities free from discrimination includes a subgoal to promote energy-efficient buildings and location-efficient communities that are healthy, affordable, and diverse. Similarly, Commerce includes a section in its environmental justice strategic plan entitled “Relationship of Environmental Justice to Agency Mission and Agency Strategic Plan Goals or Objectives.” Among the agency-wide goals that support environmental justice, Commerce describes the National Oceanic and Atmospheric Administration’s (NOAA) efforts to manage fisheries, coastal habitats and species, and protected areas, and to provide information and warnings about weather conditions to the nation, including vulnerable populations. In our review of the 14 agencies’ environmental justice strategic plans, we found that seven of these plans did not clearly show how the agencies assessed alignment between the agencies’ environmental justice plans and overall mission, although the 1994 executive order directed each agency to make achieving environmental justice part of its mission by identifying and addressing, as appropriate, disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations. In addition, EPA officials questioned how some environmental justice strategic plans from agencies related to their agency’s core missions and stated that to be effective, environmental justice should be considered throughout agencies’ missions. 
Our previous work found that effective strategic plans include, among other things, agency missions and long-term goals, and that to encourage the use of performance information, agency-wide goals and measures should align. Specifically, we have previously found that an agency’s program goals should flow from its mission statement and that its strategic goals—those that explain what results are expected and when they should be achieved—should also grow out of the mission statement. Although half of the agencies’ environmental justice strategic plans did not clearly show that their agencies assessed their connection to their overall mission, officials from DOI, DOJ, USDA, and VA said that they considered their agencies’ overall strategic plan’s mission and goals when they developed their environmental justice strategic plans. HHS officials commented that although HHS’s overall strategic plan is at a very high level, some elements within its environmental justice strategic plan, such as research, align with its overall strategic plan. The remaining agencies did not explain whether they had considered their agencies’ overall mission and goals when developing their environmental justice strategic plans. The 1994 executive order requires that each federal agency make achieving environmental justice part of its mission and requires the working group to provide guidance to agencies in developing their environmental justice strategies. However, the working group has not provided guidance to federal agencies on how to develop a strategic plan, including how to demonstrate they have considered their broader agency missions in developing their environmental justice strategic plans.
According to the working group’s charter, the working group creates committees to carry out its responsibilities under this executive order, and one of those committees—the Strategy and Implementation Progress Report Committee—is to be available as a resource to federal agencies as they develop and update their environmental justice strategies. However, according to officials from EPA, which chairs the working group, this committee has not provided guidance to agencies on what to include in their strategic plans because each agency determines the direction of their plans. By developing such guidance, the working group could assist agencies in planning more strategically about which parts of their mission are important for achieving environmental justice.

Fourteen Agencies Issued at Least One Progress Report after 2011, but Most Have Not Done So Annually

Of the 14 agencies that developed environmental justice strategic plans after 2011, all have issued at least one annual progress report on the implementation of these plans, but most have not issued such reports every year, as they agreed to do in the 2011 MOU (see table 2). As shown in table 2, two of the 16 agencies—DHS and DOJ—have issued progress reports every year. In addition, several agencies issued progress reports consistently during the first few years after signing the 2011 MOU but subsequently stopped issuing reports. For example, four agencies—DOE, HHS, DOI, and DOL—issued progress reports through 2016 but have not issued reports for 2017. Four additional agencies issued reports through either 2014 or 2015 but have not issued any reports since then. Only four agencies—DHS, DOJ, EPA, and GSA—have issued progress reports for 2017. The two agencies that did not develop environmental justice strategic plans after 2011—DOD and SBA—have not issued any progress reports.
According to the 2011 MOU, each agency should issue an annual report on the progress it has made over the previous year in implementing its environmental justice strategic plan. However, officials from most of the agencies said that they had not issued annual progress reports because of competing priorities. In addition, officials from some agencies, including USDA, DOE, and VA, cited the change in administration in January 2017 as a factor in delaying or not issuing their progress reports. Officials from DOE, HHS, and DOT said that they planned to issue overdue progress reports in the near future. The remaining agencies that have not issued a progress report since 2016 or earlier either did not have plans to issue progress reports or did not provide information on the status of their progress reports. However, we have previously found that annual program performance reports can provide essential information needed to assess federal agencies’ performance and hold agencies accountable for achieving results. Further, we have previously found that reporting is part of a broader performance management process that includes identifying mission and desired outcomes, measuring performance, and using this information to report on performance and to identify gaps in performance. By issuing progress reports each year, the agencies—Commerce, DOD, DOE, DOI, DOL, DOT, Education, HUD, HHS, USDA, and VA—would have greater assurance that they have the information needed to assess their performance and to demonstrate results.

Most Agencies Have Not Established Methods for Assessing Progress toward Their Environmental Justice Goals

The agencies’ progress reports generally describe the environmental justice activities that the agencies conducted but do not include any methods to assess progress.
In our review of the most recent progress reports issued by each of the 14 agencies, we found that these reports contain information on activities undertaken by the agency over the previous year. Some of the reports are organized by the goals that the agencies identified in their environmental justice strategic plans and include information on the agencies’ future plans for environmental justice efforts. However, most agencies have not established a method that would allow them to evaluate their progress toward their environmental justice goals, such as establishing performance measures. According to Office of Management and Budget (OMB) guidance, performance measures are a means of evaluating efficiency, effectiveness, and results. The guidance also describes different types of these measures, including outcome measures—which indicate an agency’s progress toward achieving the intended results of its efforts—and output measures—which are usually expressed quantitatively and describe the level of activities that will be provided over a period of time (e.g., the number of meetings held or the number of people trained). Agencies may assess their progress using milestones, which are scheduled events signifying the completion of a major deliverable or a phase of work (e.g., a date by which the agency will release a certain product), according to OMB guidance. While not performance measures, milestones can help agencies track the actions they have completed in implementing their environmental justice strategic plans. Of the 16 agencies that signed the 2011 MOU, four agencies—DOI, EPA, HHS, and USDA—have established performance measures or milestones for their environmental justice efforts. Of these four agencies, two agencies—HHS and EPA—have reported on their progress toward achieving the performance measures or milestones they established.
Examples of how the four agencies measured the progress of their environmental justice efforts include the following: DOI established performance measures in its 2012 environmental justice strategic plan and reported on progress using these measures in its 2013, 2014, and 2015 annual progress reports. DOI changed from performance measures to milestones in its 2016 strategic plan. For example, in the 2016 plan, DOI has target years for establishing public outreach strategies and creating a best practices report on public outreach activities for environmental justice communities. According to agency officials, DOI made this change because the performance measures from the 2012 plan were difficult and time-consuming to use, were not helpful in tracking progress, and did not result in actionable outcomes. DOI believed that an action plan would be easier to use for identifying actions to meet goals and for measuring progress. DOI has not yet reported on the milestones from its 2016 strategic plan. Its most recent progress report is from fiscal year 2016, the first year that the strategic plan covers. Agency officials stated that DOI plans to report on the milestones in its fiscal year 2017 progress report but did not provide a timeline for when this report would be issued. In its environmental justice strategic plan for 2016 through 2020, EPA established four goals for reducing environmental and health hazards: reducing children’s exposure to lead, reducing contamination of small and tribal drinking water systems, reducing fine particle air pollution, and reducing contamination at hazardous waste sites. EPA established performance measures for tracking progress toward each of these goals at the national level. For example, EPA’s goal is to achieve air quality that meets national standards for fine particle pollution in all areas of the country, with special emphasis on communities with poor air quality and low-income populations.
EPA collected data from air monitors to determine its progress toward achieving this goal. In its progress report for fiscal year 2017, EPA reported an increase in the share of low-income populations living in counties that attained the standards, from 43 percent in 2006 through 2008 to 92 percent in 2014 through 2016. According to agency officials, EPA plans to continue reporting on the goals in the future. EPA has also established several other performance measures and milestones for its environmental justice activities. For example, in its environmental justice strategic plan for 2016 to 2020, EPA provides the status for 28 environmental justice activities that it had included in its 2014 environmental justice strategic plan. HHS established many performance measures and milestones in its 2012 environmental justice strategic plan and reported on its progress toward these measures and milestones in its annual progress reports. In its most recent progress report, HHS reported that, as of January 2017, 30 of the 37 actions that it committed to undertake in the 2012 strategic plan had a status of “complete or substantial progress,” three had achieved “some progress,” and four could not be carried out and were deemed “inactive.” For example, HHS reported that it has conducted outreach events to educate local communities on the purpose and functions of the HHS Office for Civil Rights. In this report, HHS also stated that it will no longer be reporting on these measures and milestones going forward and that it would be developing a new plan of action to achieve its environmental justice goals. HHS has not yet developed such a plan and therefore does not have any current performance measures or milestones. USDA established several performance measures and milestones for its five strategic goals in its environmental justice strategic plan for 2016 through 2020.
For its first environmental justice strategic goal, USDA established performance measures involving increased funding for environmental justice-related programs. USDA established milestones for the rest of its goals. Its five strategic goals are:

- ensure USDA programs provide opportunities for environmental justice communities;
- increase capacity-building within environmental justice communities;
- expand public participation in program operations, planning activities, and decision-making processes to benefit environmental justice communities;
- ensure USDA’s activities do not have disproportionately high and adverse human health impacts on environmental justice communities and resolve environmental justice issues and complaints; and
- increase awareness, skills, and abilities of USDA employees regarding environmental justice issues.

However, the agency has not issued a progress report since its 2016 strategic plan and has not yet reported on these measures and milestones. Agency officials said that USDA has collected information on these measures and milestones, but has not issued progress reports with this information. In our interviews with agency officials, a few described plans for developing new performance measures. In particular, EPA has proposed to implement a measure that would involve identifying key decisions across the entire agency in which environmental justice was taken into account. According to EPA officials, a significant way to incorporate environmental justice into an agency’s mission, including its programs, policies, and activities, is to include environmental justice considerations in its various decision-making processes. For example, EPA has set a goal of including environmental justice issues in the analyses for regulatory or permitting decisions, such as Clean Air rules or permits; officials stated that they could count the number of such decisions that have included environmental justice issues in the underlying analyses for the decisions.
Under the new performance measure, every EPA office would be responsible for identifying a certain number of decisions it has made and explaining how these decisions were affected by environmental justice considerations. The measure would also allow EPA to share examples of how various offices are taking environmental justice into account, so that other offices could learn from these examples (e.g., integrating environmental justice into permitting decisions). EPA plans to pilot this new measure through September 2019. The remaining 12 agencies have not established any performance measures or milestones. In the absence of annual progress reports that evaluate progress using performance measures or milestones, we interviewed agency officials about the progress they had made toward the primary directive in Executive Order 12898—to identify and address disproportionately high and adverse human health or environmental effects of their programs, policies, and activities on minority or low-income populations. Officials from most of these agencies said that they are unable to determine how much progress they have made toward achieving this directive. Specifically, officials from six of the agencies (Commerce, DOD, Education, DOJ, DOL, and VA) stated that they do not have a method for gauging their progress, although several of these agencies stated that they are able to identify specific accomplishments they have made toward addressing environmental justice issues. A seventh agency, DOT, said that it has made significant progress, but faced challenges in developing quantitative performance measures. Officials from DHS and GSA said that they gauge their progress by tracking the completion of action items or goals from their environmental justice strategic plans, and DOE said that it periodically gauges its progress through conducting qualitative reviews of its environmental justice work. 
Finally, DOD and SBA reported no efforts to gauge progress toward implementing the executive order. Officials for most of the 12 agencies that have not developed performance measures for their environmental justice efforts said they have not done so because it would be difficult and they are unsure how to do so. For example, DOJ officials commented that it would be difficult to develop meaningful measures that are indicative of true progress toward achieving environmental justice. EPA officials commented that encouraging agencies to adopt performance measures for environmental justice would align with their agency’s efforts and would involve, among several things, providing guidance and training to the agencies. The 2011 MOU states that annual progress reports issued by the agencies should include performance measures as deemed appropriate by each agency. In our previous work, we have found that it is important for agencies to establish a method to assess their progress toward their goals; such methods should ideally include performance measures or milestones. We have also reported that performance measures are important for tracking progress in achieving goals and are a key element of effective strategic planning. Performance measures provide managers with information on which to base their decisions, including how effectively offices are integrating environmental justice in their decisions. Performance measures also create powerful incentives to influence organizational and individual behavior. Leading practices we have identified include clearly relating performance measures to the performance they will be used to evaluate and creating a set of performance goals and measures that addresses important and varied aspects of program performance. The executive order directs the working group to provide guidance to agencies in developing their environmental justice strategies. 
However, the working group has not provided guidance to its members on methods to assess and report on their environmental justice progress, such as through performance measures, according to officials from EPA, which chairs the working group. According to these officials, EPA is still pursuing its own agency-wide performance measures. By developing such guidance or creating a committee, the working group could assist agencies in tracking and measuring their progress in achieving their environmental justice goals. Most Agencies Reported Taking Various Actions to Identify and Address Environmental Justice Issues, and Most Reported Supporting These Actions with Existing Resources Most agencies that signed the 2011 MOU reported taking various actions to identify and address environmental justice issues related to their programs, policies, and activities; most also reported having limited resources for these efforts. Examples of actions they reported taking included improving research and data collection by creating data tools, considering environmental justice issues when implementing NEPA and enforcing environmental laws, and revising processes to ensure greater public participation. Most agencies used resources from existing related programs (e.g., civil rights or environmental programs) to support environmental justice efforts, although two agencies provided dedicated resources specifically to environmental justice efforts from fiscal years 2015 through 2018. Agencies Reported Creating Data Tools and Revising Processes for Public Participation and Environmental Review Most of the 16 agencies reported planning and implementing actions to identify and address environmental justice issues to carry out the 1994 executive order and 2011 MOU. 
The executive order contains four areas that agencies’ environmental justice strategies should include, as appropriate: Promote enforcement of all health and environmental statutes in areas with minority populations and low-income populations. Ensure greater public participation. Improve research and data collection relating to the health and environment of minority populations and low-income populations. Identify differential patterns of consumption of natural resources among minority populations and low-income populations (e.g., subsistence fishing or hunting). The 2011 MOU contains four additional areas that the 16 agencies agreed federal environmental justice efforts should include, as appropriate: Implement the National Environmental Policy Act (NEPA). Implement Title VI of the Civil Rights Act of 1964, as amended. Consider impacts from climate change. Consider impacts from commercial transportation and supporting infrastructure (goods movement). Each of the 14 agencies that produced an environmental justice strategic plan discussed in their most recent plan how they would identify and address environmental justice issues related to at least one of these eight areas. Although most agencies did not formally report on progress annually, all of the 14 agencies provided examples—in their strategic plans or progress reports, in other related documents or on their websites, or in interviews with us—of actions they implemented to identify and address environmental justice issues. In addition to the eight areas outlined in the 1994 executive order and 2011 MOU, agencies also provided examples of actions they took to provide internal training and conduct external capacity building. See appendix II for additional examples of agency actions to identify and address environmental justice issues. Improve research and data collection.
In their most recent environmental justice strategic plans, 11 agencies discussed planning to improve research and data collection on environmental justice issues. At least 11 agencies provided examples of research or data actions they implemented, including creating data tools. For example, in 2015, EPA publicly released its Environmental Justice Mapping and Screening Tool (EJSCREEN), a web-based mapping tool that includes environmental and demographic data at a local level, allowing users to identify potential exposure to environmental pollutants and related health risks across different communities. Officials from DOJ’s Environmental and Natural Resources Division told us that they regularly use EJSCREEN to help determine if cases involve environmental justice issues. Also, since 2015, EPA and HHS’s National Institute on Minority Health and Health Disparities and National Institute of Environmental Health Sciences have co-funded a collaborative research and data effort called the Centers of Excellence on Environmental Health Disparities Research. This effort facilitates research on diseases that are a burden on populations with environmental justice issues and promotes knowledge sharing among researchers. Example of Addressing Environmental Justice Issues in EPA Rulemaking In January 2017, EPA released a final rule amending its Risk Management Program, a program under the Clean Air Act that requires facilities using extremely hazardous substances to develop a risk management plan to submit to EPA at least once every 5 years. The rule changes were identified by a Chemical Facility Safety and Security Working Group composed of the Administrator of EPA and the department heads of Labor, Homeland Security, Justice, Agriculture, and Transportation, which was created in 2013 by Executive Order 13650 after chemical facility incidents that resulted in fatalities.
The executive order requires that the working group develop ways to improve operational coordination with state, local, tribal, and other partners, including enhancing federal agency information sharing. In a May 2014 report, the working group cited the need to familiarize all agencies with Executive Order 12898 on environmental justice. It identified concerns of communities living adjacent to chemical facilities, many of them low-income and minority, and the need to share information with these communities, including first responders. Under EPA’s 2017 rule, risk management plans must be provided to members of the public upon request. The notice publishing the final rule contained a section on environmental justice comments and EPA’s responses to environmental justice concerns. In May 2018, EPA proposed to rescind several amendments to the rule after industry and some states raised concerns about the cost and burden of carrying it out. Promote enforcement of health and environmental statutes. In their most recent environmental justice strategic plans, 13 agencies discussed planning to promote enforcement of health or environmental statutes in some form. At least 12 agencies provided examples of actions they implemented to promote enforcement, including ensuring enforcement of environmental laws in communities with environmental justice issues and addressing such issues in the resolution of cases against violators. For example, in its 2017 progress report, EPA reported combining EJSCREEN with enforcement and compliance data to help regional offices and state, local, and tribal authorities focus reviews of compliance with environmental laws in overburdened communities. EPA reported reviewing all enforcement cases to see if communities with environmental justice issues were affected and tracking how agency enforcement actions to resolve these cases benefitted the affected communities.
Through this tracking, EPA reported that 45 percent of Supplemental Environmental Projects—a type of beneficial environmental project implemented as part of a civil enforcement action settlement—in fiscal year 2017 were in locations with potential environmental justice issues. Ensure greater public participation. In their most recent environmental justice strategic plans, 14 agencies discussed planning to ensure greater public participation in decision-making processes. All 14 agencies provided examples of public participation actions they implemented, including seeking public input on their environmental justice strategic plans or consulting communities directly during environmental analyses under NEPA, siting decisions, or enforcement cases. For example, in its 2016 progress report, DOI reported formally inviting tribes to participate in environmental analyses and revising policies on tribal-government relations. DOI also continued to have publicly designated environmental justice coordinators for each of its bureaus (e.g., Bureau of Land Management), many of which deal directly with tribes or manage natural resources they rely on, such as land or water. Example of an EPA Environmental Justice Grant to Study Microplastics in Tribal Foods In 2017, the Sitka Tribe of Alaska received an Environmental Protection Agency (EPA) Environmental Justice Small Grant to study microplastics in its traditional food sources, such as mussels and clams. Microplastics are tiny pieces of plastic that are less than 5 millimeters in length and, according to EPA, may contain toxic chemicals that can pose human health and ecosystem risks when ingested by aquatic animals. According to EPA, the tribe planned to collect samples of water and traditional foods from four locations within its traditional territory and test them for the presence of microplastics and associated toxins.
The results were to be shared with the tribe and the public to inform decisions about harvesting traditional foods. Local students collected and tested Butter Clam and Blue Mussel samples in 2018; the testing showed that more than 80 percent of the mussels and 100 percent of the clams contained microfibers and other microplastic particles. Identify differential patterns of consumption of natural resources. Because many Native Americans and other minority communities rely on hunting, foraging, or fishing for food, five agencies planned actions to identify or address risks to these food sources in their most recent environmental justice strategic plans. At least eight agencies provided examples of actions they implemented in this area, including collecting or providing information on human health risks associated with the consumption of polluted fish or wildlife. For example, in its 2015 progress report, USDA reported that the Forest Service’s Alaska Regional Office coordinated with DOT’s Federal Aviation Administration to accelerate cleanup of petroleum-contaminated soil at a mixed-ownership site containing national forest lands. According to USDA, the need for accelerated cleanup arose because rising sea levels and tidal surges encroaching on the area would have washed the pollutants into nearby waters supporting a local subsistence fishery. Implement NEPA. In their most recent environmental justice strategic plans, 12 agencies discussed planning to consider environmental justice issues in their NEPA analyses. At least 13 agencies provided examples of NEPA actions they had implemented, including providing internal guidance on how to include environmental justice issues in NEPA analyses. For example, at DOI, it is departmental policy for all bureaus to include consideration of environmental justice in the NEPA process, and some bureaus have developed their own guidance for doing so.
For example, DOI’s 2015 National Park Service NEPA Handbook requires the agency’s environmental analyses to discuss and evaluate the impact of proposals on minority and low-income populations and communities, including the distribution of the benefits and risks among different communities and populations. Implement Title VI of the Civil Rights Act of 1964. In their most recent environmental justice strategic plans, 11 agencies planned to consider environmental justice issues when implementing their Title VI programs. At least 10 agencies provided examples of Title VI environmental justice actions they implemented, some of which focused on providing training and guidance. For instance, in 2016, DOJ, DHS, HUD, HHS, and DOT jointly issued interagency guidance on Title VI to state and local agencies involved in emergency activities. DHS and DOJ reported that DHS’s Office for Civil Rights and Civil Liberties and DOJ’s Civil Rights Division coordinated to distribute this guidance in the aftermath of the 2017 hurricane season to ensure that federal funding recipients (e.g., state and local agencies) were aware of their obligations to provide emergency management services across communities without discrimination. Consider impacts from climate change. In their most recent environmental justice strategic plans, nine agencies discussed planning to address impacts from climate change on communities with environmental justice issues. At least 11 agencies provided examples of actions they implemented in this area, including providing communities with information on how climate change may affect them. For example, in its 2016 progress report, DOI reported that the U.S. Geological Survey worked with the Swinomish Indian Tribal Community and Skagit River System Cooperative to build a coastal model to evaluate the impacts of sea-level rise, storm surge, and waves, including effects on foods such as salmon and shellfish.
DOI reported that the model was used to inform tribal climate adaptation and resilience plans. Consider impacts from goods movement. In their most recent environmental justice strategic plans, three agencies discussed planning to address environmental justice issues arising from goods movement, and at least five agencies provided examples of actions they implemented in this area. For example, DOT’s Federal Highway Administration developed a detailed freight and land use handbook in 2012, which highlights potential negative impacts in communities with minority or low-income residents (e.g., air quality or light pollution) and provides guidance on integrating freight and land-use planning to balance freight’s beneficial economic impacts and harmful environmental impacts for affected communities. For instance, the handbook advises using off-peak deliveries or anti-idling technologies to reduce impacts from emissions. Provide internal training. Eleven agencies also provided us with examples of training programs to help their staff identify and address environmental justice issues within their work. For example, EPA developed an introductory training on environmental justice, which was required training for all EPA staff agency-wide when it was first launched in 2015. More recently, EPA reported providing environmental justice training in 2017 to more than 1,000 employees and contractors across the government who were responsible for implementing NEPA. DOI developed a web-based introductory training on environmental justice in 2015 that is available to all DOI employees and became required training for project managers for the Central Hazardous Materials Fund in 2016.
Example of an EPA Environmental Justice Grant to Build Community Capacity to Reduce Exposure to Contaminated Soil through Community Education In 2017, the Trumbull Neighborhood Partnership in Warren, Ohio, received an EPA Environmental Justice Small Grant for an educational initiative to reduce residents’ exposure to soil contamination from former industrial activities, such as steel production. According to EPA, with support from the grant, the neighborhood partnership planned to create a curriculum of best practices, repurpose vacant land, and share a range of educational materials with residents to help them learn how to avoid exposure to contaminated soil. As part of the educational campaign on safe soil handling practices for residential and community land use, the partnership created a website to host educational materials and also shared the materials in person with residents and contacted local contractors to help ensure safe demolition practices. Conduct external capacity building. Thirteen agencies also provided examples of actions they implemented to fund and assist communities with environmental justice issues to build their capacity to access available resources and participate in federal decisions that affect them. For example, since its inception in 1994, EPA’s Environmental Justice Small Grants Program has awarded more than $24 million to over 1,400 organizations working with communities with environmental justice issues. EPA provides these grants for up to $30,000 to support projects that help communities build understanding of local environmental and public health issues, develop strategies for addressing these issues, and facilitate discussions about community priorities. 
Most Agencies Support Environmental Justice Efforts with Resources from Related Programs From fiscal year 2015 through 2018, most of the 16 agencies reported supporting environmental justice efforts through existing related program funding and staffing resources that were not specifically dedicated to environmental justice. EPA and DOE were the only agencies that dedicated resources specifically for environmental justice efforts in their budgets. In fiscal year 2018, EPA provided about $6.7 million, which, according to EPA officials, supported 31 full-time equivalents (FTEs), consisting of Office of Environmental Justice staff at its headquarters and environmental justice coordinators in regional offices, as well as two environmental justice grant programs. These staff support data tools such as EJSCREEN, provide training sessions, and coordinate federal efforts through the Interagency Working Group on Environmental Justice. The two grant programs provide communities with funding to research and understand potential environmental and health issues in their communities. For fiscal years 2015 through 2018, EPA awarded an average of about $1.2 million annually in environmental justice grants to communities through the Environmental Justice Small Grants Program and Environmental Justice Collaborative Problem-Solving Cooperative Agreement Program. EPA officials also reported using other related resources to support environmental justice efforts, but said the agency does not track these resources separately. In fiscal year 2018, DOE provided about $1.6 million and, according to DOE officials, one FTE for its environmental justice program in its Office of Legacy Management. These resources support activities to manage problems and concerns arising from the materials and chemicals on DOE sites by giving communities and tribes near these sites opportunities and tools to participate in DOE decisions.
DOE also uses its funds and staff to sponsor the annual National Environmental Justice Conference and Training Program and to participate in the interagency working group. Eleven of the remaining 14 agencies reported undertaking some environmental justice efforts with support from funding and staff from existing related programs (e.g., civil rights or environmental programs) from fiscal year 2015 through 2018. According to budget documents and agency officials, these 11 agencies did not formally track resources used to support environmental justice activities. Four of these agencies—USDA, DOI, GSA, and HUD—provided us with estimates of staffing or funding resources used to support environmental justice efforts. USDA estimated that a total of about eight FTEs annually were charged across many different staff for fiscal years 2015 through 2018 and that between $10,000 and $22,500 in funding annually supported the National Environmental Justice Conference and Training Program. DOI reported that it has one full-time Environmental Justice Outreach Specialist and that most DOI bureaus have an Environmental Justice Coordinator who handles environmental justice responsibilities as a collateral duty. DOI also reported funding one small research project related to environmental justice. GSA reported that staffing related to environmental justice efforts constituted a portion of the total FTE allocation within its Office of Civil Rights and estimated that this amounted to less than one FTE annually for fiscal years 2015 through 2018. HUD also estimated that less than one FTE was used specifically to support environmental justice efforts annually for the period, with one designated environmental justice lead and other staff serving on the working group as needed. Officials from the other seven agencies did not quantify estimates of resources but told us that staff conduct these activities as collateral duties.
For example, DHS told us that its Office of the Chief Readiness Support Officer, Office for Civil Rights and Civil Liberties, and Office of General Counsel support its environmental justice efforts as needed. In another instance, DOJ designated an Environmental Justice Director, created a Senior Litigator for Environmental Justice position, and reported that the department has other staff who spend a portion of their time working on environmental justice efforts. Several agencies also reported establishing internal working groups or other coordinating bodies to help implement their environmental justice efforts, which means using some staffing resources to support these coordinating efforts. Three agencies—DOD, Education, and SBA—reported providing no funding or staffing resources to carry out any environmental justice efforts and also did not report any examples of environmental justice efforts from fiscal year 2015 through 2018. Agency resources for environmental justice were one of the concerns raised by several stakeholders we interviewed (see textbox). Stakeholder Perspectives on Federal Environmental Justice Efforts Several stakeholders expressed concerns about agency resources, agency responsiveness to and awareness of environmental justice issues, legal tools for raising environmental justice concerns, or overall prioritization of environmental justice efforts. Stakeholders expressed concerns about the limited availability of resources for environmental justice efforts, including staff to carry out environmental justice work and funding for related programs. One stakeholder told us that agencies need to prioritize their environmental justice efforts because they have not identified all communities with potential environmental justice issues and lack the resources to address all environmental justice issues.
Several stakeholders discussed concerns about variation in agency staff familiarity with environmental justice issues or responsiveness to issues raised. Stakeholders also expressed concerns about the ability of existing legal tools to address environmental justice issues in the absence of a legal framework that specifically addresses them. For example, stakeholders said that risks from cumulative pollutant exposure are not addressed by existing environmental statutes. Several stakeholders also expressed concern about federal prioritization of environmental justice issues overall, including enforcement, changes to existing environmental regulations, and limited consideration of environmental justice in rulemaking processes. Some stakeholders we interviewed, including representatives from local and national nonprofit organizations, university professors, federal officials, and employees of private companies, also said that agencies’ efforts to build community capacity and develop tools that address environmental justice issues have been helpful. Stakeholders told us that EPA’s Environmental Justice Small Grants Program has helped communities, and DOE’s National Environmental Justice Conference and Training Program brings together grassroots leaders, stakeholders, and agencies. Stakeholders said that EJSCREEN is a useful tool for agencies and the public to screen for communities with potential environmental justice issues. Stakeholders also said agencies could use EJSCREEN in additional ways (e.g., in rulemaking and permitting) and discussed some limitations for its use (e.g., data limitations and the need to directly engage communities). The working group has collaborated in issuing guidance and in several other areas regarding environmental justice. The working group has also demonstrated three of the key features of interagency collaboration that we reviewed—leadership, clarity of roles and responsibilities, and written guidance and agreements. 
However, its use of two features of interagency collaboration—participation and organizational outcomes and accountability—was limited. The Working Group Has Collaborated to Issue Guidance and Assist Communities Collaboration from an Interagency Working Group Committee Assists with Environmental Justice Issues in Lowndes County, Alabama A November 2017 American Journal of Tropical Medicine and Hygiene study of hookworm conducted in Lowndes County, Alabama, highlighted a long-standing situation created by poor wastewater management affecting a largely rural, minority population in the state. The makeshift septic tanks that residents use in the absence of proper wastewater treatment infrastructure do not function properly in the moist, rich soil common in that area. This problem increased residents’ exposure to parasites, such as hookworm, through untreated wastewater. According to agency officials, in 2018, the General Services Administration collaborated with the Rural Communities Committee of the Interagency Working Group on Environmental Justice to help apply for Department of Agriculture rural development grant funding for decentralized sewer systems in Lowndes by using federal surplus personal property as matching funds. As of March 2016, the Equal Justice Initiative and Alabama Center for Rural Enterprise were working to identify and employ alternative decentralized technologies to treat wastewater in the county. The two entities were also attempting to write and implement policies requiring residents to connect to public sewers. In 2017, the Impacts from Commercial Transportation committee released a compendium on publicly available federal resources to assist communities impacted by goods movement activities. 
In fiscal year 2017, with input and vetting from the Rural Communities committee, USDA compiled and launched a web page with links to community tools, funding opportunities, educational or training assistance, and case studies to support rural communities, according to USDA officials. In March 2016, the NEPA committee issued guidance entitled “Promising Practices for Environmental Justice Methodologies in NEPA Reviews.” According to working group officials, this guidance can assist federal agencies with incorporating environmental justice during their NEPA reviews. In March 2019, the committee also completed guidance for communities entitled “Community Guide to Environmental Justice and NEPA Methods.” In 2016, the working group’s Rural Communities committee participated in a brownfields redevelopment conference to help local organizations understand and access resources to redevelop brownfields in their communities. In 2016, the Regional Interagency Working Groups committee coordinated technical assistance to communities in EPA’s regions 2 and 4. For example, the group is working in North Birmingham, Alabama, and other communities to evaluate air, water, and waste issues. The Working Group Demonstrated Some Key Features That Benefit Collaboration, but Participation and Use of Goals Were Limited With respect to the five key features of interagency collaboration that we reviewed, we found that the working group demonstrated leadership, clarity of roles and responsibilities, and written guidance and agreements. However, its use of two other key features—participation and clear goals—was limited. Leadership In our September 2012 report on interagency collaborative mechanisms, we identified leadership as a key feature of collaborative groups and stated that identifying a leader and sustaining that role throughout the groups’ efforts are important.
For the working group, EPA’s Administrator was identified as the chair of the group in both the 1994 executive order and the 2014 Charter for Interagency Working Group on Environmental Justice. EPA officials we interviewed described the agency’s role as providing guidance to the working group agencies and coordinating their efforts. More specifically, EPA officials we interviewed said that as chair of the working group, EPA’s responsibilities include the following: Convene monthly meetings with the working group. Provide public access to working group agencies’ environmental justice strategic plans and annual implementation progress reports, a list of working group agencies, and other information relevant to the working group. Lead the development and publication of the working group’s plans and reports. Clarity of Roles and Responsibilities Our September 2012 report identified the need for collaborative groups to have clarity about the roles and responsibilities of the participating agencies. We stated that clarity can come from agencies working together to define and agree on their respective roles and responsibilities, as well as steps for decision-making. The working group has done this by assigning roles to its chair and most of its member agencies. In particular, according to working group officials, the topics for the nine working group committees were based on the seven functions that the executive order assigned to the working group, as well as on public input. Officials from 13 of the working group members agreed to either chair or become a member of one or more committees. The topics that these committees address, their chairs, members, and purposes are identified in table 3. Written Guidance and Agreements Our September 2012 report on interagency collaborative mechanisms stated that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively.
Since 2011, when the 16 agencies and CEQ recommitted to carrying out environmental justice efforts, the working group has developed several such documents including: MOU on Environmental Justice. This document, signed in 2011, is an agreement among member agencies to recommit to addressing environmental justice issues. It also listed the four areas that the agencies agreed to work on: NEPA, Title VI of the Civil Rights Act, impacts from climate change, and impacts from goods movement. Charter for Interagency Working Group on Environmental Justice. This document, which was adopted in 2011 and updated in 2014, outlines the governance structure for the working group. It also lists four committees to help carry out the working group’s responsibilities under the executive order: public participation, regional interagency working group, Title VI, and strategy and implementation progress reports. Framework for Collaboration. This document, which was issued in 2016 and covered a 3-year period through 2018, listed four goals of the working group to advance greater federal agency collaboration. It also listed and described the purpose of the nine working group committees. Participation In our September 2012 report, we found that it is important to ensure that the relevant participants have been included in the collaborative effort. Participation in working group activities has been mixed. In the 2011 MOU, the 16 signing agencies and CEQ agreed to address environmental justice issues and participate as members of the working group. According to agency officials, most working group members attend the monthly meetings. The most active members of the working group, in terms of participation in working group committees, have been EPA and DOJ. EPA, the chair of the working group, also chaired or co-chaired six committees, and DOJ chaired or co-chaired four. Both also participated in all eight of the active committees (see table 4). 
However, four agencies—DOD, Education, SBA, and VA—did not attend any of the working group’s monthly meetings in fiscal year 2018. These agencies also did not participate as leaders or members in any working group committees in fiscal year 2018. Furthermore, DOD and SBA did not have a designated representative as of March 2019. These four agencies had various reasons for not participating more actively in the working group or its committees. DOD officials said that DOD has not been involved with the working group since August 2017, when its working group representative retired, because it does not have the resources to participate in the working group. Education officials also said that they have had a limited role with the working group because many of the topics discussed have not been relevant to their agency’s mission. For example, according to Education officials, while research has established that schools with poor environmental health conditions often serve disadvantaged students, Education does not have authority to plan, fund, construct, maintain, or operate school facilities and grounds. As discussed earlier, SBA officials we interviewed said that they were unclear on whether environmental justice applied to SBA’s mission and that they were in the process of reviewing whether SBA should continue its membership in the working group. VA officials confirmed that the agency has also been inactive with the working group but said they will call in to a meeting if there are topics of relevance. EPA officials commented that it is difficult to characterize what specific opportunities are missed because of the lack of representation by an agency. However, they also commented that nonparticipation limits the working group’s ability to fulfill its mandates in a strategic, methodical way across the entire federal government.
EPA officials further stated that the limiting factor for the working group in its efforts to address the executive order on environmental justice has always been the will of leadership across the federal government to make clear, measurable commitments to those priorities and to adequately resource the attainment of those commitments. However, the participants signed the 2011 MOU about 8 years ago, and the agreement has become dated and may not reflect the agencies' current commitments or abilities to participate in the working group or the broader environmental justice efforts. Our 2012 report on interagency collaborative mechanisms stated that written agreements and documents are most effective when they are regularly updated and monitored. By updating the 2011 MOU and renewing the commitment among participating agencies, EPA and the working group agencies would have more reasonable assurance that the agencies that sign the agreement are committed to participating.

Clear Goals

Our September 2012 report found that collaborative mechanisms such as the working group benefit from clear goals to establish organizational outcomes and accountability. The report stated that participants might not have the same overall interests or may even have conflicting interests, but by establishing a goal based on common interests, a collaborative group can shape its own vision and define its purpose. The executive order that created the working group assigned the working group seven functions to carry out, as listed in table 5. While the working group has developed documents with agreed-upon goals, which is beneficial to collaboration, none of them addresses all seven functions of the executive order. The working group's organizational documents do not contain strategic goals aligned to address the executive order, as suggested by our previous work on establishing clear goals for collaborative mechanisms.
Further, the three functions involving environmental justice research, data collection, and studies are not described as part of the goals of the working group, as laid out in its various documents:

The 2011 MOU includes four focus areas for the working group members: NEPA, Title VI, impacts from climate change, and impacts from goods movement. These do not include the executive order functions of environmental justice data collection, research, and studies.

The 2011 Charter for Interagency Working Group on Environmental Justice states that the committees were created to help carry out the working group's responsibilities under the executive order. The committees focus on certain working group roles and responsibilities, including NEPA, goods movement, strategic planning, and public participation. However, none of the committees focus on environmental justice research, data collection, or studies.

The working group's fiscal year 2016-2018 Framework for Collaboration has four goals for collaboration: (1) enhance communication and coordination to improve the health, quality-of-life, and economic opportunities in overburdened communities; (2) enhance multi-agency support of holistic community-based solutions to provide assistance as needed to address environmental justice issues; (3) advance interagency strategies to identify and address environmental justice issues in agency programs, policies, and activities; and (4) develop partnerships with academic institutions to assist in providing long-term technical assistance to overburdened communities. These goals do not pertain to environmental justice research, data collection, or studies.

We found that the organizational documents do not provide strategic goals with clear direction for the committees to carry out the functions of the working group as laid out in the executive order.
Our analysis, which compares the functions of the executive order to documented working group roles and responsibilities, shows that coordinated data collection and examination of research and studies on environmental justice are not included in these documents or committee purposes and have not been a focus of the interagency working group since at least 2011. A DOI official acknowledged that the working group has not addressed all of these functions from the executive order; the official attributed the omission to a lack of resources for the working group. EPA officials commented that some individual agencies, such as HHS and EPA, have done work in environmental justice data collection and research. As leaders of the working group, EPA officials told us that the 2011 MOU, committee groups, and framework for collaboration reflect the current priorities of the working group, based on the public's input. They were unsure whether a coordinated effort in the data collection, research, and studies areas was needed, but they said such an effort could be useful. They said that the most useful role of the working group in research may be as a forum for sharing of information and providing training opportunities. By clearly establishing strategic goals in the working group's organizational documents to carry out the 1994 executive order, EPA, in consultation with working group members, could enhance its strategic direction for intergovernmental environmental justice efforts.

Conclusions

The interagency working group on environmental justice and its 16 member agencies have put in place the building blocks for an environmental justice program across the federal government. They have conducted a number of efforts over the last 25 years to implement the Executive Order on Environmental Justice. Through these efforts, they have developed tools such as EJSCREEN and guidance for incorporating environmental justice under NEPA.
Most of the agencies have also developed strategic plans since 2011, although two agencies we reviewed have not, and many others have not kept their plans updated. SBA is in the process of reviewing whether it should continue its membership in the working group; completing that review should clarify SBA's role. DOD developed an environmental justice strategic plan in 1995, after the executive order was issued, but has not done so since 2011, when the interagency working group members signed the MOU. By updating its environmental justice strategic plan, DOD would have a foundation for its environmental justice efforts. Another seven agencies developed environmental justice strategic plans in 2012 but have not updated them since. By updating their strategic plans, these agencies—Commerce, DHS, DOJ, DOL, Education, HUD, and VA—would have a current plan to guide their environmental justice activities, as they committed to do in the 2011 MOU. Moreover, most agencies—Commerce, DOD, DOE, DOI, DOL, DOT, Education, HHS, HUD, USDA, and VA—have not shown clear progress toward achieving their environmental justice goals in the 8 years since they signed the working group's 2011 MOU because they have not consistently issued progress reports. By issuing progress reports each year, the agencies can provide essential information needed to assess their performance and demonstrate results. The 16 agencies and CEQ signed the 2011 MOU to establish a collaborative initiative across agencies to carry out environmental justice efforts. Under the leadership of EPA, they have also put in place a structure to coordinate with each other on their environmental justice efforts. One area in which the group has not coordinated, however, is developing guidance on what to include in strategic plans, such as demonstrating how environmental justice is part of an agency's mission, or developing methods to assess and report on progress, which many of the agencies said they needed.
Under GAO’s leading practices for strategic planning, agencies’ plans should address their missions, articulate goals, and lay the groundwork for assessing progress. Only half of the agencies that developed environmental justice strategic plans after 2011 clearly assessed how their plans fit into their overall missions. By developing guidance on what agencies should include in their environmental justice strategic plans, the working group could assist agencies in planning more strategically about what parts of their mission are important for achieving the environmental justice directives outlined in Executive Order 12898. Few of the agencies had performance measures or other methods to assess progress. By developing guidance on methods that the agencies could use to assess and report on progress, or creating a committee to do so, the working group could assist agencies in tracking and measuring their progress in achieving their environmental justice goals. In addition, the working group faces challenges of unclear strategic goals and mixed levels of participation. As noted in our earlier work, collaborative mechanisms, such as the working group, benefit from clear goals to establish organizational outcomes and accountability. Although the 1994 executive order created the working group to carry out the functions of the executive order, the working group’s framework focuses on how the agencies will collaborate rather than setting clear strategic goals to carry out the executive order. As a result, several of the executive order’s functions are not being carried out by the working group. By clearly establishing, in its organizational documents, strategic goals for the federal government’s efforts to carry out the 1994 executive order, EPA and the working group members could enhance the strategic direction for intergovernmental environmental justice efforts. 
Furthermore, by updating the 2011 MOU and having the 16 agencies and CEQ renew their commitment to participating in the interagency collaborative effort and the working group, EPA, as chair of the working group and in consultation with other working group members, would have more reasonable assurance that those who sign the agreement are committed to participating.

Recommendations for Executive Action

We are making a total of 24 recommendations to 15 agencies of the Interagency Working Group on Environmental Justice—nine to the federal agencies that need to develop or update strategic plans (recommendations 1-9); 11 to the federal agencies that need to develop annual progress reports (recommendations 10-20); and four to the Environmental Protection Agency as chair of the working group (recommendations 21-24).

The Secretary of Commerce should update the department's environmental justice strategic plan. (Recommendation 1)

The Assistant Secretary of Defense for Sustainment should update the department's environmental justice strategic plan. (Recommendation 2)

The Secretary of Education should update the department's environmental justice strategic plan. (Recommendation 3)

The Secretary of Homeland Security should update the department's environmental justice strategic plan. (Recommendation 4)

The Secretary of Housing and Urban Development should update the department's environmental justice strategic plan. (Recommendation 5)

The Attorney General of the United States should update the department's environmental justice strategic plan. (Recommendation 6)

The Secretary of Labor should update the department's environmental justice strategic plan. (Recommendation 7)

The Administrator of the Small Business Administration should complete the agency's assessment of whether to participate in the 1994 Executive Order and the 2011 Memorandum of Understanding, and, if appropriate, develop an environmental justice strategic plan.
(Recommendation 8)

The Secretary of Veterans Affairs should update the department's environmental justice strategic plan. (Recommendation 9)

The Secretary of Agriculture should issue a progress report on the department's environmental justice efforts each year. (Recommendation 10)

The Secretary of Commerce should issue a progress report on the department's environmental justice efforts each year. (Recommendation 11)

The Assistant Secretary of Defense for Sustainment should issue a progress report on the department's environmental justice efforts each year. (Recommendation 12)

The Secretary of Education should issue a progress report on the department's environmental justice efforts each year. (Recommendation 13)

The Secretary of Health and Human Services should issue a progress report on the department's environmental justice efforts each year. (Recommendation 14)

The Secretary of Energy should issue a progress report on the department's environmental justice efforts each year. (Recommendation 15)

The Secretary of Housing and Urban Development should issue a progress report on its environmental justice efforts each year. (Recommendation 16)

The Secretary of the Interior should issue a progress report on the department's environmental justice efforts each year. (Recommendation 17)

The Secretary of Labor should issue a progress report on the department's environmental justice efforts each year. (Recommendation 18)

The Secretary of Transportation should issue a progress report on the department's environmental justice efforts each year. (Recommendation 19)

The Secretary of Veterans Affairs should issue a progress report on the department's environmental justice efforts each year. (Recommendation 20)

The Administrator of EPA, as chair of the working group, should develop guidance for agencies on what they should include in their environmental justice strategic plans.
(Recommendation 21)

The Administrator of EPA, as chair of the working group, should develop guidance or create a committee of the working group to develop guidance on methods the agencies could use to assess progress toward their environmental justice goals. (Recommendation 22)

The Administrator of EPA, as chair of the working group, and in consultation with the working group, should clearly establish, in its organizational documents, strategic goals for the federal government's efforts to carry out the 1994 Executive Order. (Recommendation 23)

The Administrator of EPA, as chair of the working group, and in consultation with the other working group members, should update the 2011 Memorandum of Understanding and renew the agencies' commitments to participate in the interagency collaborative effort and the working group. (Recommendation 24)

Agency Comments and Our Evaluation

We provided a draft of this report to CEQ and 16 federal agencies—Commerce, DHS, DOD, DOE, DOI, DOJ, DOL, DOT, Education, EPA, GSA, HHS, HUD, SBA, USDA, and VA—for review and comment. Fourteen agencies provided comments on our report. The comments of 12 agencies—DHS, DOD, DOE, DOI, DOJ, DOL, DOT, Education, EPA, HHS, USDA, and VA—are reproduced in appendixes III-XIV, respectively. HUD and SBA provided comments by email. Of these 14 agencies, eight—DHS, DOE, DOI, DOJ, HHS, SBA, USDA, and VA—agreed with our recommendations. Of the other six agencies that provided comments, EPA agreed with two recommendations and disagreed with two others; DOD agreed with one recommendation and disagreed with one other; DOT partially agreed with its recommendation; DOL and HUD neither agreed nor disagreed with their recommendations; and Education did not agree with its two recommendations. We also made recommendations to Commerce, but it did not provide comments in time to include them in our report. Although we did not make recommendations to them, CEQ and GSA reviewed our report.
CEQ provided technical comments, which we incorporated as appropriate; GSA did not have any comments on our report. In addition to CEQ, we also received technical comments and clarifications from DHS, DOJ, DOT, EPA, HHS, and USDA, which we incorporated as appropriate. We directed four recommendations to EPA as chair of the Interagency Working Group on Environmental Justice; the recommendations are aimed at improving the strategic direction of the working group and the federal government's efforts. EPA stated that it appreciates our work in this subject area, understands the need for interagency coordination, and is working closely and collaboratively with its federal partners. EPA agreed with the two recommendations to develop guidance for agencies on what they should include in their environmental justice strategic plans (recommendation 21) and to develop guidance or create a committee of the working group to develop guidance on methods the agencies could use to assess progress toward their environmental justice goals (recommendation 22). However, EPA disagreed with the recommendations to update the 2011 MOU and renew the agencies' commitments to participate in the interagency collaborative effort and the working group (originally recommendation 23, now recommendation 24) and to clearly establish strategic goals for the federal government's efforts to carry out the 1994 Executive Order (originally recommendation 24, now recommendation 23). EPA stated that, instead of updating the MOU, the agency will lead efforts to update the working group's fiscal year 2016-2018 Framework for Collaboration to include guidance for strategic plans, tracking progress toward goals, and defining alignment with the executive order. The agency also said that it believes that the intent of recommendation 24 could be combined with recommendation 23, making recommendation 24 unnecessary.
We believe that EPA misunderstood recommendation 24 and do not agree it should be combined with recommendation 23. We agree with EPA that the working group can benefit from greater guidance on strategic plans, tracking goals, and alignment with the executive order to carry out federal environmental justice efforts. In our report, we list three organizational documents—the 2011 MOU, the 2011 Charter for Interagency Working Group on Environmental Justice, and the Framework for Collaboration. Our recommendation is for EPA to clearly establish strategic goals for federal efforts to carry out the executive order and does not specify which organizational document needs to be updated to address these issues. To help avoid confusion about the intent of this recommendation, we made two changes in the report. First, we clarified in the report that we were referring to the interagency working group’s strategic goals and organizational documents to show that we are not specifically recommending that the MOU be updated to meet this recommendation. Second, we switched the order of recommendations 23 and 24 so that our recommendation to establish strategic goals (previously recommendation 24) would no longer follow our recommendation to update the MOU. We disagree with EPA that it does not need to update the working group’s MOU because it plans to update the working group’s Framework for Collaboration. We believe that the MOU needs to be updated to address the matter of participation by the members who signed it but do not participate. As discussed in our report, the 2011 MOU is an agreement among member agencies to commit to addressing environmental justice issues. We do not have an opinion on when this document needs to be updated, however, and we believe that it can be updated after the working group discusses its strategic goals and updates its other organizational documents. 
Federal agencies may clarify how they can best participate through discussions of the working group's goals and how they can meet the purposes of the executive order. DOD agreed with the recommendation that it update its environmental justice strategic plan (recommendation 2) but disagreed with the recommendation that it issue a progress report on its environmental justice efforts each year (recommendation 12). DOD provided two primary reasons why it disagreed with this recommendation. First, DOD stated that it had achieved the intent of Executive Order 12898 by including environmental justice considerations in its decision-making processes, primarily by using the NEPA review process. Second, the department stated that it has limited ability to further the implementation of environmental justice and create new goals and metrics, given its operating locations and mission. DOD stated that it is bound by its mission, with limited opportunities to change where the department operates. According to DOD, for it to create new bases or close existing ones, it must first obtain congressional approval and then perform a NEPA analysis prior to implementation; its mission does not include a federal role in regulating or directing off-base activity or land uses; and, aside from the U.S. Army Corps of Engineers' civil regulatory functions, it does not routinely issue environmental permit decisions like federal regulatory agencies. DOD stated that these reasons make it a significant challenge for the department to meet our recommendation and that it therefore does not see a tangible benefit to additional reporting. We disagree with DOD that it does not need to issue a progress report on its environmental justice efforts each year. As we state in the report, the purpose of an annual progress report is to provide essential information needed to assess federal agencies' performance and hold agencies accountable for achieving results.
Reporting is part of a broader performance management process that includes identifying mission and desired outcomes, measuring performance, and using this information to report on performance and to identify gaps in performance. DOD would be reporting on goals that it set within its mission and authorities. For this reason, we continue to believe that by issuing progress reports each year, DOD could have more reasonable assurance that it has the necessary information to assess its performance and to demonstrate results. DOT stated that it partially concurs with recommendation 19 that it issue progress reports annually. DOT stated that it commits to issuing progress reports on its environmental justice efforts “when it determines that the circumstances of its activities so warrant.” However, we continue to believe that DOT should issue progress reports each year because doing so would give DOT more reasonable assurance that it has the information needed to assess its performance and to demonstrate results. DOL neither agreed nor disagreed with the two recommendations for it to (1) update its environmental justice strategic plan and (2) issue a progress report on its environmental justice efforts each year (recommendations 7 and 18). DOL stated that it values our review of its work in this area and will review the recommendations and take appropriate actions to improve program performance and delivery of services. HUD also neither agreed nor disagreed with our recommendations for it to update its environmental justice strategic plan and issue a progress report on its environmental justice efforts each year (recommendations 5 and 16). In an email, a HUD audit liaison official stated that the agency had no comments at this time and will continue to work with the current administration and the working group to update its environmental justice strategic plan and issue a progress report on its environmental justice efforts. 
Education stated that our report did not sufficiently account for the limitations on its legal authority in the subject area of environmental justice and that our report would be more accurate and comprehensive if it included more information about the department’s limited role. Education also stated that it did not agree with the recommendations to update its environmental justice strategic plan (recommendation 3) and issue a progress report on its environmental justice efforts each year (recommendation 13) because it does not believe this is the most appropriate course of action for the department or an efficient use of resources. We disagree with Education’s assessment. In the report, we discuss Education officials’ comments that they have a limited role with the working group because many of the topics discussed have not been relevant to their agency’s missions. We also discuss Education’s legal authority by including Education officials’ comment that the department does not have federal authority to plan, fund, construct, maintain, or operate school facilities and grounds. As discussed in the report, by updating its strategic plan, Education would have a current plan to guide its environmental justice activities, as it committed to do in the 2011 MOU. By issuing progress reports each year, Education could have more reasonable assurance that it has the necessary information to assess its performance and to demonstrate results. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
We are sending copies of this report to the appropriate congressional committees; the Chair of the Council on Environmental Quality; the Attorney General, Department of Justice; the Administrators of the Environmental Protection Agency and General Services Administration; the Acting Administrator of the Small Business Administration; the Secretaries of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Housing and Urban Development, the Interior, Labor, Transportation, and Veterans Affairs; and the Acting Secretary of the Department of Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XV.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) the extent to which the 16 working group agencies have developed environmental justice strategic plans and shown progress toward environmental justice goals since 2011; (2) the actions agencies have taken to identify and address environmental justice issues related to their programs, policies, and activities since the executive order was issued in 1994 and the resources they have used to do so in recent years; and (3) the extent to which the Interagency Working Group on Environmental Justice (working group) has collaborated on environmental justice efforts.
Sixteen federal agencies and one agency of the Executive Office of the President are involved in environmental justice efforts: the Council on Environmental Quality (CEQ), Environmental Protection Agency (EPA), General Services Administration (GSA), Small Business Administration (SBA), Department of Agriculture (USDA), Department of Commerce (Commerce), Department of Defense (DOD), Department of Education (Education), Department of Energy (DOE), Department of Health and Human Services (HHS), Department of Homeland Security (DHS), Department of Housing and Urban Development (HUD), Department of the Interior (DOI), Department of Justice (DOJ), Department of Labor (DOL), Department of Transportation (DOT), and Department of Veterans Affairs (VA). To address these objectives, we reviewed Executive Order 12898 (Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations), the 2011 Memorandum of Understanding on Environmental Justice (MOU), working group documents, and agency environmental justice strategic plans and progress reports, and interviewed federal agency officials about the documents. We also attended the 2018 National Environmental Justice Conference and Training Program, in which leaders from various sectors share ideas and approaches to achieving environmental justice. At this conference, we observed sessions to gain background and context and interviewed some attendees whom we identified and arranged to interview prior to the conference. We also visited sites in Oakland, California, and Richmond, California, to add context to our review with observations of communities with environmental justice issues. We selected these sites because they had minority and low-income populations with environmental and health concerns. Including interviews we conducted at the conference, we conducted 33 interviews with environmental justice stakeholders about federal environmental justice efforts and related issues. 
Of these interviews, 10 were with representatives from national nonprofit organizations, seven were with representatives from nonprofit groups who work on local issues, six were with university professors, four were with employees of private companies, two were with current or former government officials, and four were with mixed groups of stakeholders. We identified these stakeholders for interviews from our background interviews and document reviews. The views of the stakeholders we interviewed cannot be generalized to all similar stakeholders, but they represent a range of stakeholder perspectives and provide illustrative examples of views of agency efforts. To examine the extent to which the 16 agencies developed environmental justice strategic plans since 2011, we determined which agencies had completed an environmental justice strategic plan after signing the 2011 MOU and which agencies had also updated their plans at EPA’s request in 2016. We made these determinations by reviewing the website of each agency for its environmental justice documents, reviewing the environmental justice strategic plans, and interviewing agency officials about the origin and status of these environmental justice strategic plans. To examine the extent to which the 16 agencies showed progress toward environmental justice goals since 2011, we determined whether each agency had completed annual environmental justice progress reports for each year for fiscal year 2012 through fiscal year 2017 by reviewing the website of each agency to identify these progress reports, reviewing the progress reports we located, and interviewing agency officials about the status and content of these progress reports. 
We also reviewed the environmental justice strategic plans and progress reports to assess whether agencies included a method to assess progress in accordance with GAO’s leading practices for strategic planning and reporting, including establishing goals and establishing a method to assess progress toward goals. Specifically, we analyzed whether each agency’s environmental justice strategic plan included goals and performance measures or milestones, and whether each agency assessed progress toward these goals using performance measures or milestones in subsequent progress reports. We also interviewed agency officials about their progress toward the goals of Executive Order 12898. To examine the actions the 16 agencies took to identify and address environmental justice issues related to their programs, policies, and activities since the executive order was issued in 1994, we reviewed agency environmental justice strategic plans, progress reports, and related documents to identify illustrative examples of agency efforts in each of the areas outlined in Executive Order 12898 and the 2011 MOU as well as two additional areas identified by agencies. We also interviewed officials from each agency to confirm or gather additional information on these examples. The analysis included a detailed review of the most recent environmental justice strategic plan and progress report for each agency to identify examples of agency actions and a content analysis of the most recent environmental justice strategic plan for each agency. 
From this review, we (1) counted how many agencies discussed plans to identify and address environmental justice issues related to the areas outlined in the 1994 executive order and 2011 MOU in their most recent environmental justice strategic plan, (2) developed a list of illustrative examples of agency efforts to identify and address environmental justice issues related to these areas, and (3) counted how many agencies provided examples of actions they implemented related to these areas. The examples are not a generalizable sample of the types or instances of agency actions, but illustrate the various ways that different agencies are implementing plans to identify and address environmental justice issues and different approaches to doing so that may be useful for other agencies, the Interagency Working Group on Environmental Justice, and environmental justice stakeholders. We report a minimum count of agencies that provided examples for each area because most agencies did not formally report on progress annually and the information we reviewed does not provide a complete record of agency environmental justice efforts. To examine what resources working group members used to support their environmental justice efforts for fiscal year 2015 through 2018, we obtained and reviewed agency budget justification documents and agency estimates of resources data to determine which agencies (1) had any funding or staffing resources dedicated specifically for environmental justice in their budgets, (2) supported environmental justice efforts with a mix of existing funding and staff from related programs, or (3) did not report any examples of environmental justice efforts or use any resources specifically for any environmental justice efforts. 
We assessed the reliability of the agencies’ estimated resources data, including for agencies that estimated no resources were used to support any environmental justice efforts, by corroborating it with agency budget justification documents or internal agency budget documentation, interviewing agency officials about the data, and comparing it with information on any reported examples of environmental justice efforts. We found it reliable for our purposes of describing which agencies had any resources dedicated specifically for environmental justice in their budgets and of presenting estimates of other funding and staffing resources used to support environmental justice efforts. To determine the extent to which the working group has collaborated on environmental justice efforts, we reviewed working group documents, including the group’s fiscal year 2016-2018 Framework for Collaboration and associated progress reports, its published guidance entitled Promising Practices for Environmental Justice Methodologies in NEPA Reviews, and its resource guide entitled Goods Movement Federal Resources Compendium. We also conducted semi-structured interviews with officials from working group committees. We compared the working group’s organization, documents, and actions with key features of collaborative mechanisms that GAO has identified, including clarifying roles and responsibilities, participation, establishing written guidance and agreements, and establishing outcomes and accountability. We selected these features because they were most relevant to the activities of the working group organization. We conducted this performance audit from November 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Additional Examples of Agency Actions to Identify and Address Environmental Justice Issues
Agencies provided examples of actions to identify and address environmental justice issues:
Improve research and data collection
In 2017, the Department of Housing and Urban Development (HUD) and the Environmental Protection Agency (EPA) entered into a memorandum of understanding (MOU) to improve communication and data sharing about public and HUD-assisted housing located near contaminated Superfund sites to help both agencies prioritize actions protecting against human health and environmental risks. The Department of the Interior (DOI) provided an example in which the National Park Service used EPA’s Environmental Justice Mapping and Screening Tool (EJSCREEN) in 2015 to check for populations with respiratory health risks near a prescribed burn area (i.e., a planned, controlled fire to manage wildfire risks) in Jean Lafitte, Louisiana, as part of an environmental assessment (see fig. 1 for example of EJSCREEN display).
Promote enforcement of health and environmental statutes
Officials from the Department of Justice (DOJ) told us that its attorneys consider environmental justice issues when pursuing cases to enforce federal environmental laws, and in 2014 the department updated and reissued guidance on how its attorneys should identify and address environmental justice issues in their work. For example, DOJ reported in its 2017 progress report that it sought and incorporated community input on resolutions for a 2017 case involving several petrochemical facilities alleged to be violating the Clean Air Act that were located in Texas and Louisiana communities with environmental justice issues. DOJ reported that some of the injunctive relief and monitoring requirements included in the case settlement reflected suggestions made by the community.
According to internal DOI guidance from 2018, the Central Hazardous Materials Fund, which supports cleanup of contaminated sites on federal lands through the Comprehensive Environmental Response, Compensation, and Liability Act, requires projects to be screened for any potentially affected environmental justice communities and for the requesting bureau to work with any communities that are identified near the proposed project. In its 2014 progress report, Commerce reported that the National Oceanic and Atmospheric Administration (NOAA) developed a handbook on procedures for government-to-government consultation with federally recognized Indian tribes and Alaska Native Corporations as part of an effort to facilitate meaningful and timely input from Tribes into federal decisions that directly affect them. In 2013, DOJ and EPA reported seeking and incorporating input from low-income and minority communities on resolutions for several Clean Water Act violations for sewer overflows in cities in Tennessee, Mississippi, and Washington; these resolutions included requiring the cities to address overflows at specific sites impacting these communities and developing Supplemental Environmental Projects for the cities to fix leaking private sewer pipes.
Identify differential patterns of consumption of natural resources
In its 2016 progress report, DOJ reported that its Environment and Natural Resources Division negotiated a settlement to help improve the passage of steelhead and salmon—fish that are important to the Muckleshoot and Puyallup tribes—on the White River in Washington. In its 2016 progress report, DOI reported that the U.S. Geological Survey worked with the Stillaguamish tribe in Washington to assess the effects of possible wastewater contamination on fish and wildlife in the Stillaguamish River.
The Department of Homeland Security (DHS) issued an agency-wide directive on National Environmental Policy Act (NEPA) implementation in 2014, and the accompanying 2014 NEPA instruction manual included public involvement requirements for populations with environmental justice issues. To help agency staff implement this guidance, DHS included questions about potential environmental justice issues related to the proposed action in its NEPA assessment system. Since at least 2012, as part of the NEPA process for HUD-assisted projects, HUD has required the environmental review record to document any adverse and disproportionate impacts on low-income or minority populations, and steps to engage the community in meaningful participation about mitigating the adverse impacts or moving the project. The General Services Administration’s (GSA) 1999 Public Building Service NEPA Desk Guide includes a section specifically on environmental justice, which states that each GSA NEPA review should include some level of environmental justice analysis. In its 2015 progress report, GSA reported that it continues to consider environmental justice issues for proposed Public Buildings Service projects. The U.S. Department of Agriculture’s (USDA) 1997 Departmental Regulation on Environmental Justice directs USDA component agencies to incorporate environmental justice into their NEPA processes (e.g., Rural Development’s official guidance includes a section on integrating environmental justice and socioeconomic analyses into environmental reviews as part of the NEPA process).
Implement Title VI of the Civil Rights Act of 1964
In its 2017 progress report, EPA reported that its External Civil Rights Compliance Office provided training and technical assistance on federal civil rights obligations to local agencies, tribal governments, and 38 states across the agency’s 10 regions through outreach calls and meetings in 2017.
Consider impacts from climate change
According to the Department of Commerce, NOAA has developed information, tools, and services to help society understand, plan for, and respond to climate variability and change. As part of this effort, NOAA built a web-based resource called Digital Coast, which can be used to identify the risk of potential sea-level rise and inundation to vulnerable populations (e.g., low-income). According to the Department of Energy’s (DOE) 2015 progress report, the 2015 National Environmental Justice Conference and Training Program focused on climate change and climate justice. DOE also issued a 2015 report on the vulnerabilities that tribal energy systems, such as electric grid infrastructure, have to climate change and extreme weather, and announced a grant opportunity to establish clean energy projects and energy efficiency projects on tribal lands.
Consider impacts from goods movement
In its 2017 progress report, EPA reported prioritizing funding projects to reduce elevated diesel emissions from equipment moving goods and people near seaports and airports through its Diesel Emissions Reduction Act grants. The Department of Transportation’s (DOT) Federal Highway Administration created an Environmental Justice Tools Peer Network to share transportation practitioners’ experiences using EJSCREEN and other relevant data tools in decisions about transportation planning or project development. DOJ officials told us that new attorneys and staff in its Environment and Natural Resources Division—the primary division responsible for prosecuting environmental cases—received training on environmental justice issues. In its most recent environmental justice strategic plan, DOT reported that it offers environmental justice training throughout the agency to help federal employees and grantees ensure compliance with environmental justice policies.
For example, in its 2015 progress report, DOT stated that its Federal Highway Administration and Federal Transit Administration offered courses and webinars on such topics as environmental justice fundamentals, planning, and analysis; Title VI; and freight impacts. USDA officials told us that its Natural Resources Conservation Service developed a webinar in 2014 to help conservation planners, partners, and technical service providers understand, analyze, and document environmental justice issues related to planned conservation actions under NEPA, such as data sources and potential mitigation measures. In its 2017 progress report, EPA reported holding training sessions for community organizations on how to use EJSCREEN, how to apply for grants, and other strategies and resources to deal with specific environmental justice issues, such as lead exposure and poisoning. Since 2007, DOE has sponsored an annual conference, the National Environmental Justice Conference and Training Program, with support from other agencies, to bring together community leaders; federal, state, and local government representatives; tribal leaders; environmental justice organizations; and others. The conference provides a forum to share information, tools, and strategies for identifying and dealing with specific environmental justice issues that communities may be facing, and agencies in the working group reported participating. Since at least 2012, HUD has offered online training on environmental justice for HUD grantees to help build their capacity to meet environmental review responsibilities for HUD-assisted projects. In 2017, DOI and EPA entered into an MOU to collaborate on environmental justice and economic development issues by assisting underserved communities through academic partnerships, technical assistance, and training, in collaboration with the communities.
In its 2016 progress report, the Department of Labor reported that the Employment and Training Administration’s Job Corps, a job training program for low-income and at-risk youth, offers training in fields such as green building and hazardous waste removal.
Appendix III: Comments from the Department of Homeland Security
Appendix IV: Comments from the Department of Defense
Appendix V: Comments from the Department of Energy
Appendix VI: Comments from the Department of the Interior
Appendix VII: Comments from the Department of Justice
Appendix VIII: Comments from the Department of Labor
Appendix IX: Comments from the Department of Transportation
Appendix X: Comments from the Department of Education
Appendix XI: Comments from the Environmental Protection Agency
Appendix XII: Comments from the Department of Health & Human Services
Appendix XIII: Comments from the Department of Agriculture
Appendix XIV: Comments from the Department of Veterans Affairs
Appendix XV: GAO Contact and Staff Acknowledgments
GAO Contact
J. Alfredo Gómez, (202) 512-3841 or gomezj@gao.gov.
Staff Acknowledgments
In addition to the individual named above, Susan Iott (Assistant Director), Allen Chan (Analyst-in-Charge), Peter Beck, Hannah Dodd, Juan Garay, Rich Johnson, Matthew Levie, Ben Licht, Cynthia Norris, Amber Sinclair, Kiki Theodoropoulos, and Elise Vaughan Winfrey made key contributions to this report.
Why GAO Did This Study
Environmental justice seeks to address the disproportionately high distribution of health and environmental risks among low-income and minority communities by seeking their fair treatment and meaningful involvement in environmental policy. In 1994, Executive Order 12898 directed 11 federal agencies to identify and address environmental justice issues related to their activities and tasked an interagency working group to coordinate federal environmental justice efforts. In 2011, 16 agencies, including the 11 original agencies, recommitted to planning and reporting on environmental justice efforts by signing an MOU. GAO was asked to review federal environmental justice efforts. This report examines agencies' environmental justice actions, strategic plans and progress reports, and working group collaboration. GAO reviewed agency environmental justice plans, reports, and funding data; interviewed agency officials; and compared working group collaboration to leading collaborative practices.
What GAO Found
Most of the 16 agencies that are members of the interagency working group on environmental justice—created by Executive Order 12898 in 1994—reported taking some actions to identify and address environmental justice issues, such as creating data tools, developing policies or guidance, and building community capacity through small grants and training. For example, the Environmental Protection Agency (EPA) created a mapping tool that can help identify low-income and minority communities exposed to health or environmental risks. Several agencies, such as EPA and the Departments of Justice, Homeland Security, and the Interior, also developed policies or guidance to analyze environmental justice issues during environmental reviews or enforcement activities.
Most of the agencies supported their efforts with funds and staff from related programs, but EPA and the Department of Energy provided funds ($8.3 million in fiscal year 2018) and staff specifically for environmental justice. Agencies' progress toward environmental justice is difficult to gauge, however, because most do not have updated strategic plans and have not reported annually on their progress or developed methods to assess progress. As they agreed to do in a 2011 Memorandum of Understanding (MOU), most of the agencies developed environmental justice strategic plans, but only six have updated them more recently. Few agencies have measures or methods for assessing progress, and the working group has not provided guidance to help agencies with such assessments. The number of agencies issuing annual progress reports has declined (see fig.). Updated strategic plans and annual progress reports, along with guidance on performance measures and methods, would help agencies provide essential information to assess their progress. The working group, chaired by EPA, has developed committees and written agreements to carry out its responsibilities to coordinate agencies' environmental justice efforts, but it is not carrying out several functions in the 1994 Executive Order. GAO has found that collaborative mechanisms, such as the working group, benefit from clear goals, but the working group's organizational documents do not contain clear strategic goals aligned to address the order. Clear strategic goals to carry out the executive order could enhance the group's strategic direction for intergovernmental environmental justice efforts.
What GAO Recommends
GAO is making 24 recommendations, including that agencies update environmental justice strategic plans and report on progress annually, and that EPA consult with other working group members to provide guidance on assessing progress and to set strategic goals. Of the 15 agencies with recommendations, eight agreed.
Other agencies' responses included partial agreement, disagreement, and no comment. GAO continues to support its recommendations.
Background
In creating the military justice system, Congress established three types of military courts, called the summary, special, and general courts-martial, to adjudicate UCMJ violations. Each of these types of military courts is intended to deal with progressively more serious offenses, and each court-martial type may adjudicate more severe maximum punishments as prescribed under the UCMJ. In addition, an accused servicemember can receive nonjudicial punishment under Article 15 of the UCMJ, by which a commander can punish a servicemember without going through the court-martial process. There are several steps in the discipline of a servicemember who allegedly commits a crime under the UCMJ, which are summarized in figure 1 below. The military justice process begins once an offense is alleged and an initial report is made, typically to law enforcement, an investigative entity, or the suspect’s chain of command. The commanding officer, law enforcement, or a military criminal investigative organization (MCIO) will conduct an inquiry or investigation into the accusation and gather all reasonably available evidence. Investigations are recorded in MCIO databases when a servicemember is the subject of a criminal allegation; for the purposes of our report, we say the servicemember had a “recorded investigation” to describe these cases. Following an investigation, the first step toward initiation of a court-martial is when the accused is presented with a list of charges signed by the accuser under oath, which is called preferral of charges. After charges are preferred, the charges are forwarded to an officer with sufficient legal authority to convene a court-martial, also known as the “convening authority.” The convening authority in receipt of preferred charges may, among other actions, refer the case to its own court or forward the case to a superior commander for disposition.
Once referred to a general or special court-martial, an accused servicemember may be tried by a military judge alone or by a military judge with a military jury. In summary courts-martial, a single commissioned officer who is not a military judge adjudicates minor offenses and a sentence. Convictions at the general and special court-martial level are subject to a post-trial process and may be appealed to higher courts in cases where the sentence reaches a certain threshold. The military justice system, like the civilian criminal justice system, provides avenues for accused servicemembers to raise allegations of discrimination, improprieties in investigations, improprieties in disposition, and improprieties in the selection of the military jury at the court-martial proceeding, before a judge and on appellate review.
The Military Services Do Not Collect, Maintain, and Report Consistent Information about Race and Ethnicity, Limiting the Ability to Assess Data to Identify Any Disparities
The Military Services Do Not Collect and Maintain Consistent Data for Race and Ethnicity
The military services do not collect and maintain consistent information regarding race and ethnicity in their investigations, military justice, and personnel databases. Specifically, the number of potential responses for race and ethnicity within the 15 databases across the military services ranges from 5 to 32 options for race and 2 to 25 options for ethnicity, which can complicate cross-service assessments. For example, the Army’s personnel database maintains 6 options for race and 23 options for ethnicity, whereas the Coast Guard’s personnel database maintains 7 options for race and 3 for ethnicity. Table 1 below summarizes how the databases used by the military services vary in how the servicemember’s race is entered and the number of potential race options.
Table 2 shows that the military services’ databases also vary in how information about servicemembers’ ethnicity is entered into the databases and the number of potential ethnicity options that are collected. Although the data collected and maintained was not consistent within and across the military services, each of the military services’ databases maintained race and ethnicity data for at least 99 percent of the servicemembers, with the exception of the Coast Guard. The Coast Guard did not track information about race or ethnicity in its military justice database, Law Manager. Coast Guard officials stated that this is because Law Manager was designed to determine the status of court-martial cases, and captures attributes that are needed to generate relevant UCMJ documents, such as court pleadings. Demographic information such as race and ethnicity is not included in these official documents, so this information is not input into Law Manager. Further, four of the databases we reviewed—including both of the Army’s military justice databases, and the Navy and the Marine Corps’ military justice databases—collect information on race and ethnicity in a combined data field as shown in table 2 above, whereas the other databases collect and maintain race and ethnicity information in two separate fields. These inconsistencies limit the military services’ ability to collectively or comparatively assess these demographic data to identify any racial or ethnic disparities in the military justice system within and across the services.
Recommendations to collect and maintain race and ethnicity information in investigations and personnel databases.
To address these inconsistencies, in our May 2019 report, we made four separate recommendations to each of the military departments and to the Secretary of Homeland Security for the Coast Guard.
We recommended that these entities develop the capability to present servicemembers’ race and ethnicity data in their investigations and personnel databases using the same categories of race and ethnicity established in the uniform standards for the military justice databases that were issued in December 2018. As part of these uniform standards, the military services were directed to collect data related to race and ethnicity in their military justice databases, to collect race and ethnicity data in separate data fields, and to standardize the reporting of the data into categories identified in the standards. However, DOD applied these December 2018 standards only to the military justice databases and not to the investigations and personnel databases. DOD officials stated that the investigations and personnel databases do not fall under the charter of the DOD General Counsel, which issued the standards for the military justice databases. DOD and the Department of Homeland Security (DHS) concurred with these four recommendations. As of October 2019, officials from each of the military departments said that they were working to implement the uniform standards for race and ethnicity and the ability to aggregate the data, and they expected to implement these categories in December 2020. Similarly, as of May 2019, the Coast Guard expected to implement such modifications by September 2020.
The Military Services Have Not Consistently Reported Data That Provides Visibility about Racial Disparities
Although some military services report demographic information about the subjects of military justice actions internally, the military services have not externally reported data that provides visibility into, or would enable an analysis of, the extent of racial or ethnic disparities in the military justice system.
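Presenting race data from many databases using the same categories, as the uniform standards discussed above direct, is at its core a mapping exercise. The sketch below is a hypothetical illustration only: the service-specific source codes and the target category names are invented for the example and are not the actual December 2018 standards.

```python
# Hypothetical sketch: normalizing service-specific race codes from several
# databases into one uniform category set so records can be aggregated and
# compared across services. All codes and categories here are illustrative.
UNIFORM_RACE = {
    "BLK": "Black or African American",
    "AFAM": "Black or African American",
    "WHT": "White",
    "CAUC": "White",
    "ASN": "Asian",
}

def normalize_race(code):
    """Map a database-specific code to its uniform category.

    Unrecognized codes are flagged rather than silently dropped, so that
    gaps in the mapping table surface during data-quality review.
    """
    return UNIFORM_RACE.get(code.strip().upper(), "UNMAPPED")

# Two databases storing the same category under different codes now agree.
assert normalize_race("blk") == normalize_race("afam")
```

Keeping the lookup table as data (rather than branching logic) makes it easy to audit against whatever category standard is ultimately adopted.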
Officials from all of the military services told us that they compile internal quarterly or monthly staff judge advocate reports, which include the total number of each type of court-martial handled by their legal offices and of nonjudicial punishments. According to military service officials, the Air Force and the Army reports include demographic information about servicemembers involved in these cases, such as the total number of each type of case broken out by the subject’s race and ethnicity. However, the Navy, Marine Corps, and Coast Guard reports do not include this demographic information, and there was no requirement to do so at the time of our May 2019 report. Regarding external reporting, the UCMJ directs the Court of Appeals for the Armed Forces, the Judge Advocates General, and the Staff Judge Advocate to the Commandant of the Marine Corps to submit annual reports on the military justice system to the Congressional Armed Services Committees, the Secretary of Defense, the secretaries of the military departments, and the Secretary of Homeland Security. These reports are to include information on the number and status of pending cases handled in the preceding fiscal year, among other information. The annual reports include the total number of cases each military service handled for each type of court-martial and for nonjudicial punishments. However, prior to our review, these annual reports did not include demographic information about servicemembers who experienced a military justice action, such as breakdowns by race, because the reporting requirement did not direct the military services to include such information.
Recommendation to require military services to include data about race and ethnicity in annual reports about military justice actions.
In our May 2019 report, we recommended that the Joint Service Committee on Military Justice, which is responsible for reviewing the UCMJ annually, consider an amendment to the UCMJ’s annual military justice reporting requirements to require the military services to include demographic information, including race and ethnicity, for all types of courts-martial. DOD concurred with this recommendation. According to a memorandum from the Joint Service Committee on Military Justice, in September 2019 the committee proposed an action item as part of its annual review. Specifically, the committee was considering an amendment to the UCMJ’s annual military justice reporting requirements to require the military services to include demographic information, including race and ethnicity, for all types of courts-martial. However, in December 2019, the National Defense Authorization Act for Fiscal Year 2020 included a provision directing the Secretary of Defense to include data on race, ethnicity, and gender in the annual military justice reports. We believe that this statutory change meets the intent of our recommendation. By requiring the military services to report this information, servicemembers and the public will have greater visibility into potential disparities, which will help build confidence that DOD is committed to a military justice system that is fair and just.
DOD Has Not Identified When Disparities Should Be Examined Further
DOD has not issued guidance that establishes criteria to specify when any data indicating possible racial or ethnic disparities in the investigations, trials, or outcomes of cases in the military justice system should be further reviewed, and to describe what steps should be taken to conduct such a review if it were needed.
While equal employment opportunity enforcement is a very different context from the military justice system, other federal agencies have developed such criteria in the equal employment opportunity context that can indicate when disparities should be examined further. For example, the Department of Justice, the Department of Labor, the Equal Employment Opportunity Commission, and the Office of Personnel Management use a “four-fifths” test to determine when differences between subgroups in the selection rates for hiring, promotion, or other employment decisions are significant. These criteria, though inexact, provide an example of the type of criteria that DOD could consider using as a basis for determining when disparities among racial groups in the military justice process could require further review or analysis.
Recommendation to issue guidance to establish criteria that determine when racial and ethnic disparities should be reviewed.
In our May 2019 report, we recommended that the Secretary of Defense, in collaboration with the Secretaries of the military departments and the Secretary of Homeland Security, issue guidance that establishes criteria to specify when data indicating possible racial, ethnic, or gender disparities in the military justice process should be further reviewed, and that describes the steps that should be taken to conduct such a review. In commenting on a draft of our report, DOD partially concurred with this recommendation, agreeing with the content, but requesting that we modify the recommendation to direct it to more appropriate entities. That change was made before our report was issued. In October 2019, DOD officials said that the department was exploring the feasibility of conducting relevant research to inform implementation of this recommendation. At that time, they estimated that this research might be concluded in March 2021.
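As a rough illustration of how the four-fifths test described above operates, it flags any subgroup whose selection rate falls below 80 percent of the highest subgroup's rate. The sketch below uses hypothetical group names and rates, not data from this review.

```python
# Illustrative sketch of the "four-fifths" (80 percent) rule from the equal
# employment opportunity context: a group's selection rate that is less than
# four-fifths of the highest group's rate is generally treated as evidence
# of a disparity warranting further review. All inputs are hypothetical.

def four_fifths_flags(selection_rates):
    """Return the groups whose rate falls below 4/5 of the highest rate."""
    highest = max(selection_rates.values())
    threshold = 0.8 * highest
    return {group: rate for group, rate in selection_rates.items()
            if rate < threshold}

rates = {"Group A": 0.50, "Group B": 0.45, "Group C": 0.35}
flagged = four_fifths_flags(rates)
# Group C's rate (0.35) is below 0.8 * 0.50 = 0.40, so it is flagged;
# Group B's rate (0.45) is not.
```

The test is deliberately coarse: it identifies candidates for closer analysis rather than proving a disparity, which is consistent with how the report frames it as "inexact" criteria.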
In December 2019, the National Defense Authorization Act for Fiscal Year 2020 included a provision directing the Secretary of Defense to issue guidance consistent with our recommendation. DOD was directed to commence or carry out these activities by June 2020. We believe that issuing guidance that establishes criteria for determining when data indicating possible racial disparities in the investigations, trials, or outcomes of cases in the military justice system should be further examined, and describes the steps that should be taken to conduct such further examination, would better position DOD and the services to monitor the military justice system to help ensure that it is fair and just, a key principle of the UCMJ.
Racial Disparities Exist in Military Justice Investigations, Disciplinary Actions, and Case Outcomes
Racial disparities exist in investigations, disciplinary actions, and punishment of servicemembers in the military justice system. Our analysis of available data from fiscal years 2013 through 2017, which controlled for attributes such as race, gender, rank, education, and years of service, found racial disparities were more likely in actions that first brought servicemembers into the military justice system, but we identified fewer statistically significant racial disparities in case outcomes—convictions and punishment severity.
Black and Hispanic Servicemembers Were More Likely to Be Subjects of Recorded Investigations and Tried in General and Special Courts-Martial
Black and Hispanic servicemembers were more likely than White servicemembers to be the subjects of recorded investigations in all of the military services, and were more likely to be tried in general and special courts-martial in the Army, the Navy, the Marine Corps, and the Air Force, as shown in figure 2 below. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017.
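The comparisons described above "controlled for" servicemember attributes using multivariate regression models. As a simplified, hypothetical illustration of why such controlling matters, the sketch below holds a single attribute (a notional rank band) fixed by stratification rather than regression; the records and group labels are entirely made up.

```python
# Hypothetical sketch: comparing the rate of a military justice action
# across two notional groups, first pooled, then within one rank band.
# GAO's actual analysis used regression over several attributes; this
# only illustrates how a pooled comparison can mask a within-stratum gap.
from collections import defaultdict

def rates_by_group(records, within=None):
    """Action rate per group, optionally restricted to one rank band."""
    counts = defaultdict(lambda: [0, 0])  # group -> [actions, total]
    for rec in records:
        if within and rec["rank_band"] != within:
            continue
        counts[rec["group"]][0] += rec["action"]
        counts[rec["group"]][1] += 1
    return {g: actions / total for g, (actions, total) in counts.items()}

records = (
    [{"group": "X", "rank_band": "junior", "action": 1}] * 6
    + [{"group": "X", "rank_band": "junior", "action": 0}] * 94
    + [{"group": "Y", "rank_band": "junior", "action": 1}] * 4
    + [{"group": "Y", "rank_band": "junior", "action": 0}] * 46
    + [{"group": "Y", "rank_band": "senior", "action": 1}] * 1
    + [{"group": "Y", "rank_band": "senior", "action": 0}] * 49
)
overall = rates_by_group(records)                    # X 0.06 vs Y 0.05
junior_only = rates_by_group(records, within="junior")  # X 0.06 vs Y 0.08
```

Here the pooled rates suggest group Y is less likely to face an action, but among junior members alone Y's rate is higher; group Y simply has more senior members, who face few actions. This is the kind of confounding that controlling for rank, education, and years of service is meant to remove.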
When separating general and special court-martial cases into those that either were or were not preceded by an investigation recorded in an MCIO database, we found fewer statistically significant racial disparities in most of the military services in general and special courts-martial that were preceded by a recorded investigation. However, as shown in figure 3 below, statistically significant racial disparities were also present in general and special courts-martial that did not follow a recorded investigation in all military services included in this analysis, which would include cases where the investigation was performed by the servicemember’s command. Specifically, as shown in figure 3 above, we found that:
General and special courts-martial following a recorded investigation. Black, Hispanic, and servicemembers in the Other race category in the Army, and Hispanic servicemembers in the Marine Corps were more likely than White servicemembers to be tried in general and special courts-martial following a recorded investigation, after controlling for other attributes. We generally found fewer statistically significant differences compared to the results of our analyses for all special and general courts-martial.
General and special courts-martial without a recorded investigation. Black servicemembers in all of the military services were more likely than White servicemembers to be tried in general and special courts-martial without a recorded investigation after controlling for other attributes. These differences were consistent with the differences we identified for general and special courts-martial overall, as shown in figure 2 above.
Hispanic servicemembers in the Army were more likely than White servicemembers to be tried in general and special courts-martial without a recorded investigation, but we found no statistically significant differences in the likelihood of Hispanic servicemembers to be tried in general and special courts-martial without a recorded investigation in the Marine Corps, the Navy, or the Air Force.

Black Servicemembers Were More Likely to Be Subject to Summary Courts-Martial and Nonjudicial Punishment in the Air Force and Marine Corps, and the Other Services Lack Data

Black servicemembers were more likely than White servicemembers to be tried in summary courts-martial and to be subjects of nonjudicial punishment in the Air Force and the Marine Corps, as shown in figure 4. The Army and the Navy did not maintain complete summary court-martial or nonjudicial punishment data, and the Coast Guard had too few summary courts-martial for us to analyze, and did not maintain complete nonjudicial punishment data. We could not determine whether disparities existed among servicemembers tried in summary courts-martial or subject to nonjudicial punishments in the Army and the Navy because the Army and the Navy did not collect complete summary court-martial or nonjudicial punishment data in their investigations, military justice, or personnel databases. Specifically, as part of our data reliability checks, we identified the total number of summary courts-martial that the Army and the Navy reported in the Court of Appeals for the Armed Forces annual reports for fiscal years 2013 through 2017, and compared these totals to the number of cases we identified in their military justice databases.
While our comparisons are not exact, due to differences in the dates we used to count the number of cases, we found that approximately 60 percent of the Army's reported summary courts-martial cases and less than 50 percent of the Navy's reported summary courts-martial cases were included in their military justice databases. The absence of complete summary court-martial data in the military justice databases of the Army and the Navy limits these services' visibility into any disparities that may exist among servicemembers involved in these types of military justice proceedings. On December 17, 2018, the General Counsel of the Department of Defense issued the uniform standards and criteria required by article 140a of the Military Justice Act of 2016. As part of these uniform standards, the military services were directed to collect certain information about all cases in their military justice databases, which a DOD official said includes summary court-martial cases. The DOD General Counsel directed that military services are to implement the Secretary's direction no later than December 23, 2020. Similarly, we identified the total number of nonjudicial punishments that the Army, the Navy, and the Coast Guard reported in the Court of Appeals for the Armed Forces annual reports for fiscal years 2013 through 2017, and compared these totals to the number of cases we identified in their military justice and personnel databases. As shown in figure 5 below, we found that 65 percent of the Army's reported nonjudicial punishments, 8 percent of the Navy's reported nonjudicial punishments, and 82 percent of the Coast Guard's reported nonjudicial punishments were recorded in their military justice databases.

Recommendation to include benefits and drawbacks of collecting and maintaining complete information for nonjudicial punishment.
In our May 2019 report, we made separate recommendations to the Army, the Navy, and the Coast Guard to consider the feasibility, to include the benefits and drawbacks, of collecting and maintaining complete information for all nonjudicial punishment cases in one of the military services' databases, such as information on the servicemembers' race, ethnicity, gender, offense, and punishment imposed. DOD and DHS concurred with these recommendations. As of October 2019, Army and Navy officials said that they were developing the capability to collect data on race, ethnicity, gender, offense, and punishment imposed for nonjudicial punishments. They expected to complete this action in December 2020. As of May 2019, the Coast Guard stated that it would consider the feasibility of collecting and maintaining complete information for all nonjudicial punishment cases through a military justice and personnel work group. The estimated completion date for this action had not been determined at that time.

Few Statistically Significant Racial Disparities Exist in Likelihood of Conviction or Severity of Punishment, but the Coast Guard Does Not Collect and Maintain Complete Data

We identified fewer statistically significant racial disparities in case outcomes—convictions and punishment severity. Among the servicemembers tried in general and special courts-martial, we found no statistically significant differences regarding the likelihood of conviction among racial groups in the Army, the Navy, the Marine Corps, and the Air Force, while controlling for other attributes, as shown in figure 6 below. In the military services that maintained complete punishment data—the Army, the Navy, the Marine Corps, and the Air Force—we found that minority servicemembers were either less likely to receive a more severe punishment in general and special courts-martial compared to White servicemembers, or there were no statistically significant differences in punishments among racial groups.
Specifically, as shown in figure 7, Black servicemembers were less likely to receive a more severe punishment in general and special courts-martial compared to White servicemembers in the Navy, but there was no statistically significant difference for Black servicemembers in the Marine Corps, the Army, and the Air Force. Additionally, there were no statistically significant differences for Hispanic servicemembers in the Navy, the Marine Corps, the Army, or the Air Force. We could not determine disparities in case outcomes—convictions and punishment severity—in the Coast Guard's general and special courts-martial for fiscal years 2013 through 2017 because the Coast Guard did not collect and maintain complete conviction and punishment data in its military justice database. Specifically, 16 percent of all Coast Guard cases were missing conviction and punishment data. When broken down by court-martial type, 20 percent of general court-martial cases, 15 percent of special court-martial cases, and 4 percent of summary court-martial cases were missing conviction and punishment data. Coast Guard officials acknowledged that incomplete conviction and punishment data entry is a consistent problem, but said that data entry had improved recently. On December 17, 2018, the General Counsel of the Department of Defense issued the uniform standards and criteria required by article 140a of the Military Justice Act of 2016. As part of these uniform standards, the military services were directed to collect information about the findings for each offense charged, and the sentence or punishment imposed. The DOD General Counsel directed that the military services are to implement the Secretary's direction no later than December 23, 2020.
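The data-completeness checks described above (comparing case totals reported in the Court of Appeals for the Armed Forces annual reports against counts found in each service's database) reduce to a simple ratio per service. The sketch below is illustrative only: the service names are real, but the case counts are hypothetical placeholders chosen to reproduce the report's percentages, not the underlying data.

```python
def completeness_report(reported, recorded, threshold=0.95):
    """Share of externally reported cases found in each service's database,
    flagging services whose share falls below `threshold`.

    reported / recorded: dicts mapping service name -> case counts.
    Returns {service: (share, flagged_incomplete)}.
    """
    out = {}
    for service, total in reported.items():
        share = recorded.get(service, 0) / total if total else 0.0
        out[service] = (round(share, 2), share < threshold)
    return out

# Hypothetical counts chosen to mirror the shares in the report
# (65 percent Army, 8 percent Navy, 82 percent Coast Guard).
print(completeness_report({"Army": 2000, "Navy": 1000, "Coast Guard": 500},
                          {"Army": 1300, "Navy": 80, "Coast Guard": 410}))
```

As the report notes, such comparisons are approximate when the reporting periods and database count dates do not line up exactly.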
DOD and the Military Services Have Conducted Some Assessments of Military Justice Disparities, but Have Not Studied the Causes of Disparities

DOD and the military services have taken some steps to study racial disparities in the military justice system over the last several decades, but they have not comprehensively studied the causes of any disparities. We previously reported in 1995 on DOD studies on discrimination and equal opportunity, and found DOD and the military services conducted seven reviews of racial disparities in discipline rates between 1974 and 1993. Since our 1995 report through 2016, DOD and military service assessments of military justice disparities have been limited. Officials in the Office of Diversity, Equity and Inclusion noted DOD has not conducted any department-wide assessments of racial disparities in military justice during this period. The military services' diversity offices also were not able to identify any service-specific reviews of disparities in military justice. However, DOD has conducted climate surveys to address servicemembers' perceptions of bias. In addition, the military services have some initiatives to examine and address disparities in military justice. For example, the Air Force routinely analyzes military justice data using a rates-per-thousand analysis to identify whether certain demographic groups are tried by courts-martial or subject to nonjudicial punishments at higher rates than others. These Air Force analyses found that Black servicemembers were more likely than White servicemembers to be subject to courts-martial and nonjudicial punishments from fiscal years 2013 through 2017, which is consistent with what we found. However, the other services do not routinely conduct such analyses. Officials from DOD and the military services acknowledged that they do not know the cause of the racial disparities that have been identified in the military justice system.
This is because they have not conducted a comprehensive evaluation to identify potential causes of these disparities and make recommendations about any appropriate corrective actions to remediate the cause(s) of the disparities.

Recommendation to identify causes of racial disparities in the military justice system.

In our May 2019 report, we recommended that the Secretary of Defense, in collaboration with the Secretaries of the military services and the Secretary of Homeland Security, conduct an evaluation to identify the causes of any disparities in the military justice system, and take steps to address the causes of these disparities as appropriate. DOD partially concurred with this recommendation, agreeing with the content, but requesting that we modify the recommendation to direct it to more appropriate entities. We made that change before the report was issued. In October 2019, DOD officials said that the department was exploring the feasibility of conducting a research project to delve into the differences in military justice data to inform implementation of this recommendation. At that time, they estimated that this research might be concluded in March 2021. In December 2019, the National Defense Authorization Act for Fiscal Year 2020 included a provision directing the Secretary of Defense to conduct an evaluation consistent with our recommendation. DOD was directed to commence or carry out these activities by June 2020. We believe that conducting a comprehensive analysis into the causes of disparities in the military justice system would better position DOD and the military services to identify actions to address disparities, and thus help ensure that the military justice system is fair and just, a key principle of the UCMJ.
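The rates-per-thousand analysis that the Air Force routinely performs, described above, can be expressed compactly. The sketch below is a hypothetical illustration of that kind of calculation (the group labels and counts are invented), not the Air Force's actual procedure or data.

```python
def rates_per_thousand(actions_by_group, population_by_group):
    """Number of military justice actions per 1,000 members of each group.

    actions_by_group: dict mapping group label -> count of actions
    (e.g., courts-martial or nonjudicial punishments).
    population_by_group: dict mapping group label -> group size.
    """
    return {group: 1000 * actions_by_group.get(group, 0) / population
            for group, population in population_by_group.items()}

# Hypothetical counts: group B is subject to actions at twice group A's rate.
print(rates_per_thousand({"A": 5, "B": 20}, {"A": 1000, "B": 2000}))
```

Normalizing by group size is what makes the comparison meaningful when demographic groups differ greatly in size, which is why raw case counts alone cannot reveal a disparity.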
In conclusion, our analysis of available data identified racial disparities in all of the military services for servicemembers with recorded investigations, and for four of the military services for trials in special and general courts-martial, but these disparities generally were not present in the convictions or punishments of cases. These findings show an association for disparities at particular stages of the military justice process, but are inconclusive regarding other stages for the period covered by our analysis. However, our findings of racial disparities, taken alone, do not establish whether unlawful discrimination has occurred, as that is a legal determination that would involve other corroborating information along with supporting statistics. The absence of complete nonjudicial punishment data in the Army, the Navy, and the Coast Guard limits their visibility into the vast majority of legal punishments imposed on servicemembers under the UCMJ every year. Without such data, these three military services will remain limited in their ability to assess or identify disparities among populations subject to this type of punishment. Our May 2019 report included several recommendations with specific actions that can be taken to better position DOD and the military services to identify and address disparities, such as (1) developing the capability to present race and ethnicity data from the military services’ personnel and investigations databases using the same categories as the military justice databases; (2) establishing criteria to determine when possible disparities among racial or ethnic groups should be further reviewed, and describing the steps that should be taken in such a review; and, importantly, (3) conducting a comprehensive evaluation of the causes of these disparities and taking steps to address them. 
To help build confidence that DOD is committed to a military justice system that is fair and just, and for the system of military law to be recognized as fair and just by both members of the armed forces and by the American public, DOD and the military services need to take actions to address these recommendations. Madam Chairwoman Speier, Ranking Member Kelly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, who may be reached at (202) 512-3604 or farrellb@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kimberly C. Seay, Assistant Director; Christopher Allison; Renee S. Brown; Vincent M. Buquicchio; Christopher Gezon; Won (Danny) Lee; Serena C. Lo; Dae B. Park; Samuel J. Portnow; and Clarice Ransom. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The Uniform Code of Military Justice (UCMJ) was established to provide a statutory framework that promotes fair administration of military justice. Every active-duty servicemember is subject to the UCMJ, with more than 258,000 individuals disciplined from fiscal years 2013-2017, out of more than 2.3 million unique active-duty servicemembers. A key principle of the UCMJ is that a fair and just system of military law can foster a highly disciplined force. This statement provides information on (1) the collection of race and ethnicity information in the military services' databases, (2) the extent of racial disparities in investigations, disciplinary actions, and case outcomes in the military justice system, and (3) steps taken by DOD to study any identified disparities. This statement is based on GAO-19-344, issued on May 30, 2019. As part of that work, GAO analyzed data from the investigations, military justice, and personnel databases of the military services, including the Coast Guard, from fiscal years 2013-2017 and interviewed agency officials.

What GAO Found

In May 2019, GAO found that the military services did not collect consistent information about race and ethnicity in their investigations, military justice, and personnel databases. Thus, the military services are limited in their ability to identify disparities (i.e., instances in which a racial or ethnic group was overrepresented) in the military justice system. The military services were not required to, and thus did not, report demographic information that would provide greater visibility into potential disparities in their annual military justice reports. GAO's analysis of available data identified disparities in how likely servicemembers of different races were to be subjects of investigations recorded in military criminal investigative organization databases and tried in general and special courts-martial in particular.
For example, in three military services, Black servicemembers were about twice as likely as White servicemembers to be tried in general and special courts-martial. Racial disparities generally were not present in convictions or punishments. These findings show an association for disparities at particular stages of the military justice process, but are inconclusive regarding other stages. However, GAO's findings of racial disparities, taken alone, do not establish whether unlawful discrimination has occurred, as that is a legal determination that would involve other corroborating information and supporting statistics.

Note: These analyses, taken alone, should not be used to make conclusions about the presence of unlawful discrimination. These multivariate regression analysis results estimate whether a racial group is more likely or less likely to be the subject of an investigation or a trial in general or special courts-martial after controlling for race, gender, rank, and education, and in the Air Force, years of service. GAO made all racial comparisons to White servicemembers, and grouped individuals of Hispanic ethnicity together, regardless of race. The Other race category includes individuals who identified as American Indian/Alaska Native, Asian, Native Hawaiian/Other Pacific Islander, and multiple races.

The Department of Defense (DOD) has taken some steps to study disparities but has not comprehensively evaluated the causes of racial disparities in the military justice system. Doing so would better position DOD to identify actions to address disparities and to help ensure the military justice system is fair and just.

What GAO Recommends

GAO made 11 recommendations in prior work, including that the military services develop the capability to present consistent race and ethnicity data, and DOD and the Coast Guard include demographic information in military justice annual reports and evaluate the causes of disparities.
DOD and the Coast Guard generally concurred. Progress has been made in addressing some of the recommendations. Continued attention is needed to ensure that the remainder of these recommendations are addressed.
Background

VA's mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. In carrying out this mission, the department operates one of the largest health care delivery systems in America, providing health care to millions of veterans and their families at more than 1,500 facilities. The department's three major components—the Veterans Health Administration (VHA), the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA)—are primarily responsible for carrying out its mission. More specifically, VHA provides health care services, including primary care and specialized care, and it performs research and development to address veterans' needs. VBA provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance. Further, NCA provides burial and memorial benefits to veterans and their families.

VA Relies Extensively on IT

The use of IT is critically important to VA's efforts to provide benefits and services to veterans. As such, the department operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers, veteran-facing systems, benefits delivery systems, memorial services, and all other systems supporting the department's mission. The infrastructure is to provide for the data storage, transmission, and communications requirements necessary to ensure the delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans.
Toward this end, the department operates approximately 240 information systems, manages approximately 314,000 desktop computers and 30,000 laptops, and administers nearly 460,000 network user accounts for employees and contractors to facilitate providing benefits and health care to veterans. These systems are used for the determination of benefits, benefits claims processing, patient admission to hospitals and clinics, and access to health records, among other services. VHA’s systems provide capabilities to establish and maintain electronic health records that health care providers and other clinical staff use to view patient information in inpatient, outpatient, and long-term care settings. The department’s health information system—VistA—serves an essential role in helping the department to fulfill its health care delivery mission. Specifically, VistA is an integrated medical information system that was developed in-house by the department’s clinicians and IT personnel, and has been in operation since the early 1980s. The system consists of 104 separate computer applications, including 56 health provider applications; 19 management and financial applications; eight registration, enrollment, and eligibility applications; five health data applications; and three information and education applications. Within VistA, an application called the Computerized Patient Record System enables the department to create and manage an individual electronic health record for each VA patient. In June 2017, the former VA Secretary announced that the department planned to acquire the same Cerner electronic health record system that the Department of Defense (DOD) has acquired. VA’s effort—the Electronic Health Record Modernization (EHRM) program—calls for the deployment of a new electronic health record system at three initial sites in 2020, with a phased implementation of the remaining sites over the next decade. 
In addition, VBA relies on the Veterans Benefits Management System (VBMS) to collect and store information such as military service records, medical examinations, and treatment records from VA, DOD, and private medical service providers. In 2014, VA issued its 6-year strategic plan, which emphasizes the department's goal of increasing veterans' access to benefits and services, eliminating the disability claims backlog, and ending veteran homelessness. According to the plan, the department intends to improve access to benefits and services through the use of enhanced technology to provide veterans with access to more effective care management. The plan also calls for VA to eliminate the disability claims backlog by fully implementing an electronic claims process that is intended to reduce processing time and increase accuracy. Further, the department has an initiative under way that provides services, such as health care, housing assistance, and job training, to end veteran homelessness. Toward this end, VA is working with other agencies, such as the Department of Health and Human Services, to implement more coordinated data entry systems to streamline and facilitate access to appropriate housing and services.

VA Manages IT Resources Centrally

Since 2007, VA has been operating a centralized organization, OI&T, in which most key functions intended for effective management of IT are performed. This office is led by the Assistant Secretary for Information and Technology—VA's Chief Information Officer (CIO). The office is responsible for providing strategy and technical direction, guidance, and policy related to how IT resources are to be acquired and managed for the department, and for working closely with its business partners—such as VHA—to identify and prioritize business needs and requirements for IT systems. Among other things, OI&T has responsibility for managing the majority of VA's IT-related functions, including the maintenance and modernization of VistA.
As of January 2019, OI&T was comprised of about 15,800 staff, with more than half of these positions filled by contractors.

VA Is Requesting about $5.9 Billion for IT and a New Electronic Health Record System for Fiscal Year 2020

VA's fiscal year 2020 budget request includes about $5.9 billion for OI&T and its new electronic health record system. Of this amount, about $4.3 billion was requested for OI&T, which represents a $240 million increase over the $4.1 billion enacted for 2019. The request seeks the following levels of funding:

$401 million for new systems development efforts to support current health care systems platforms, and to replace legacy systems, such as the Financial Management System;

approximately $2.7 billion for the operations and maintenance of existing systems, which includes $327.3 million for infrastructure readiness that is to support the transition to the new electronic health record system; and

approximately $1.2 billion for administration.

Additionally, the department requested about $1.6 billion for the EHRM program. This amount is an increase of $496 million over the $1.1 billion that was enacted for the program for fiscal year 2019. The request includes the following: $1.1 billion for the contract with the Cerner Corporation to acquire the new electronic health record system, $161.8 million for program management, and $334.7 million for infrastructure support.

VA's Management of IT Has Contributed to High-Risk Designations

In 2015, we designated VA Health Care as a high-risk area for the federal government and noted that IT challenges were among the five areas of concern. In part, we identified limitations in the capacity of VA's existing systems, including the outdated, inefficient nature of certain systems and a lack of system interoperability—that is, the ability to exchange and use electronic health information—as contributors to the department's IT challenges related to health care.
Also, in February 2015, we added Improving the Management of IT Acquisitions and Operations to our list of high-risk areas. Specifically, federal IT investments were too frequently failing or incurring cost overruns and schedule slippages while contributing little to mission-related outcomes. We have previously reported that the federal government has spent billions of dollars on failed IT investments, including at VA. Our 2017 update to the high-risk report noted that VA had partially met our leadership commitment criterion by involving top leadership in addressing the IT challenges portion of the VA Health Care high-risk area; however, it had not met the action plan, monitoring, demonstrated progress, or capacity criteria. We have also identified VA as being among a handful of departments with one or more archaic legacy systems. Specifically, in our May 2016 report on legacy systems used by federal agencies, we identified two of VA's systems as being over 50 years old—the Personnel and Accounting Integrated Data system and the Benefits Delivery Network system. These systems were among the 10 oldest investments and/or systems that were reported by 12 selected agencies. Accordingly, we recommended that the department identify and plan to modernize or replace its legacy systems. VA addressed the recommendation in May 2018, when it provided a Comprehensive Information Technology Plan that showed a detailed roadmap for the key programs and systems required for modernization. The plan included time frames, activities to be performed, and functions to be replaced or enhanced. The plan also indicated that the Personnel and Accounting Integrated Data system and the Benefits Delivery Network system are to be decommissioned in quarters 3 and 4 of fiscal year 2019, respectively.
Our March 2019 update to our high-risk series noted that the rating for the leadership commitment criterion regressed, while the action plan criterion improved for the IT Challenges portion of the VA Health Care area. The capacity, monitoring, and demonstrated progress criteria remained unchanged. Our work continued to indicate that VA was not yet able to demonstrate progress in this area. Since its 2015 high-risk designation, we have made 14 new recommendations in the VA Health Care area, 12 of which were made since our 2017 high-risk report was issued. For example, in June 2017, to address deficiencies, we recommended that the department take six actions to provide clinicians and pharmacists with improved tools to support pharmacy services to veterans and reduce risks to patient safety. VA generally concurred with these recommendations; however, all of them remain open.

FITARA Is Intended to Help VA and Other Agencies Improve Their IT Acquisitions

Congress enacted FITARA in December 2014 to improve agencies' acquisitions of IT and enable Congress to better monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. The law applies to VA and other covered agencies. It includes specific requirements related to seven areas, including agency CIO authority, data center consolidation and optimization, risk management of IT investments, and government-wide software purchasing.

Agency CIO authority enhancements. CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing incremental development, as defined in capital planning guidance issued by the Office of Management and Budget (OMB), (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO.

Federal data center consolidation initiative.
Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing their data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved.

Enhanced transparency and improved risk management in IT investments. OMB and covered agencies are to make detailed information on federal IT investments publicly available, and department-level CIOs are to categorize their major IT investments by risk. Additionally, in the case of major investments rated as high risk for 4 consecutive quarters, the act required that the department-level CIO and the investment's program manager conduct a review aimed at identifying and addressing the causes of the risk.

Government-wide software purchasing program. The General Services Administration is to enhance government-wide acquisition and management of software and allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user.

Additionally, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act of 2016, or the "MEGABYTE Act," further enhanced CIOs' management of software licenses by requiring agency CIOs to establish an agency software licensing policy and a comprehensive software license inventory to track and maintain licenses, among other requirements. In June 2015, OMB released guidance describing how agencies are to implement FITARA.
This guidance is intended to, among other things:

assist agencies in aligning their IT resources with statutory requirements;

establish government-wide IT management controls that will meet the law's requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements;

clarify the CIO's role and strengthen the relationship between agency CIOs and bureau CIOs; and

strengthen CIO accountability for IT costs, schedules, performance, and security.

VA and Other Agencies Face Cybersecurity Risks

The federal approach and strategy for securing information systems is prescribed by federal law and policy. The Federal Information Security Modernization Act (FISMA) provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. In addition, the Federal Cybersecurity Enhancement Act of 2015 requires protecting federal networks through the use of federal intrusion prevention and detection capabilities. Further, Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, directs agencies to manage cybersecurity risks to the federal enterprise by, among other things, using the National Institute of Standards and Technology Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity framework). Federal agencies, including VA, and our nation's critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on IT systems and electronic data to carry out operations and to process, maintain, and report essential information. The security of these systems and data is vital to public confidence and national security, prosperity, and well-being. Because many of these systems contain vast amounts of personally identifiable information, agencies must protect the confidentiality, integrity, and availability of this information.
In addition, they must effectively respond to data breaches and security incidents when they occur. The risks to IT systems supporting the federal government and the nation’s critical infrastructure are increasing, including insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, and the emergence of new and more destructive attacks. Cybersecurity incidents continue to impact federal entities and the information they maintain. According to OMB’s 2018 annual FISMA report to Congress, agencies reported 35,277 information security incidents to DHS’s U.S. Computer Emergency Readiness Team in fiscal year 2017.

VA Has Made Limited Progress toward Addressing IT System Modernization Challenges

VA has made limited progress toward addressing the IT management challenges for three critical initiatives: VistA, the Family Caregiver Program, and VBMS. Specifically, the department has recently initiated its fourth effort to modernize VistA, but uncertainty remains regarding the program’s governance. In addition, although VA has taken steps to address our recommendations for the Family Caregiver Program and VBMS, the department has not fully implemented most of them.

VA Recently Initiated Its Fourth Effort to Modernize VistA

VA has pursued four efforts over nearly 2 decades to modernize VistA. These efforts—HealtheVet, the integrated Electronic Health Record (iEHR), VistA Evolution, and EHRM—reflect varying approaches that the department has considered to achieve a modernized health care system. Figure 1 shows a timeline of the four efforts that VA has pursued to modernize VistA since 2001.

In 2001, VA undertook its first VistA modernization project, the HealtheVet initiative, with the goals of standardizing the department’s health care system and eliminating the approximately 130 different systems used by its field locations at that time.
HealtheVet was scheduled to be fully implemented by 2018 at a total estimated development and deployment cost of about $11 billion. As part of the effort, the department had planned to develop or enhance specific areas of system functionality through six projects, which were to be completed between 2006 and 2012. In June 2008, we reported that the department had made progress on the HealtheVet initiative, but noted concerns with its project planning and governance. In June 2009, the Secretary of Veterans Affairs announced that VA would stop financing failed projects and improve the management of its IT development projects. Subsequently in August 2010, the department reported that it had terminated the HealtheVet initiative. In February 2011, VA began its second VistA modernization initiative, the iEHR program, in conjunction with DOD. The program was intended to replace the two separate electronic health record systems used by the two departments with a single, shared system. In addition, because both departments would be using the same system, this approach was expected to largely sidestep the challenges that had been encountered in trying to achieve interoperability between their two separate systems. Initial plans called for the development of a single, joint iEHR system consisting of 54 clinical capabilities to be delivered in six increments between 2014 and 2017. Among the agreed-upon capabilities to be delivered were those supporting laboratory, anatomic pathology, pharmacy, and immunizations. According to VA and DOD, the single system had an estimated life cycle cost of $29 billion through the end of fiscal year 2029. However, in February 2013, the Secretaries of VA and DOD announced that they would not continue with their joint development of a single electronic health record system. 
This decision resulted from an assessment of the iEHR program that the secretaries had requested in December 2012 because of their concerns about the program facing challenges in meeting deadlines, costing too much, and taking too long to deliver capabilities. In 2013, the departments abandoned their plan to develop the integrated system and stated that they would again pursue separate modernization efforts.

In December 2013, VA initiated its VistA Evolution program as a joint effort of VHA and OI&T. The program was to consist of a collection of projects and efforts focused on improving the efficiency and quality of veterans’ health care, modernizing the department’s health information systems, increasing the department’s data exchange and interoperability with DOD and private sector health care partners, and reducing the time it takes to deploy new health information management capabilities. Further, the program was intended to result in lower costs for system upgrades, maintenance, and sustainment. However, VA ended the VistA Evolution program in December 2018 to focus on its new electronic health record system acquisition.

In June 2017, VA’s Secretary announced a significant shift in the department’s approach to modernizing VistA. Specifically, rather than continue to use VistA, the Secretary stated that the department would acquire the same electronic health record system that DOD is implementing. In this regard, DOD awarded a contract to acquire a new integrated electronic health record system developed by the Cerner Corporation. According to the Secretary, VA decided to acquire this same product because it would allow all of VA’s and DOD’s patient data to reside in one system, thus enabling seamless care between the department and DOD without the manual and electronic exchange and reconciliation of data between two separate systems.
According to the Secretary, this fourth VistA modernization initiative is intended to minimize customization and system differences that currently exist within the department’s medical facilities, and ensure the consistency of processes and practices within VA and DOD. When fully operational, the system is intended to be a single source for patients to access their medical history and for clinicians to use that history in real time at any VA or DOD medical facility, which may result in improved health care outcomes. According to VA’s Chief Technology Officer, Cerner is expected to provide integration, configuration, testing, deployment, hosting, organizational change management, training, sustainment, and licenses necessary to deploy the system in a manner that meets the department’s needs.

To expedite the acquisition, in June 2017, the Secretary signed a “Determination and Findings” for a public interest exception to the requirement for full and open competition, and authorized VA to issue a solicitation directly to Cerner. Accordingly, the department awarded a contract to Cerner in May 2018 for a maximum of $10 billion over 10 years. Cerner is to replace VistA with a commercial electronic health record system. This new system is to support a broad range of health care functions that include, for example, acute care, clinical decision support, dental care, and emergency medicine. When implemented, the new system will be expected to provide access to authoritative clinical data sources and become the authoritative source of clinical data to support improved health, patient safety, and quality of care provided by VA. Further, the department has estimated that, as of November 2018, an additional $6.1 billion in funding, above the Cerner contract amount, will be needed to fund additional project management support supplied by outside contractors, government labor costs, and infrastructure improvements over a 10-year implementation period.
Deployment of the new electronic health record system at three initial sites is planned for March 2020, with a phased implementation of the remaining sites over the next decade. Each VA medical facility is expected to continue using VistA until the new system has been deployed at that location. After VA announced in June 2017 that it planned to acquire the Cerner electronic health record system, we testified in June 2018 that a governance structure had been proposed that would be expected to leverage existing joint governance facilitated by the Interagency Program Office. At that time, VA’s program officials had stated that the department’s governance plans for the new program were expected to be finalized in October 2018. However, the officials had not indicated what role, if any, the Interagency Program Office was to have in the governance process. This office has been involved in various approaches to increase health information interoperability since it was established by the National Defense Authorization Act for Fiscal Year 2008 to function as the single point of accountability for DOD’s and VA’s electronic health record system interoperability efforts. In September 2018, we recommended that VA clearly define the role and responsibilities of the Interagency Program Office in the governance plans for acquisition of the department’s new electronic health record system. The department concurred with our recommendation and stated that the Joint Executive Committee, a joint governance body comprised of leadership from DOD and VA, had approved a role for the Interagency Program Office that included providing expertise, guidance, and support for DOD, VA, and joint governance bodies as the departments continue to acquire and implement interoperable electronic health record systems. However, the department has not yet provided documentation supporting these actions and how they relate to VA’s governance structure for the new acquisition. 
In addition, the role described does not appear to position the office to be the single point of accountability originally identified in the National Defense Authorization Act for Fiscal Year 2008. We continue to monitor the department’s governance plans for the acquisition of the new electronic health record system and its relationship with the Interagency Program Office.

The Family Caregiver Program Has Not Been Supported by an Effective IT System

In May 2010, VA was required by statute to establish a program to support family caregivers of seriously injured post-9/11 veterans. In May 2011, VHA implemented its Family Caregiver Program at all VA medical centers across the country, offering caregivers an array of services, including a monthly stipend, training, counseling, referral services, and expanded access to mental health and respite care. In fiscal year 2014, VHA obligated over $263 million for the program.

In September 2014, we reported that the Caregiver Support Program office, which manages the program, did not have ready access to the types of workload data that would allow it to routinely monitor the effects of the Family Caregiver Program on VA medical centers’ resources due to limitations with the program’s IT system—the Caregiver Application Tracker. Program officials explained that this system was designed to manage a much smaller program and, as a result, the system has limited capabilities. Outside of obtaining basic aggregate program statistics, the program office was not able to readily retrieve data from the system that would allow it to better assess the scope and extent of workload problems at VA medical centers. Program officials also expressed concern about the reliability of the system’s data. The lack of ready access to comprehensive workload data impeded the program office’s ability to monitor the program and identify workload problems or make modifications as needed.
This runs counter to federal standards for internal control, which state that agencies should monitor their performance over time and use the results to correct identified deficiencies and make improvements. We also noted in our report that program officials told us that they had taken initial steps to obtain another IT system to support the Family Caregiver Program, but they were not sure how long it would take to implement. Accordingly, we recommended that VA expedite the process for identifying and implementing a system that would fully support the Family Caregiver Program. VA concurred with our recommendation and subsequently began taking steps to implement a replacement system. However, the department has encountered challenges related to the system implementation efforts. We have ongoing work to evaluate VA’s effort to acquire a new IT system to support the Family Caregiver Program.

Additional Actions Can Improve Efforts to Develop and Use the Veterans Benefits Management System

In September 2015, we reported that VBA had made progress in developing and implementing VBMS—its system for processing disability benefit claims—but also noted that additional actions could improve efforts to develop and use the system. Specifically, VBA had deployed the initial version of the system to all of its regional offices as of June 2013. Further, after initial deployment, it continued developing and implementing additional system functionality and enhancements to support the electronic processing of disability compensation claims. Nevertheless, we pointed out that VBMS was not able to fully support disability and pension claims, as well as appeals processing. While the Under Secretary for Benefits stated in March 2013 that the development of the system was expected to be completed in 2015, implementation of functionality to fully support electronic claims processing was delayed beyond 2015.
In addition, VBA had not produced a plan that identified when the system would be completed. Accordingly, holding VBA management accountable for meeting a time frame and demonstrating progress was difficult. Our report further noted that, even as VBA continued its efforts to complete the development and implementation of VBMS, three areas were in need of increased management attention: cost estimating, system availability, and system defects. We also noted in our report that VBA had not conducted a customer satisfaction survey that would allow the department to compile data on how users viewed the system’s performance and, ultimately, to develop goals for improving the system.

We made five recommendations to improve VA’s efforts to effectively complete the development and implementation of VBMS. VA agreed with four of the recommendations. In addition, the department has addressed one of the recommendations—that it establish goals for system response time and use the goals as the basis for reporting system performance. However, the department has not yet fully addressed our remaining recommendations to (1) develop a plan with a time frame and a reliable cost estimate for completing VBMS, (2) reduce the incidence of system defects present in new releases, (3) assess user satisfaction, and (4) establish satisfaction goals to promote improvement. Continued attention to these important areas can improve VA’s efforts to effectively complete the development and implementation of VBMS and, in turn, more effectively support the department’s processing of disability benefit claims.

VA Has Demonstrated Uneven Progress toward Implementing Key FITARA Provisions

FITARA included provisions for federal agencies to, among other things, enhance government-wide acquisition and management of software, improve the risk management of IT investments, consolidate data centers, and enhance CIOs’ authorities.
Since its enactment, we have reported numerous times on VA’s efforts toward implementing FITARA. VA’s progress toward implementing key FITARA provisions has been uneven. Specifically, VA issued a software licensing policy and has generated an inventory of its software licenses to inform future investment decisions. However, the department did not fully address requirements related to IT investment risk, data center consolidation, or CIO authority enhancement. VA has made progress in addressing federal software licensing requirements. In May 2014, we reported on federal agencies’ management of software licenses and stressed that better management was needed to achieve significant savings government-wide. Specifically regarding VA, we noted that the department did not have comprehensive policies that included the establishment of clear roles and central oversight authority for managing enterprise software license agreements, among other things. We also noted that it had not established a comprehensive software license inventory, a leading practice that would help the department to adequately manage its software licenses. The inadequate implementation of these and other leading practices in software license management was partially due to weaknesses in the department’s policies related to licensing management. Thus, we made six recommendations to VA to improve its policies and practices for managing licenses. For example, we recommended that the department regularly track and maintain a comprehensive inventory of software licenses and analyze the inventory to identify opportunities to reduce costs and better inform investment decision making. Since our 2014 report, VA has taken actions to implement all six recommendations. For example, the department implemented a solution to generate and maintain a comprehensive inventory of software licenses using automated tools for the majority of agency software license spending and/or enterprise-wide licenses. 
Additionally, the department implemented a solution to analyze agency-wide software license data, including usage and costs; and it subsequently identified approximately $65 million in cost savings over 3 years due to analyzing one of its software licenses. VA has made limited progress in addressing the FITARA requirements related to managing the risks associated with IT investments. In June 2016, we reported on risk ratings assigned to investments by CIOs. We noted that the department had reviewed compliance with risk management practices, but had not assessed active risks when developing its risk ratings. VA determined its ratings by quantifying and combining inputs such as cost and schedule variances, risk exposure values, and compliance with agency processes. Metrics for compliance with agency processes included those related to program and project management, project execution, the quality of investment documentation, and whether the investment was regularly updating risk management plans and logs. When developing CIO ratings, VA chose to focus on investments’ risk management processes, such as whether a process was in place or whether a risk log was current. Such approaches did not consider individual risks, such as funding cuts or staffing changes, which detail the probability and impact of pending threats to success. Instead, VA’s CIO rating process considered several specific risk management criteria: whether an investment (1) had a risk management strategy, (2) kept the risk register current and complete, (3) clearly prioritized risks, and (4) put mitigation plans in place to address risks. As a result, we recommended that VA factor active risks into its CIO ratings. We also recommended that the department ensure that these ratings reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. VA concurred with the recommendations and cited actions it planned to take to address them. 
VA has reported progress on consolidating and optimizing its data centers, although this progress has fallen short of targets set by OMB. Specifically, VA reported a total inventory of 415 data centers, of which 39 had been closed as of August 2017. While the department anticipated another 10 data centers would be closed by the end of fiscal year 2018, these closures fell short of the targets set by OMB. Further, while VA reported $23.61 million in data center-related cost savings and avoidances from 2012 through August 2017, the department did not realize further savings from the additional 10 data center closures. In addition, as of February 2017, VA reported meeting one of OMB’s five data center optimization metrics related to power usage effectiveness. Also, the department’s data center optimization strategic plan indicated that VA planned to meet three of the five metrics by the end of fiscal year 2018. Further, while OMB directed agencies to replace manual collection and reporting of metrics with automated tools no later than fiscal year 2018, the department had only implemented automated tools at 6 percent of its data centers. We have recommended that VA take actions to address data center savings goals and optimization performance targets identified by OMB. The department has taken actions to address these recommendations, including reporting data center consolidation savings and avoidance costs to OMB and updating its data center optimization strategic plan. However, the department has yet to address recommendations related to areas that we reported as not meeting OMB’s established targets, including implementing automated monitoring tools at its data centers. VA has made limited progress in addressing the CIO authority requirements of FITARA. Specifically, in November 2017, we reported on agencies’ efforts to utilize incremental development practices for selected major investments. 
We noted that VA’s CIO had certified the use of adequate incremental development for all 10 of the department’s major IT investments. However, VA had not updated the department’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, as we had recommended. As of October 2018, a VA official stated that the department was working to draft a policy to address our recommendation, but did not identify time frames for when all activities would be completed. In January 2018, we reported on the need for agencies to involve CIOs in reviewing IT acquisition plans and strategies. We noted that VA’s CIO did not review IT acquisition plans or strategies and that the Chief Acquisition Officer was not involved in the process of identifying IT acquisitions. Accordingly, we recommended that the VA Secretary ensure that the office of the Chief Acquisition Officer is involved in the process to identify IT acquisitions. We also recommended that the Secretary ensure that the acquisition plans or strategies are reviewed and approved in accordance with OMB guidance. The department concurred with the recommendations and, in a May 2018 update, provided a draft process map that depicted its forthcoming acquisition process. However, as of March 2019, this process had not yet been finalized and implemented. In August 2018, we reported that the department had only fully addressed two of the six key areas that we identified—IT Leadership and Accountability and Information Security. The department had partially addressed IT Budgeting, minimally addressed IT Investment Management, and had not at all addressed IT Strategic Planning or IT Workforce. Thus, we recommended that the VA Secretary ensure that the department’s IT management policies address the role of the CIO for key responsibilities in the four areas we identified. 
The department concurred with the recommendation and acknowledged that many of the responsibilities provided to the CIO were not explicitly formalized by VA policy.

VA’s Cybersecurity Management Lacks Key Elements

In December 2018, we reported on the effectiveness of the government’s approach and strategy for securing its systems. The federal approach and strategy for securing information systems is prescribed by federal law and policy, including FISMA and the presidential executive order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. Accordingly, federal reports describing agency implementation of this law and policy, and reports of related agency information security activities, indicated VA’s lack of effectiveness in its efforts to implement the federal approach and strategy. Our December 2018 report identified that the department was deficient or had material weaknesses in all four indicators of departments’ effectiveness in implementing the federal approach and strategy for securing information systems. Specifically, VA was not effective in the Inspector General Information Security Program Ratings, was found to have material weaknesses in the Inspector General Internal Control Deficiencies over Financial Reporting, did not meet CIO Cybersecurity Cross-Agency Priority Goal Targets, and had enterprises that were at risk according to OMB Management Assessment Ratings.

We reported on federal high-impact systems—those that hold sensitive information, the loss of which could cause individuals, the government, or the nation catastrophic harm—in May 2016. We noted that VA had implemented numerous controls, such as completion of risk assessments, over selected systems. However, the department had not always effectively implemented access controls, patch management, and contingency planning to protect the confidentiality, integrity, and availability of these high-impact systems.
These weaknesses existed in part because the department had not effectively implemented elements of its information security program. We made five recommendations to VA to improve its information security program. The department concurred with the recommendations and, as of March 2019, had implemented three of the five recommendations. Our March 2019 report on the federal cybersecurity workforce indicated that VA was not accurately categorizing positions to effectively identify critical staffing needs. The Federal Cybersecurity Workforce Assessment Act of 2015 required agencies to assign the appropriate work role codes to each position with cybersecurity, cyber-related, and IT functions. Agencies were to assign a code of “000” only to positions that did not perform IT, cybersecurity, or cyber-related functions. As we reported, VA had assigned a “000” code to 3,008 (45 percent) of its 6,636 IT positions. Human resources and IT officials from the department stated that they may have assigned the “000” code in error and that they had not completed the process to validate the accuracy of their codes. We recommended that VA take steps to review the assignment of the “000” code to any of the department’s positions in the IT management occupational series and assign the appropriate work role codes. VA concurred with the recommendation and indicated that it was in the process of conducting a cyber coding review. In conclusion, VA has long struggled to overcome IT management challenges, which have resulted in a lack of system capabilities needed to successfully implement critical initiatives. In this regard, VA is set to begin deploying its new electronic health record system in less than 1 year and questions remain regarding the governance structure for the program. Thus, it is more important than ever for the department to ensure that it is managing its IT budget in a way that addresses the challenges we have identified in our previous reports and high-risk updates. 
If the department continues to experience the challenges that we have previously identified, it may jeopardize its fourth attempt to modernize its electronic health record system. Additionally, the department has been challenged in fully implementing provisions of FITARA, which has limited its ability to improve its management of IT acquisitions. Until the department implements the act’s provisions, Congress will be unable to effectively monitor VA’s progress and hold it accountable for reducing duplication and achieving cost savings. Further, the lack of key cybersecurity management elements at VA is concerning given that agencies’ systems are increasingly susceptible to the multitude of cyber-related threats that exist. As VA continues to pursue modernization efforts, it is critical that the department take steps to adequately secure its systems.

Chair Lee, Ranking Member Banks, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.

GAO Contact and Staff Acknowledgments

If you or your staffs have any questions about this testimony, please contact Carol C. Harris, Director, Information Technology Management Issues, at (202) 512-4456 or harrisc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this testimony are Mark Bird (Assistant Director), Eric Trout (Analyst in Charge), Justin Booth, Rebecca Eyler, Katherine Noble, Scott Pettis, Christy Tyson, and Kevin Walsh.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The use of IT is crucial to helping VA effectively serve the nation's veterans. Each year the department spends billions of dollars on its information systems and assets. However, VA has experienced challenges in managing its IT programs, raising questions about its ability to deliver intended outcomes needed to help advance the department's mission. To improve federal agencies' IT acquisitions, in December 2014 Congress enacted FITARA. GAO has previously reported on IT management challenges at VA, as well as its progress in implementing FITARA and cybersecurity requirements. GAO was asked to summarize key results and recommendations from its work at VA that examined systems modernization efforts, FITARA implementation, and cybersecurity efforts. To do so, GAO reviewed its recently issued reports and incorporated information on the department's actions in response to GAO's recommendations.

What GAO Found

The Department of Veterans Affairs (VA) has made limited progress toward addressing information technology (IT) system modernization challenges. From 2001 through 2018, VA pursued three efforts to modernize its health information system—the Veterans Health Information Systems and Technology Architecture (VistA). However, these efforts experienced high costs, challenges to ensuring interoperability of health data, and ultimately did not result in a modernized VistA. Regarding the department's fourth and most recent effort, the Electronic Health Record Modernization, GAO recently reported that the governance plan for this program was not yet defined. VA has not fully implemented GAO's recommendation calling for the department to define the role of a key office in the governance plans. The Family Caregiver Program, which was established to support family caregivers of seriously injured post-9/11 veterans, has not been supported by an effective IT system.
Specifically, GAO reported that, due to limitations with the system, the program office did not have ready access to the types of workload data that would allow it to routinely monitor workload problems created by the program. GAO recommended that VA expedite the process for identifying and implementing an IT system. Although the department concurred with the recommendation, VA has not yet fully addressed it. VA had developed the Veterans Benefits Management System—its system that is used for processing disability benefit claims; however, the system did not fully support disability and pension claims, as well as appeals processing. GAO made five recommendations for VA to improve its efforts to effectively complete the development and implementation of the system. The department concurred with the recommendations but has implemented only one thus far. VA has demonstrated uneven progress toward fully implementing GAO's recommendations related to key Federal Information Technology Acquisition Reform Act (FITARA) provisions. Specifically, VA has implemented all six recommendations in response to GAO's 2014 report on managing software licenses, leading to, among other things, savings of about $65 million over 3 years. However, the department has not fully addressed two recommendations from GAO's 2016 report on managing the risks of major IT investments. Further, the department has not implemented (1) two of four recommendations related to its effort to consolidate data centers and (2) GAO's four recommendations to increase the authority of its Chief Information Officer. VA's management of cybersecurity has also lacked key elements. For example, GAO reported in May 2016 that VA had established numerous security controls, but had not effectively implemented key elements of its information security program. 
In addition, as GAO reported in March 2019, the department had not accurately categorized positions to effectively identify critical staffing needs for its cybersecurity workforce. VA has implemented three of six cybersecurity-related recommendations from these two reports.

What GAO Recommends
GAO has made numerous recent recommendations to VA aimed at improving the department's IT management. VA has generally agreed with the recommendations and has taken steps to address them; however, the department has fully implemented less than half of them. Fully implementing all of GAO's recommendations would help VA ensure that its IT effectively supports the department's mission.
Background
Opioids, such as hydrocodone and oxycodone, can be prescribed to treat both acute and chronic pain. Opioids can pose serious risks when they are misused. These risks include addiction, overdose, and death. As a result, opioids are classified as controlled substances, which means that their use and disposal are subject to additional oversight by DEA. Some studies suggest that the majority of patients who receive prescriptions for opioids do not use a large portion of the drugs dispensed. A study that surveyed U.S. adults who had received opioids found that approximately 60 percent of patients who were no longer using the medication had unused opioids. Two studies reported that over one-half of patients did not use all of the opioids prescribed to them after surgery; these studies found that patients reported leaving 15 to 20 pills unused, representing 54 percent to 72 percent of the opioids they were prescribed. Another study on patient opioid use after a cesarean section and thoracic surgery found that most patients, 83 percent and 71 percent respectively, used less than half of the total opioids they were prescribed.

Federal Authorities
There is no federal law or regulation imposing requirements for how patients are to dispose of unused opioids. However, DEA, FDA, and EPA all have authorities and initiatives related to patient disposal of opioids. DEA regulations specify three take-back options that patients can opt to use to dispose of their unused controlled substances: take-back events, permanent collection sites, and mail-back programs. DEA hosts semiannual events called National Prescription Drug Take-Back Days, where temporary collection sites are set up in locations such as police stations. Advertisements encourage community participation in the events and educate the community on safe disposal of unused medications, including opioids.
DEA also registers collectors and provides information to the public about the location of permanent collection sites for take-back, such as at local retail pharmacies or hospital pharmacies, and sets requirements for the provision of postage-paid envelopes that patients can use to mail unused drugs to a collector for destruction. DEA regulations establish a standard for the destruction of controlled substances that applies to DEA registrants, which can destroy opioids on patients’ behalf. DEA registrants include pharmaceutical companies that manufacture controlled substances, health care providers who prescribe them, and pharmacies that dispense them. The standard for destruction requires that controlled substances maintained or collected by DEA registrants be rendered non-retrievable. This means that the physical and chemical conditions of the controlled substance must be permanently altered, thereby rendering the controlled substance unavailable and unusable for all practical purposes. According to DEA, as of May 2019, the only method currently used to meet this standard is incineration, and DEA rulemaking states that DEA will not evaluate, review, or approve methods used to render a controlled substance non-retrievable. FDA has broad authority under the Federal Food, Drug, and Cosmetic Act to evaluate whether a drug is safe and effective and ensure the benefits of drugs outweigh the risks. FDA may require manufacturers to develop a risk evaluation and mitigation strategy (REMS) for drugs with serious safety risks, including the risk of abuse, to ensure that the benefits outweigh the risks. Under one REMS, for example, manufacturers of opioids intended for outpatient use must make training available to health care providers involved in the treatment and monitoring of patients who receive opioids. 
The training must contain certain elements, including how providers should counsel patients and caregivers about the safe use and disposal of these opioids, among other things. In October 2018, the SUPPORT Act authorized FDA to, at its discretion, require specific packaging or disposal systems as a part of certain drugs’ REMS. For drugs with a serious risk of overdose or abuse, FDA may require the drug to be made available for dispensing to certain patients with “safe disposal packaging” or a “safe disposal system” for purposes of rendering the drug non-retrievable in accordance with DEA regulations. Before imposing these requirements, FDA must consider the potential burden on patient access to the drug and the health care delivery system. As of May 2019, FDA had not imposed any REMS requirements using the new SUPPORT Act authority. Under the Resource Conservation and Recovery Act (RCRA), EPA has authority to regulate the generation, transportation, treatment, storage, and disposal of hazardous waste, including certain discarded opioids. However, hazardous waste pharmaceuticals generated by households are not regulated as hazardous waste even if the waste would otherwise be considered hazardous. Opioids and other household waste pharmaceuticals collected through a take-back option are also exempt from most hazardous waste regulations, provided certain conditions are met. Some states and localities have imposed additional requirements for pharmaceutical disposal, such as requirements for drug manufacturers to manage or fund the disposal of collected household pharmaceuticals.

Federal Agencies Recommend Take-Back Options as the Preferred Disposal Method
Federal Agencies Recommend Take-Back Options Whenever Feasible, Followed by Disposal Using the Toilet or Trash
According to DEA, FDA, and EPA, patients should use take-back options to dispose of unused opioids, whenever feasible.
If take-back options are not feasible, FDA recommends flushing opioids on FDA’s flush list down the toilet to remove them from the home as soon as possible. For opioids not on the flush list, the agencies recommend placing the drugs in the household trash mixed with an unpalatable substance. (See fig. 1). Officials from FDA said that the primary goal of these recommendations is to remove dangerous substances from the home as soon as possible to reduce accidental poisoning, which also may address issues related to intentional misuse. FDA officials explained that the agency has not measured the effects of its recommendations for disposing of opioids on opioid misuse, as of May 2019, because it is difficult to establish a causal link between the recommendations and any reductions in misuse. DEA, FDA, and EPA recommend using a take-back option as the preferred method for patients to dispose of unused prescription opioids. Under this method, patients can bring unused opioids to DEA’s semiannual take-back events or to DEA-registered permanent collection sites, or use mail-back to deliver opioids to a DEA-registered collector for destruction. When patients use these take-back options, the drugs they dispose of are ultimately incinerated, which is the only method that DEA officials said is known to render the drugs non-retrievable, that is, permanently and irreversibly destroyed. Our analysis of DEA and U.S. Census Bureau data shows that as of April 2019, 71 percent of the country’s population lived less than 5 miles from a permanent collection site, and in 42 states, at least half of the population lived within 5 miles of a site. (See fig. 2). This number has increased since our April 2017 report, when we found that about half of the country’s population lived less than 5 miles away from a site. Our analysis also shows that 90 percent of the population lived within 15 miles of a site, though in rural areas only 57 percent lived within 15 miles.
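The population-proximity figures above come from GAO's analysis of DEA collection-site locations and Census Bureau data. A minimal sketch of how such a proximity analysis can be computed is shown below; the coordinates and populations are hypothetical, and this illustrates the general technique rather than GAO's actual methodology or data.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def share_within(block_centroids, sites, radius_miles):
    """Fraction of total population living within radius_miles of any site.

    block_centroids: list of (lat, lon, population) tuples
    sites: list of (lat, lon) tuples
    """
    covered = total = 0
    for lat, lon, pop in block_centroids:
        total += pop
        if any(haversine_miles(lat, lon, s_lat, s_lon) < radius_miles
               for s_lat, s_lon in sites):
            covered += pop
    return covered / total if total else 0.0

# Hypothetical data: three census-block centroids and one collection site.
blocks = [(38.90, -77.04, 60_000), (38.95, -77.10, 30_000), (39.50, -78.00, 10_000)]
sites = [(38.91, -77.03)]
print(round(share_within(blocks, sites, 5), 2))  # → 0.9
```

In the sketch, the two blocks near the site fall inside the 5-mile radius, so 90 percent of the hypothetical population counts as covered; a national analysis would apply the same test to every census-block centroid against every registered site.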
In addition, two studies found that patients were willing to bring unused opioids to a take-back location as long as it was located within 5 to 8 miles of their home address. If take-back options are not feasible, FDA recommends flushing the opioids on its flush list down the toilet, because a single dose can be fatal to a child or a pet. Flushing is a permanent way to remove opioids from the home. FDA confirmed that as of June 2019, 11 of 14 drugs on the flush list are opioids, which represents about three-quarters of the approved opioid active ingredients intended for outpatient use (see sidebar). Some portion of drugs that are flushed down the toilet ultimately enter surface and wastewater streams. However, a 2017 FDA study on the environmental impact of drugs listed on the flush list concluded that flushing these opioids has negligible effects on the environment and human health, particularly relative to the amount of opioids that are excreted after taking them as prescribed, because not all of the drug is metabolized. (See text box for a summary of the effects of disposal options on the environment.)

Environmental Effects of Disposal Options
The environmental impact of opioid disposal depends on the method used—take-back options, flushing, or trash. According to Environmental Protection Agency (EPA) and Drug Enforcement Administration (DEA) officials, disposal of drugs through take-back options results in disposal by permitted incineration, which fully destroys the active form of the drugs. EPA officials told us that flushing or placing opioids in the trash can introduce active opioids into wastewater streams, groundwater, and surface waters. Incineration of Drugs from Take-Back Options. Opioids disposed of using take-back options are destroyed by incineration, which, according to DEA officials, is the only method currently used to meet its non-retrievable standard for destruction.
EPA officials told us that based on data from DEA, the amount of household pharmaceutical waste gathered and incinerated during DEA’s semiannual take-back events is small compared to the total amount of waste one incinerator burns on an average day. EPA officials recommended take-back options as the preferred method of opioid disposal. Flushing. Opioids enter the water supply when excreted by patients who take opioids as prescribed and when patients intentionally flush unused opioids down the toilet. EPA officials told us that most wastewater treatment facilities are not designed to eliminate opioids from wastewater streams. Further, measurable concentrations of opioids have been reported in surface and ground water sources around the world. Trash. Disposal of unused opioids in the trash often introduces opioids into landfills. Studies in scientific literature show that pharmaceutical ingredients have been observed in the water that passes through landfills, called leachate. Similar to opioids that are flushed, opioids in landfill leachate can end up in wastewater streams and other water sources, according to EPA officials.

Household Trash
If an opioid is not on the FDA flush list and a take-back option is not feasible, the agencies direct patients to take a series of steps to dispose of their opioids in household trash by: (1) mixing the drugs in an unpalatable substance such as dirt, cat litter, or used coffee grounds, (2) placing the mixture in a sealed container or plastic bag, and (3) throwing the container in the trash. An EPA official said that mixing the drugs with an unpalatable substance is meant to deter misusers from searching through the trash to retrieve the drugs. Disposal of opioids in the trash—either with an unpalatable substance or in-home disposal product—removes them from the home, but this option may not be permanent and the drugs still may be available for misuse.
Drugs that are disposed of in the trash ultimately are introduced to landfills, where they can escape landfill containment and enter wastewater streams or ground water sources.

FDA Has Not Evaluated Commercial Disposal Methods
FDA’s website notes the availability of commercial products for disposing of unused opioids and other drugs in the home. FDA officials stated that, as of May 2019, the agency had not evaluated the effectiveness of these products or made any recommendations related to their use, but they are aware that patients may opt to use these products. These products, known as in-home disposal products, are proprietary substances that patients can mix with their unused drugs, including opioids, before disposing of them in the trash. In-home disposal product vendors told us they sell or donate their products to pharmacies, local law enforcement, and community groups, which then distribute them to patients. A representative from a group that distributes these products, the AmerisourceBergen Foundation, noted that in-home disposal products may be a convenient option for patients for whom take-back options are not feasible, and marketing materials from a product vendor instruct patients to use their product if a take-back option is not available. Vendors indicate that their products can prevent misuse of opioids by rendering drugs non-retrievable at home and by motivating patients to dispose of unused opioids. According to DEA officials, rendering opioids non-retrievable by using an in-home disposal product is challenging, because the drugs have a variety of chemical and physical properties and potencies. Furthermore, according to DEA officials, a lethal dose of fentanyl can be as low as 250 micrograms in adults—and lower in children—underscoring the importance of effective disposal. Some vendors have presented evaluations of their commercial products.
A recent comprehensive review of eight in-home disposal products raised concerns about the credibility of vendors’ evaluations and concluded that additional independent laboratory analysis is needed to fully examine product performance and assess how well these products achieve stated goals. Our review of evaluations from three vendors found that the studies contained some inconsistencies and gaps in the evaluation methods used, raising questions about the studies’ conclusions that the products are effective for disposing of opioids. In some cases, studies included detailed, but inconsistent, methods. For example, in four studies about one product, the researchers concluded that the product deactivated most of an opioid dissolved in water. However, one of the earlier studies reported that whole pills did not dissolve in water, which could impact the results, but later studies did not include similar data. In other cases, companies’ evaluations were summaries of results that did not provide enough information to independently verify or assess whether the products deactivate opioids and prevent misuse. For example, one company’s research documents presented images of a mixture as evidence that the drugs had degraded, rather than results of a test measuring if drugs were still detectable. In addition, the studies included little information about the products’ effectiveness at treating mixtures of multiple drugs at the same time, a scenario that stakeholders have referred to as “real world” use testing.

Few Patients Use Federally Recommended Opioid Disposal Methods; FDA and Others Have Taken Steps to Educate the Public
Few Patients Use Federally Recommended Methods to Dispose of Unused Opioids
Disposal methods—when patients use them promptly—remove unused opioids from the home and therefore can be effective at reducing opioid misuse.
FDA officials said that the federally recommended methods for disposing unused opioids are intended to remove these substances from the home as soon as possible, and stated that as long as individuals dispose of opioids promptly rather than storing them, then FDA has achieved its goal. However, the studies we reviewed suggest that most patients do not dispose of unused opioids using a federally recommended method. Specifically, three studies examined how patients disposed of unused opioids and found that between 12 percent and 41 percent of patients disposed of them using a federally recommended method. For example, one of the studies found that of 570 survey respondents who had unused opioids, 12 percent of respondents reported using a take-back option, 14 percent reported that they flushed them down the toilet, and 6 percent reported that they threw them in the trash after mixing with an unpalatable substance. Other studies we reviewed show that take-back options are often used to dispose of drugs other than opioids. Two studies found that less than 10 percent of the catalogued drugs brought to DEA take-back days were controlled substances, which included opioids, while another study weighed drugs brought to take-back events and permanent collection sites and reported less than 3 percent were controlled substances, including opioids. The same study found that annually, controlled substances disposed of at take-back events and permanent collection sites accounted for about 0.3 percent of those dispensed in the area, and concluded that take-back events may have a minimal impact on reducing the availability of unused opioids for misuse. Studies indicate that patients who receive an in-home disposal product may be more likely to dispose of unused opioids, but they may also be less likely to use federally recommended options like take-back or flushing. 
Two studies in our review found that patients who received an in-home disposal product reported being more likely to dispose of unused opioids than those who did not receive the product. Use of in-home disposal products—which may not be effective at permanently destroying drugs—may deter patients from using federally recommended options, like take-back, that have been proven effective. For example, one of these studies found that only one of the 70 patients who received an in-home disposal product used a take-back option for disposal, despite the study taking place in a state where we estimated that 77 percent of the population lived less than 5 miles from a permanent collection site. Studies indicate that patients are often unaware of federally recommended disposal options. Three of the 25 studies we reviewed suggest that many patients were not aware of federally recommended methods for disposing of opioids. For example, a study of cancer patients who received opioid prescriptions reported that more than three-quarters of these patients were unaware of proper opioid disposal methods. Another 2016 study of 1,032 patients found that nearly half of the respondents did not recall receiving information on proper disposal from pharmacists, medication packaging, or media outlets. Studies also indicate that patients choose not to dispose of unused opioids, and that they knowingly participate in the majority of opioid misuse. Five of the studies we reviewed found that between one-quarter and three-quarters of patients stored unused opioids for future use or had misplaced their unused opioids. For example, one of these studies found that 49 percent of survey respondents kept or planned to keep unused opioids for future use, and 14 percent were likely to let a family member use their opioid medications in the future. Federal data about the sources of misused opioids indicate that patients are complicit with most misuse.
SAMHSA estimates that 5 percent of people nationwide who misused opioids in 2017 took these drugs from someone else without asking. In contrast, SAMHSA estimates that 85 percent of opioid misuse occurs with the patient’s knowledge or active participation, either through the patient misusing his or her own prescription by taking the drug for pain other than for which it was prescribed or by giving or selling the prescribed opioids to another person. (See fig. 3).

FDA and Others Have Taken Steps to Educate Patients and Providers about Appropriate Opioid Disposal
To motivate patients to use federally recommended methods to dispose of unused opioids, FDA and some physician organizations have created educational materials on safe disposal methods. For example, FDA launched a public awareness campaign called “Remove the Risk” on April 25, 2019—complete with educational materials such as public service announcements, social media posts, fact sheets, and other web-based content. AMA representatives reported that the AMA has provided physicians with educational material on drug disposal and prescribing. Specifically, AMA representatives told us that the association has compiled a two-page document for physicians containing information about drug disposal, links to DEA information on nearby permanent collection sites and take-back events, and FDA guidance on safe disposal of medications. This document included recommendations for physicians to talk to patients about safe use of prescription opioids, remind patients to store their medications in a safe place out of reach from children, and have a conversation with patients about the most appropriate ways to dispose of expired, unwanted, or unused opioids. The AmerisourceBergen Foundation has also partnered with communities to promote safe opioid disposal by providing education about take-back options and commercial in-home disposal products to patients.
A representative from the Foundation explained that its Safe Disposal Support Program provides non-profit organizations or municipalities with commercial in-home disposal products, which then can be distributed free of charge to other organizations, individuals, or households. It also recommends that patients use take-back options when available. The representative said that organizations are to demonstrate to patients how these products work either through a brief in-person demonstration at an event or through a video. According to the representative, these products and demonstrations help people reflect on what is in their home and needs to be disposed of, either using a product or a take-back option. Despite such efforts, little is known about the extent to which stakeholders’ efforts to educate the public are effective in increasing use of federally recommended disposal methods. FDA officials said that they are not aware of the extent to which providers are familiar with all disposal methods or the extent to which providers discuss the importance of proper disposal with patients. As part of FDA’s REMS requirements for outpatient opioids, manufacturers must make training available to health care providers involved in the treatment and monitoring of patients who receive opioids, which includes information about the need to communicate with patients about disposal of unused drugs. FDA officials said that opioid manufacturers must assess the effectiveness of their REMS, including an assessment of prescribers’, other health care providers including pharmacists’, and patients’ understanding of the key risk messages conveyed through the educational materials. FDA expects to receive the next REMS assessment with the results of these analyses in 2020. The AMA has not been able to measure the effects of its recommendations, but provided anecdotal feedback from its members that many physicians do not consistently speak to their patients about disposal. 
FDA officials and AMA representatives indicated that in addition to educating patients on opioid disposal methods, focusing efforts on reducing the amount of unused opioids would be an effective approach for reducing misuse and abuse. For example, FDA officials said that adding packaging configurations that contain smaller quantities of certain opioids could help prescribers to more carefully consider the amount of opioid pain medication they prescribe. This in turn may reduce the number of unused opioids available in the home that could be inappropriately accessed by family members or visitors, and could potentially reduce the risk for misuse and abuse. Representatives from the AMA explained that it and other organizations are working to provide opioid prescribing resources and guidance to help physicians effectively manage patients’ pain, which representatives said will reduce the number of unused opioids available for misuse. FDA officials and a researcher also noted that dispensing opioids in packaging that makes it easy to count the number of unused pills may help patients identify intentional misuse.

Agency Comments
The FDA and EPA provided technical comments on a draft of this report, which we incorporated as appropriate; the DEA did not have comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Administrator of the DEA, the Administrator of the EPA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.
Appendix I: GAO Contact and Staff Acknowledgments GAO Contact James Cosgrove, (202) 512-7114 or cosgrovej@gao.gov. Staff Acknowledgments In addition to the contact named above, individuals making key contributions to this report include Leslie V. Gordon (Assistant Director), A. Elizabeth Dobrenz (Analyst-in-Charge), Sam Amrhein, Jieun Chang, Diana Chung, Kaitlin Farquharson, and Dennis Mayo. Also contributing were Giselle Hicks, Cynthia Khan, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study
In 2017, an estimated 11.1 million Americans misused a prescription pain reliever, which included opioids. This misuse contributes to opioid abuse and deaths, which quintupled from 1999 to 2017; about 17,000 people died from prescription opioid overdoses in 2017. Government agencies and stakeholders have attempted to address the potential for misuse and abuse by facilitating safe disposal of unused prescription opioids and other drugs. The SUPPORT for Patients and Communities Act enacted in 2018 included a provision for GAO to review patient disposal of unused opioids, among other things. This report examines (1) federally recommended and other available methods patients may use to dispose of unused prescription opioids, and (2) what is known about patients' use of these methods. To do this work, GAO examined peer-reviewed, academic literature on outcomes for prescription opioid disposal; reviewed federal agency documentation; interviewed federal agency officials, independent researchers, and stakeholder group representatives—such as those from the American Medical Association; and analyzed DEA data as of April 2019 on permanent drug collection sites. GAO also interviewed representatives of three companies that manufacture commercial in-home disposal products and reviewed publicly available documents about these products.

What GAO Found
The Food and Drug Administration (FDA), Drug Enforcement Administration (DEA), and Environmental Protection Agency (EPA) recommend that patients dispose of unused prescription opioids by bringing them to DEA-registered collection sites or a DEA take-back event, or using mail-back programs. As of April 2019, 70 percent of the U.S. population lived less than 5 miles from permanent collection sites, which are often located at pharmacies.
If collection sites, take-back events, or mail-back programs are not feasible, FDA recommends quickly and permanently removing the most dangerous prescription opioids, such as hydrocodone and fentanyl, from the home by flushing them down the toilet. For all other prescription opioids, the agencies recommend disposal in the trash after mixing them with unpalatable substances, such as cat litter. Commercial products to facilitate in-home disposal also exist, and FDA is aware that patients may opt to use these products for disposal in the trash. Available studies suggest that many patients are unaware of federally recommended disposal methods or choose not to dispose of unused prescription opioids. For example, five studies found that between one-quarter and three-quarters of patients stored unused opioids for future use or had misplaced their unused opioids. Further, federal data indicate that 85 percent of intentional misuse occurs with the patient's knowledge—for example, when a patient sells or gives away unused prescription opioids. To educate and motivate patients to dispose of unused opioids, FDA launched a public awareness campaign called “Remove the Risk” in April 2019. Also, FDA and other stakeholders have created educational materials for patients and providers on safe opioid disposal.
Background
VA pays monthly disability compensation to veterans with service-connected disabilities according to the severity of the disability. VA’s disability compensation claims process starts when a veteran submits a claim to VA. A claims processor then reviews the claim and helps the veteran gather the relevant evidence needed to evaluate the claim. Such evidence includes the veteran’s military service records, medical exams, and treatment records from VHA medical facilities and private medical service providers. If necessary to provide support to substantiate a claim, VA will also provide a medical exam for the veteran, either through a provider at a VHA medical facility or through a VBA contractor. According to VBA officials, VBA monitors a VHA facility’s capacity to conduct exams, and in instances when the facility may not have capacity to conduct a timely exam, VBA will send an exam request to one of its contractors instead. Once the contractor accepts the exam request from VBA, it assigns a contracted examiner to conduct the exam and complete an exam report designed to capture essential medical information for purposes of determining entitlement to disability benefits. The contractors send the completed report to VBA, which uses the information as part of the evidence to evaluate the claim and determine whether the veteran is eligible for benefits. In 2016, VBA established an exam program office to manage and oversee contractors, monitor their performance, and ensure that they meet contract requirements. For example, in 2018 we reported that the contracts require that contractors develop plans outlining how they will ensure examiners are adequately trained. We also reported that contractors are required to provide VBA with monthly exam status reports, which include the number of canceled, rescheduled, and completed exams, among other things.
VBA also has an office dedicated to completing quality reviews of contractors’ exam reports, which are used to assess contractor performance. VBA awarded new contracts in 2018, in part, because it wanted to update performance measures for its contractors and to change how contractors were assigned to each region throughout the country, according to agency officials. For example, officials said that the agency restructured the service areas in its contracts from five U.S. geographic districts to four to balance the number of rural and urban areas contained in each region. In doing so, they said that VBA’s goal was to distribute exams in rural areas, where it can be more challenging to find examiners, more evenly across all contractors.

Incomplete Information on Quality and Timeliness Continues to Affect VBA’s Oversight of Contractors’ Performance
VBA has not fully resolved issues in collecting information on contractors’ quality and timeliness, which continues to hinder its ability to oversee contractor performance. We previously reported that VBA’s lack of complete and accurate information on the quality and timeliness of exams limited its oversight of contracted examiners and contributed to other challenges in managing the contracts. For example, VBA officials had told us that as of late June 2018, VBA was behind in completing quality reviews for contracted exams that were completed in 2017, in part, due to lack of staff to complete the quality reviews. Further, VBA officials had acknowledged that they did not have accurate information on whether contractors were completing veterans’ exams in a timely manner as outlined in the contracts. We reported in 2018 that VBA measured timeliness as the number of days between the date the contractor accepts an exam request and the date the contractor initially sends the completed exam report to VBA.
However, we previously found that the exam management system VBA used until spring 2018 did not retain the initial exam completion date when VBA sent an initial exam report back to a contractor for clarification or correction. In such cases, VBA’s system maintained only the most recent date an exam report was sent back to VBA. In such a situation, according to agency officials, VBA would not always be able to accurately assess a contractor’s timeliness as outlined in the contracts. Similar to our findings, the VA Office of Inspector General’s June 2019 report on VBA’s oversight of contracted exam cancellations also identified deficiencies due to staffing shortages and exam management system limitations, among other reasons. According to VBA officials in 2018, because VBA did not have complete and accurate information on contractor performance, it could not carry out key oversight activities. For example, VBA officials acknowledged that they were unable to track exams that needed corrections or clarifications, which we reported is needed to determine if VBA should reduce payment to a contractor. In 2018, we reported that the contracts required that contractors correct these exams within a certain number of days and bill VBA for these exams at half price. However, we found that VBA’s lack of complete and reliable information on insufficient exams hindered its ability to ensure such requirements were met. Further, in the absence of current and accurate quality and timeliness information, we reported in 2018 that VBA officials told us that they had not completed the quarterly reports that summarize how each contractor performed. VBA’s delay in completing these reports meant that it had not administered other provisions of the contracts. 
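The timeliness calculation and the system limitation described above can be sketched in a few lines. A minimal illustration follows; the dates and field names are hypothetical, used only to show how retaining just the most recent return date distorts the metric, and do not reflect VBA's actual data model:

```python
from datetime import date

# Contract timeliness metric described in the text: the number of days
# between the date the contractor accepts an exam request and the date
# it FIRST sends the completed exam report to VBA.
def timeliness_days(accepted: date, report_sent: date) -> int:
    return (report_sent - accepted).days

accepted = date(2017, 3, 1)
first_sent = date(2017, 3, 15)        # initial report sent after 14 days
resent_after_fix = date(2017, 4, 10)  # report returned for correction, resent later

# Correct metric uses the initial completion date.
print(timeliness_days(accepted, first_sent))        # 14

# The pre-2018 system kept only the most recent date, overstating the figure.
print(timeliness_days(accepted, resent_after_fix))  # 40
```

With only the later date retained, a contractor that met the contract timeline on the initial submission can appear out of compliance, which is why VBA could not always accurately assess timeliness.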
For example, we reported in 2018 that the contracts stated that VBA could use performance data to help determine how to allocate exams within specified areas in the United States that have two contractors; in particular, VBA could decide to allocate more exams to the contractor with higher performance results. However, VBA did not have performance data on which it could base its allocation of exams. Rather, the agency based allocation on contractor workload. Further, we reported that the contracts outlined how VBA could use performance data to administer financial incentives linked to performance targets. However, due to the lack of performance information, VA had not yet administered these incentives at the time of our review in October 2018. In our 2018 report, we recommended that VBA take steps to address the oversight issues we identified by developing and implementing a plan for using data from the new exam management system to accurately assess contractor timeliness, monitor time spent correcting exams, and verify proper exam invoicing. VBA has taken steps to address issues with both the incomplete quality information and inaccurate timeliness data. For example, to help resolve the delays in completing quality reviews, VBA officials said in November 2018 that the agency had hired additional staff to assess the quality of contract exam reports. As of September 2019, officials said they have 16 of 17 full-time positions filled in the quality review office because one employee left and that they are in the hiring phase for the final position. With the addition of quality review staff, officials stated that VBA is up to date on completing initial quality reviews. However, they said the agency has not yet finalized any quality scores, or completed the quarterly performance reports, under the new contracts. As such, according to VBA, it has not yet administered financial incentives linked to performance. 
To address the inaccurate timeliness data, VBA officials stated that the agency’s new exam management system, implemented in spring 2018, was designed to capture information that would allow VBA to accurately calculate contractor timeliness. Officials also said that VBA revised its performance measures to help it more fully assess contractors’ performance. In its agency comment response to our draft report in September 2018, VBA had a target completion date of December 2018 for implementing our recommendation. However, as of September 2019, VBA reported that it has not been able to fully implement its plan for using the new system to improve oversight of contractors and did not provide a target completion date for fully implementing our recommendation. In particular, VBA has not been able to implement an automated invoicing system that it plans to use to validate the accuracy of contractors’ invoices, nor can it reconcile historical data in the exam management system. As a result, according to VBA, it still cannot ensure that it is paying contractors the correct amounts based on the terms of the contracts. According to VBA, the delay in implementation is, in part, a result of having to fix technical issues with exam scheduling requests and an ongoing effort involving multiple VA offices to align VBA’s systems with those of multiple contractors. To address these issues, VBA stated that it has completed testing of its invoice system with all of the contractors, anticipates completing analysis of the results of those tests by October 2019, and will provide an updated target completion date at that time. We also recommended that VBA regularly monitor and assess aggregate performance data and trends over time to identify higher-level trends and program-wide challenges. 
Without plans to conduct comprehensive performance analyses, we stated that VBA is limited in its ability to determine if the contract exam program is achieving its quality and timeliness goals in a cost-effective manner. VBA stated that as it makes improvements to its exam management system data, it will be able to implement this recommendation, but did not provide a specific date. VBA also noted that information collected in the new exam management system has helped it identify potential issues with the metrics it uses to assess contractor performance and that the agency is in the process of identifying the best way to analyze the data to make improvements to the program. VBA Has Not Finalized System to Verify All Training Has Been Completed We previously reported that VBA relies on contractors to verify that their examiners complete required VA training and that VBA did not have information on whether the training effectively prepares examiners to conduct high-quality exams. Specifically, we noted that the contractors, rather than VBA, access the contractor training systems to verify that examiners have completed the required training before they are approved to conduct exams. Further, VBA did not review contractors’ self-reported training reports for accuracy, request supporting documentation such as training certificates, or solicit feedback from contracted examiners on the effectiveness of training or suggestions for improvement. Because VBA had no plans to verify completion of training, we noted that the agency risked using contracted examiners who are unaware of the agency’s process for conducting exams. This could lead to poor-quality exams that need to be redone and, thus, delays for veterans. Similarly, without information on the effectiveness of training, VBA may not know whether additional training courses are needed. 
To address these concerns, we recommended that VBA document and implement a plan and processes to verify that contracted examiners have completed required training and that it collect feedback on training for the purpose of assessing its effectiveness and making improvements as needed. As of July 2019, after VBA determined that none of its contractors were comprehensive in reporting all examiners’ training, VBA reported that the agency started conducting random audits of contractor training records. Additionally, VBA said that contractors can submit feedback following the completion of each VBA-developed training course and that it will use this information to make improvements. However, VBA is still in the process of developing a centralized training system to collect information on all training completed by contracted examiners and to obtain participant feedback on each course. VBA stated that it expects the system updates that would allow it to verify that all examiners have completed required training will be fully implemented by the end of fiscal year 2020 and that it will continue random audits until full implementation. In conclusion, as VBA increasingly relies on contractors to perform veterans’ disability compensation exams, it is important that the agency ensures proper oversight of these contractors. Specifically, VBA needs to ensure that (1) it has accurate and up-to-date information on individual contractor performance to ensure veterans receive quality and timely exams and that contractors are properly paid, as well as a mechanism to assess overall performance of the contracts; and (2) examiners are trained to conduct these exams in a manner that results in accurate exam reports that claims processors can use to make a disability rating decision. Without sustained oversight, VBA also runs the risk of causing undue harm to veterans through delayed or inadequate exams. 
Chair Luria, Ranking Member Bost, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you or other members of the subcommittee may have at this time. GAO Contact and Staff Acknowledgments For questions about this statement, please contact Elizabeth Curda, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact above, Nyree Ryder Tee (Assistant Director); Justin Gordinas (Analyst-in-Charge); Alex Galuten; and Jessica Orr made key contributions to this testimony. Other staff who made key contributions to the report cited in the testimony are identified in the source product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study VBA has increased the use of contractors in conducting veterans' disability medical exams. From fiscal year 2012 through mid-September fiscal year 2019, VBA reported that the number of exams completed by contractors rose from about 178,000 to nearly 958,000, which is more than half of all exams completed to date in fiscal year 2019. The remaining exams were completed by medical providers from the Veterans Health Administration. According to VBA, its contracts are worth up to $6.8 billion over 10 years. In light of issues GAO identified with VBA's oversight of contracted examiners in its October 2018 report (GAO-19-13), this testimony provides updates on VA's efforts to (1) improve its oversight of contracted examiners to ensure quality and timely exams and proper invoicing, and (2) ensure that examiners are properly trained. What GAO Found The Veterans Benefits Administration (VBA) has not fully resolved issues regarding how it oversees the quality and timeliness of and invoicing for disability compensation medical exams that are completed by contracted examiners. VBA uses medical exam reports from both VHA and contract examiners to help determine if a veteran should receive disability benefits. GAO reported in October 2018 that VBA was behind in completing quality reviews of contracted exams and did not have accurate information on contractor timeliness. VBA's lack of quality and timeliness data hindered its oversight of contractors' performance. In 2018, GAO made recommendations for VBA to address these issues. VBA has begun to implement GAO's recommendations, but continued action is needed to: Develop and implement a plan for using data from its new medical exam management system to (1) assess contractor timeliness, (2) monitor time spent correcting exams, and (3) verify proper exam invoicing. 
According to VBA, the agency has not fully implemented its plan for using this new system to resolve challenges with oversight of contractors' performance. For example, due to system issues, VBA has not been able to implement an automated invoicing system it planned to use to validate the accuracy of contractors' invoices. Further, VBA has not yet completed quarterly performance reviews of contracted exams under its new contracts, including any reports for fiscal year 2019. As a result, VBA still is unable to ensure that it is paying contractors the correct amounts based on its contract terms. Monitor and assess aggregate performance data and trends over time to identify higher-level trends and program-wide challenges. VBA officials stated that as the agency makes improvements to the exam management system data, it will be able to implement this recommendation, but officials could not provide a target completion date. VBA has taken steps to address issues GAO identified with its oversight of contracted examiner training requirements but has not yet fully addressed them. Having properly trained examiners who can provide high-quality exam reports is critical to ensuring that claims processors can make timely and accurate disability determinations for veterans. In 2018, GAO recommended that VBA improve its training oversight by: Implementing a plan to verify that all contracted examiners have completed required training. In response, VBA began conducting random audits of training completed by contracted examiners, but it is still in the process of developing a centralized training system that will collect this information. Such a system could help ensure that contracted examiners complete training and, ultimately, conduct high-quality exams. Collecting information from contractors or examiners on training and using this information to assess training and make improvements. 
VBA has since developed a feedback tool for examiners to complete following training and plans to use it to improve the training, where needed. What GAO Recommends GAO made four recommendations in 2018, including that VBA (1) develop a plan for using its new data system to monitor contractors' quality and timeliness performance, (2) analyze overall program performance, (3) verify that contracted examiners complete required training, and (4) collect information to assess the effectiveness of that training. VA agreed with and initiated actions on all of these recommendations but has not yet fully implemented them.
Background Older Adult Population Growth The U.S. older adult population is growing and is projected to steadily increase in the coming decades. By 2060, the U.S. Census Bureau projects that adults 65 or older will make up nearly one-quarter of the total U.S. population. In addition to the overall growth in this population, the number of adults 85 or older is expected to nearly triple, from 6.4 million in 2016 to 19 million in 2060 (see fig. 1). Federal Nutrition Assistance Programs Serving Older Adults Several federal nutrition assistance programs serve older adults; these programs are overseen by HHS’s Administration for Community Living (ACL) and USDA’s Food and Nutrition Service (FNS). The characteristics of older adults served by these programs vary, as do the types of assistance provided, the numbers of participants, and the amounts of federal expenditures (see table 1). Program Administration The nutrition assistance programs serving older adults are overseen by ACL’s and FNS’s national and regional offices and are generally administered by state and local entities. The ACL and FNS national offices allocate funding and develop program regulations and guidance, and their respective regional offices provide support, such as technical assistance and training, to state agencies. State agencies implement the programs directly or through local entities. In the four programs that provide meals and monthly food packages to participants, state agencies work with regional and local agencies, such as government entities or private nonprofit organizations, to provide nutrition assistance to participants (see fig. 2). Specifically, in FNS’s two programs, state agencies work directly with local providers, while in ACL’s two programs, states work with regional-level area agencies on aging, which generally contract with local providers. 
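The Census Bureau projection for adults 85 or older can be sanity-checked with a quick back-of-the-envelope calculation (the figures come from the text above; the growth-factor arithmetic is ours):

```python
# Census projection cited above: adults 85 or older grow from
# 6.4 million in 2016 to 19 million in 2060.
adults_85_plus_2016 = 6.4   # millions
adults_85_plus_2060 = 19.0  # millions

growth = adults_85_plus_2060 / adults_85_plus_2016
print(round(growth, 2))  # 2.97 — consistent with "nearly triple"
```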
Area agencies on aging are public or private nonprofit entities that are responsible for planning and delivering services to older adults within their geographic service area. Federally Supported Nutrition Guidelines The Dietary Guidelines for Americans and the Dietary Reference Intakes (DRIs) are the two federally supported scientific bodies of work that provide broad information and guidance on the nutritional needs of healthy populations to help individuals maintain health and prevent nutrition-related chronic diseases. The dietary guidelines are developed by HHS and USDA and summarized in a federal policy document that focuses on providing practical nutritional and dietary information and guidance for Americans ages 2 and older. Overall, the 2015-2020 Dietary Guidelines recommend the consumption of a variety of vegetables, fruits, grains (at least half of which are whole grains), and protein, as well as fat-free or low-fat dairy and oils—sources of essential fatty acids and vitamin E. They also recommend limiting saturated and trans fats, as well as added sugars and sodium. Developed by the National Academies of Sciences, Engineering, and Medicine, the DRIs are a set of values used to plan and assess diets and nutrient intakes in both the United States and Canada; the DRIs also provide scientific support for the development of the dietary guidelines. Specifically, the DRIs provide nutrient intake recommendations at levels considered safe for consumption of a wide range of nutrients, including vitamins, such as vitamins A and C; minerals, such as sodium and iron; and macronutrients, such as fiber and fat. Evidence Shows Nutrition Is Associated with Older Adults’ Health Outcomes, but Federal Nutrition Guidelines Do Not Address Their Varying Needs The Majority of Older Adults Have Chronic Conditions and Evidence Shows Older Adults’ Nutrition Is Associated with Their Health Outcomes The majority of older adults in the U.S. 
have chronic conditions, and evidence shows that nutrition is associated with the development of such conditions. Older adults are the fastest growing segment of the population, and they also have the greatest prevalence of chronic conditions. For example, according to the most recent data available from the Centers for Disease Control and Prevention (CDC), 62 percent of older adults 65 and older had more than one chronic condition in 2016, such as diabetes or heart disease, compared to 18 percent of adults ages 18 to 64. Although the risk of developing chronic conditions increases with age, research has shown that poor nutrition is a contributor to negative health outcomes, including many chronic conditions. For example, research shows that over- and under-consumption of certain nutrients, in addition to physical inactivity, is associated with the development of chronic conditions, including certain cancers, obesity, heart disease, and diabetes. The CDC reported that, in 2016, nutrition- related chronic conditions, including heart disease and stroke, were among the leading causes of death for older adults 65 and older in the United States, with heart disease accounting for 25 percent of deaths among this population. At the same time, research shows that nutrients and diet can prevent, delay, or assist in managing many chronic conditions, and individuals with certain chronic conditions may have different nutritional needs compared to healthy individuals. 
For example, according to research reviewed during development of the dietary guidelines and DRIs: diets low in sodium that also replace some carbohydrates with protein or unsaturated fats lower blood pressure and cholesterol levels, both reducing the risk of developing heart disease and helping to manage it; consumption of certain types of dietary fats, such as omega-3 fatty acids found in fish and flaxseed, for example, may help prevent or manage heart disease; increased consumption of fiber reduces total blood cholesterol, and high cholesterol is both a chronic condition as well as an increased risk for developing other chronic conditions, such as heart disease and stroke; and decreased consumption of foods high in added sugars, saturated fats, and sodium helps reduce the risk of diabetes, stroke, or heart attack. Barriers to Older Adults’ Meeting Nutritional Needs May Negatively Affect Their Health Outcomes Research has shown that certain age-related changes may impair older adults’ ability to meet their nutritional needs, potentially resulting in negative health outcomes. According to a study conducted by the Academy of Nutrition and Dietetics, physiological changes that occur with age, such as decreased metabolism and reductions in muscle mass and nutrient absorption, may make it difficult for older adults to meet their nutritional needs. Research reviewed to develop the dietary guidelines also indicates that older adults experience a decline in calorie or energy needs as they age, due in part to decreased physical activity. As a result of reduced energy needs, older adults exhibit less hunger and also experience changes in taste sensation and sense of smell, all of which may lead to decreased food consumption, according to the Academy of Nutrition and Dietetics study. Inadequate consumption of certain nutrients, such as potassium, may lead to increased risk of negative health outcomes, including the development of chronic conditions, as noted earlier. 
Age-related physical or mental impairments also may impact older adults’ ability to meet their nutritional needs, potentially resulting in negative health outcomes. The Older Americans Act defines disability to include a physical or mental impairment, or combination of the two, that results in substantial functional limitations to certain major life activities, including self-care and mobility, among other things. An HHS official we spoke with noted that some older adults’ inability to perform daily activities— which can include eating, walking, or leaving the home to obtain groceries or meals, because of a physical or mental impairment—can contribute to inadequate nutrition. According to the CDC, age-related declines in cognitive functioning, such as the ability to reason and remember, may affect some older adults’ ability to leave their homes and shop for food, hindering their ability to meet their nutritional needs. Further, HHS reported that older adults with age-related physical impairments, such as impaired mobility and vision, may have difficulty opening, reading, and using food packaging, limiting their ability to prepare food. According to an Academy of Nutrition and Dietetics study, older adults with a physical impairment, such as an inability to chew or swallow food, may have reduced ability to consume nutrients, which, as previously noted, may increase their risk of negative health outcomes. Older adults may also require the use of medication, which may impact their ability to absorb or consume nutrients and meet their nutritional needs. For example, according to the National Institute on Aging, common side effects of certain medications can include reduced appetite and dry mouth, which may make it difficult to chew and swallow. 
In addition, some medications require older adults to limit their consumption of certain foods, such as citrus fruit, as consumption of these foods may change the effectiveness of the medications or cause other negative health outcomes. However, such restrictions may impact older adults’ ability to obtain the nutrients commonly found in those foods. Further, some older adults experience food insecurity, and therefore have limited access to adequate food and nutrients, which research has shown may lead to negative health outcomes. According to research reviewed to develop the dietary guidelines, food insecurity is a leading nutrition-related public health issue that compromises nutrient intake, potentially resulting in an increased risk of developing a chronic condition, as well as difficulty managing chronic conditions. USDA reported that 8 percent of U.S. households with an older adult and 9 percent of U.S. households in which an older adult lived alone experienced food insecurity in 2017—the most recent year for which data are available. According to HHS, food-insecure older adults are more likely to experience negative health outcomes than their food-secure counterparts. For example, research has shown that older adults who are food insecure consume lower amounts of essential nutrients and are more likely to experience negative health outcomes, like diabetes or physical or mental impairments. Federal Nutrition Guidelines Do Not Address the Varying Nutritional Needs of Older Adults The federal nutrition guidelines—the dietary guidelines and Dietary Reference Intakes (DRIs)—provide broad nutrition guidance for healthy populations. However, the guidelines do not address the nutritional needs of older adults, including the majority of older adults in the United States who have multiple chronic conditions. 
Specifically, the guidelines focus on the foods and nutrients healthy individuals need to maintain health and prevent nutrition-related chronic conditions, which limits their applicability to older adults who already have chronic conditions. According to the scientific report for the 2015-2020 Dietary Guidelines, the guidelines are expected to evolve to address public health concerns and the nutritional needs of specific populations. Further, a report from a DRI working group indicates that the growth of the older adult population and the prevalence of chronic conditions in this group highlight the importance of understanding how nutrition can help to address chronic conditions. Although DRI researchers recently took steps to examine research on the relationship between nutrition and chronic conditions, they noted in a March 2019 report that current research on this issue is somewhat limited. At the same time, the federal nutrition guidelines do not address the varying nutritional needs of older adults of different ages and instead focus on guidelines for broad age groups. Specifically, the dietary guidelines provide information by gender on the nutrient needs of all adults 51 or older, and the DRIs provide this information by gender for older adults 51 through 70 and 71 or older. However, research has shown that these broad age categories do not account for how needs change with age among older adults, particularly for those 71 or older. For instance, according to the Academy of Nutrition and Dietetics study, the nutrient needs of older adults can be wide-ranging given the various changes that may occur with aging, such as those associated with reduced energy needs. Further, according to a summary report on the DRIs, physiological functioning, such as nutrient absorption, varies greatly after age 70. 
HHS officials similarly noted that nutritional needs change with each stage in life, and the needs of older adults who are in their 60s and those who are in their 90s or older may be substantially different. Additionally, researchers note that information on the varying nutritional needs of the different age groups of older adults is limited. For instance, the advisory committee that developed the 2015-2020 Dietary Guidelines noted that more data are needed on older adults’ diets, particularly for those 71 or older, and the degree to which age-related changes affect older adults’ ability to establish and maintain proper nutrition. Similarly, researchers at the Jean Mayer USDA Human Nutrition Research Center on Aging—one of the largest research centers studying nutrition and aging in the United States—told us that research on different age groups has been hindered in part by limitations in national nutrition and health data on older adults, and adults 85 or older, in particular, despite the projected growth of this age group. HHS officials said they intend to include a focus on nutritional guidance for older adults in the 2025-2030 Dietary Guidelines update, but they have not yet documented their plans to do so. Broadly, HHS and USDA officials told us they intend to address the nutritional needs of individuals across the entire lifespan in future updates to the dietary guidelines. USDA is leading the 2020-2025 Dietary Guidelines update, which will include guidance for those individuals in the earliest stages of life. HHS officials said that when they lead the 2025-2030 Dietary Guidelines update, they intend to include a focus on nutritional guidance for older adults. However, HHS has not yet documented this intention, such as through a formal plan. 
As noted, older adults’ nutritional needs can vary with age and many face certain challenges that additional nutrition guidance could help address, such as the management of chronic conditions or age-related changes, yet guidance currently falls short in part because of limited research evaluating older adults’ nutritional needs. In its Strategic Plan for fiscal years 2018-2022, HHS notes that one of the department’s objectives is to prevent, treat, and control communicable diseases and chronic conditions. As previously noted, the dietary guidelines are also expected to evolve to address public health concerns and the nutritional needs of specific populations. A plan for incorporating a focus on older adults in a future dietary guidelines update, such as one that addresses their various needs based on available research on this population and identifies existing information gaps, could help ensure federal nutrition guidelines better address the nutritional needs of this population. Several Nutrition Assistance Programs Serving Older Adults Include Nutrition- Related Requirements, and Federal Oversight of Requirements in Some Programs Is Limited Four of the Six Federal Nutrition Assistance Programs Serving Older Adults Include Nutrition Requirements The four federal nutrition assistance programs that we reviewed and that provide meals and food directly to older adults have federal nutrition requirements, while two other programs we reviewed that provide older adults with benefits to purchase food do not. Specifically, HHS’s congregate and home-delivered meal programs and USDA’s Child and Adult Care Food Program (CACFP) have nutrition requirements for older adults’ meals, and the Commodity Supplemental Food Program (CSFP) has nutrition requirements for the monthly food package provided to older adults. 
Two other federal programs—USDA’s Supplemental Nutrition Assistance Program (SNAP) and Senior Farmers’ Market Nutrition Program—provide older adults with benefits to purchase food, and neither program has specific nutritional requirements that must be met when purchasing food. The four programs with nutrition requirements used the federal nutrition guidelines—the Dietary Guidelines for Americans—as the basis for their nutrition requirements. These guidelines are also the basis for nutrition requirements in other federal nutrition assistance programs, such as those that serve children. As discussed earlier, the current guidelines provide broad guidance on nutrition for healthy populations and therefore serve a role in health promotion for all individuals. Several Programs Also Require the Provision of Services to Help Older Adults Meet Nutritional Needs Nutrition Education Several of the nutrition assistance programs that have nutrition requirements for meals or food served to older adults also require other services to help ensure older adults’ nutritional needs are met. These services include nutrition education, screenings and assessments, and the use of nutrition professionals. Three of the four selected nutrition assistance programs serving older adults that have nutrition requirements also require nutrition education to support efforts to meet older adults’ nutritional needs. These programs are HHS’s congregate and home-delivered meal programs and USDA’s CSFP, which provides monthly food packages. See figure 5 for examples of nutrition education materials from selected states. To help promote health and delay adverse health conditions among older adults, area agencies on aging, either directly or through their local providers, are required to provide nutrition education to congregate and home-delivered meal participants. 
According to HHS regional officials we spoke with, there are no requirements for the frequency or type of nutrition education that must be provided, though as officials in one region noted, programs are encouraged to provide education that is science-based. According to the nationwide evaluation of the congregate and home-delivered meal programs, almost half of state agencies surveyed in 2014 required area agencies on aging, either directly or through their local providers, to provide nutrition education at least quarterly, and about one-quarter of state agencies required it to be provided semi-annually or annually. Officials from two of the four state agencies told us local providers educate participants in a variety of ways, including by directly sharing nutrition-related information about specific menu items or meals offered to participants or by partnering with other entities, such as universities, to help educate older adults on nutritional well-being. State agencies overseeing CSFP food packages must also establish a nutrition education plan and ensure that local providers provide nutrition education to program participants. For example, providers must include information about the nutritional value and use of the foods provided in the food package and should account for specific ethnic and cultural characteristics of program participants. USDA regional officials and state agency officials overseeing CSFP in three of the four states told us that providers generally use USDA’s household foods fact sheets—which include food product descriptions, general food storage information, recipes, and nutritional information—to provide nutrition education to CSFP participants. State officials in our selected states also noted other methods CSFP providers used to support nutrition education.
For example, officials in one of the states told us one of their distribution sites provides nutrition education materials in 17 languages to accommodate the different cultural backgrounds of the population it serves. Officials in another state we visited told us some of their provider sites partner with universities, inviting staff from the university’s nutrition program to the provider site to share and discuss nutrition information with participants.

Screening and Assessments

Both of HHS’s congregate and home-delivered meal programs require states to ensure area agencies on aging or local providers conduct nutrition screenings and assessments of participants to help identify health risks. According to HHS data for fiscal year 2016, the most recent year for which data are available, just over one-fifth (347,002) of the 1.6 million congregate meal participants served and more than one-half (496,729) of the 868,382 home-delivered meal participants served were deemed at high nutrition risk. HHS officials stated that there is no federal policy or requirement on how assessments are conducted or their frequency, and states have the flexibility to determine their own process for assessing the nutritional needs of participants. However, HHS provides a tool that states may use for these assessments. See sidebar for the Federal Nutrition Screening tool used to determine a person’s nutrition risk. According to the nationwide evaluation of the congregate and home-delivered meal programs, over half of area agencies on aging and local providers of congregate and home-delivered meal programs had a formal process for assessing nutritional needs. Further, HHS regional officials we spoke with suggested that these assessments generally occur annually. Across the four selected states we visited, the majority of area agencies on aging conducted nutrition screenings and assessments, with the frequency varying from every 6 months to every few years.
The Older Americans Act requires states to prioritize certain groups with high social and economic needs, such as those who are low-income, minorities, or isolated, and two area agencies on aging told us they use nutrition risk screenings and assessments to address malnutrition and identify those individuals who fall in these categories.

Nutrition Professionals

HHS’s congregate and home-delivered meal programs require the use of nutrition professionals, such as registered dieticians, to help local providers meet the nutritional needs of older adults—primarily through menu reviews to verify that each menu is following federal nutrition requirements, according to HHS officials. According to the nationwide evaluation of the congregate and home-delivered meal programs, at least one-half of the state agencies, area agencies on aging, and local providers used the services of a nutrition professional to help meet the nutritional needs of older adults. In the four selected states, three state agencies had a nutrition professional on staff or contracted with a nutrition professional who worked with area agencies on aging to review menus, and in the other state, a nutrition professional was on staff or contracted for by area agencies on aging or local provider sites. In addition to menu reviews, nutrition professionals in the four selected states were also involved in activities such as training meal providers or providing nutrition education and counseling to participants.

Federal Oversight of Meal Programs Provides Limited Information on the Extent to Which Programs Are Adhering to Nutritional Requirements and Addressing Challenges

As part of HHS’s oversight of the congregate and home-delivered meal programs, regional officials meet with state staff and review state plans and other program information, but these efforts do not require states to provide documentation that meals served to participants comply with the programs’ nutrition requirements.
State agencies are responsible for monitoring area agencies on aging’s implementation of these programs and ensuring that meals are consistent with the programs’ nutritional requirements. HHS regional offices, in turn, conduct oversight of the nutrition programs through their reviews of states. HHS’s guidance directs regional staff to collect information from states on the use of nutrition professionals in these programs. However, HHS’s guidance does not direct regional staff to systematically review or collect any other information from states, such as approved menus, to confirm that meals served to participants are consistent with the programs’ nutrition requirements. A recent national evaluation of meals provided through the congregate and home-delivered meal programs, however, indicates that state oversight of meals’ consistency with program nutrition requirements may have limitations. According to the 2017 evaluation, while program meals generally contributed positively to participants’ diets, the meals were higher in sodium and saturated fat than the recommended limits. For example, the diets of the majority of congregate and home-delivered meal participants included adequate amounts of a range of vitamins and minerals, with the exception of magnesium and calcium. However, a majority of participants had intakes of sodium and saturated fat from these meals that exceeded the dietary guidelines’ recommended limits. Specifically, 94 percent of congregate meal participants and 69 percent of home-delivered meal participants had sodium intakes from program meals that exceeded the dietary guidelines’ recommended limit. Likewise, 89 percent of congregate meal participants and 72 percent of home-delivered meal participants had saturated fat intakes from program meals that exceeded the recommended limit, despite the role state agencies play in monitoring programs to ensure meals meet federal nutrition requirements.
According to the evaluation, overconsumption of sodium and saturated fat may pose a public health concern. Information obtained from the selected states we visited also suggests that state oversight of congregate and home-delivered meals’ consistency with program nutritional requirements may have limitations. Specifically, some selected states did not utilize a nutrition professional at the state level to help ensure meals served through the programs met federal nutrition requirements. For example, in one state, the state-level nutrition professional position was vacant, and officials from an area agency on aging we spoke with confirmed that state-level monitoring of menus for compliance with nutrition requirements had not occurred due to the vacancy. Area agency on aging officials added that the vacancy has also meant that state staff are not available to train or provide guidance to area agencies on the programs’ nutrition requirements. In the other state, officials from an area agency on aging told us the state agency has not focused on oversight of providers’ menus. HHS is responsible for overseeing its federal nutrition assistance programs to ensure compliance with the programs’ nutrition requirements. More complete information on state efforts to assess meal consistency with federal nutrition requirements could help HHS assure that meals served to program participants are meeting those requirements. In USDA’s CACFP, which provides meals to older adults at adult day care centers, USDA regional offices review states’ monitoring of local providers for consistency with federal meal pattern requirements. States are required to review each entity involved in the CACFP at least once every 3 years. During these reviews, state staff must assess provider compliance with federal requirements, which includes a review of a sample of the provider’s menus to ensure they comply with federal meal pattern requirements.
Through federal management evaluations, USDA regional staff review states’ monitoring of the program, including their reviews of menus to ensure compliance with meal pattern requirements, and conduct onsite reviews at both the state agency and local provider level. Regional staff told us they review all states at least once every 3 years. However, USDA regional officials told us they lack information on how the program is working at adult day care centers, in part because their onsite reviews of adult day care providers are generally limited, unlike on the child care side of the program. According to USDA officials, the majority of state agencies oversee both child care and adult day care CACFP providers, and USDA’s criteria for selecting providers for onsite reviews focus on those providers receiving the highest reimbursement amounts. According to regional officials, because CACFP serves a significantly greater number of meals to children than to adults, providers receiving the highest reimbursement amounts are those serving meals in child care sites in the majority of states. Thus, federal onsite reviews of providers serving meals to older adults in adult day care centers generally have been limited. USDA’s regional officials told us that because they have not done onsite reviews at most adult day care centers recently, they lack information on how the program is working in those centers. USDA officials in four of the seven regional offices told us they receive few questions or requests for technical assistance from state agencies or providers operating the program in adult day care centers. However, our discussions with providers in the four selected states suggest that they face challenges operating the program in these centers and addressing the varying needs of participants they serve, such as those with physical and mental impairments, and may benefit from additional information or assistance.
USDA is statutorily required to review state agency and provider compliance with regulations governing program administration and operation of certain nutrition assistance programs, including CACFP. Further, USDA guidance notes that its management evaluations are critical for monitoring state agency program compliance and improving program operations by providing a basis for assessing the administration of the CACFP and developing solutions to challenges in program operations. Without taking action to ensure onsite reviews of adult day care centers participating in CACFP are conducted more consistently, USDA may be missing an opportunity to identify and help address challenges adult day care centers face in operating the program, such as challenges meeting varied needs of participants. Such efforts could help USDA better assess the extent to which centers are meeting the nutritional needs of the older adults they serve and better target technical assistance. For USDA’s CSFP, which provides monthly food packages to older adults, USDA regional office oversight includes reviews of state agencies’ monitoring of local providers and visits to local providers, covering all states at least once every 3-5 years. Regional staff indicated that they review monthly participation data, food inventory reports, and state plans as part of their oversight of the program. As part of their visits with local providers, regional officials told us they open and review food packages at local sites to ensure packages include the required food components and assess the types of nutrition education provided to participants, such as recipes or cooking classes.
Providers Face Challenges, Such as Increased Demand for Nutrition Programs and Meal Accommodations, and Some Lack Information to Address Them

Providers Reported Challenges Meeting Increased Demand for Nutrition Programs, with Some Leveraging Additional Resources to Meet Needs

The growth in the older adult population has led to an increased demand for nutrition programs to serve them, and some providers told us they faced challenges meeting the nutritional needs of this population. From 2009 through 2018, the population of adults 60 or older grew by 31 percent. Federal funding for certain nutrition assistance programs serving older adults has not increased at the same rate as the population. Specifically, during that same time period, federal funding for HHS’s congregate and home-delivered meal programs grew by 13 percent. HHS officials told us that with the increased demand for these programs and relatively flat federal funding, some providers have been unable to maintain the same level and quality of service that they have historically provided. According to state officials and providers in three of the four selected states we visited, the increased demand for older adult nutrition programs has resulted in waiting lists, in particular for the home-delivered meal program. For example, state officials in one selected state we visited told us they have large waiting lists in their state for the home-delivered meal programs due to a higher demand for services. They indicated that, in the absence of other changes, they will only be able to serve new people through attrition of current program participants. One provider in the same state said they have a waiting list of more than 12,000 older adults for their home-delivered meal program.
Another provider told us they are currently serving about 10 percent of the older adult population in their area, although the need for these services is greater, and they have continually had a waiting list for their home-delivered meal program. Some providers have leveraged additional funding sources to decrease waiting lists and expand the reach of their congregate and home-delivered meal programs. Specifically, in two of the four states we visited, some providers said they have received additional funding to support nutrition and other services for older adults through a local property tax—called a millage tax. In one of these states, a local provider told us that the local millage tax provided $9.8 million for older adult services in 2018. Officials noted that these funds allowed providers to add new meal routes and decrease waiting lists for home-delivered meals, as well as expand the capacity of senior centers to serve more older adults through nutrition and other programs. In three of the four selected states, some providers reported partnering with various entities, including grocery stores, local farmers, and others to obtain food at low or no cost or serve more older adults, which helped them to meet the increased demand for the congregate and home-delivered meal programs. For example, in one state, the area agency on aging that directly provides meals joined a larger consortium of organizations to purchase food at a lower cost from a food vendor. In another state we visited, a provider we spoke with reported that the majority of its food for older adults’ meals came from food donations provided by local grocery stores and food banks and through a program in which local farmers dedicate some of their produce for donation. This provider indicated that food donations saved them $140,000 in food costs in 2018 (see fig. 6).

Providers Face Challenges Meeting Needs for Certain Meal Accommodations and Some Lack Information to Help Address These Needs
Providers we spoke with in the four selected states reported challenges meeting older adults’ needs for certain meal accommodations, and both providers and state officials that administer the congregate and home-delivered meal programs as well as the CACFP meal program across the four states reported a need for additional information from the federal agencies overseeing these programs. As previously noted, the majority of older adults in the United States now have more than one chronic condition and older adults may have physical or mental impairments—all factors that may necessitate certain accommodations to ensure meals meet their nutritional needs. Although some providers we spoke with have taken steps to mitigate challenges meeting these needs, some reported that they continue to face challenges, such as the lack of skilled chefs and other resources, to make such accommodations.

Congregate and Home-Delivered Meal Programs

Providers of HHS’s congregate and home-delivered meal programs in three of the four states said they faced challenges making meal accommodations to meet the dietary needs of older adult participants with chronic health conditions. As previously noted, 62 percent of older adults 65 and older had more than one chronic health condition in 2016—the most recent year for which data are available. Eight of the 14 congregate and home-delivered meal providers across the selected states we visited said they do not tailor meals to meet participants’ special dietary needs, due in part to limited resources and capacity. For example, four providers told us it is cost prohibitive to tailor meals. At one site we visited that does tailor meals, local officials told us that their vendor charges more for tailored meals because of the additional work involved to customize meals to meet the needs of participants with specific health conditions. Another provider said that some chefs lack the skills needed to prepare such meals.
For example, the provider said that although some older adults need mechanically soft or pureed meals because of oral health issues, staff may lack the skills to produce those meals. Federal restrictions on reimbursing liquid meals may make providing such meals cost-prohibitive, according to officials in selected states. For example, state and local officials and a provider in two selected states said that program participants who are unable to chew, swallow, or digest solid foods due to various health conditions, may need such meals, yet these meals do not qualify for federal meal reimbursement. According to HHS officials, while a liquid meal does not qualify for meal replacement, states may use federal funds dedicated to providing nutrition education, counseling, and other aging services to purchase these meals. Some of these program providers in the selected states used additional funding sources to help them make meal accommodations for program participants with special dietary needs, and HHS also funds awards that can be used for this purpose. For example, an area agency in one selected state we visited received a grant from a local foundation to provide some of their home-delivered meal participants with special dietary meals, including meals for those with renal conditions and diabetes, for up to 3 months. Similarly, another provider used a grant to provide liquid meals to home-delivered meal participants who needed them. Since 2017, HHS has also awarded grants to support innovative projects that enhance the quality, effectiveness, and outcomes of the congregate and home-delivered meal programs, and some of the projects have focused on providing meal accommodations for certain program participants. For example, a grantee in one state used these grant funds to develop and deliver modified meals appropriate for home-delivered meal participants with reduced dental function.
Another state grantee created new medically-tailored meals for program participants transitioning from hospital to home. According to HHS officials, the department has seen positive preliminary results from the innovation grants, but does not currently have a centralized location that compiles information for congregate and home-delivered meal providers on promising approaches for making meal accommodations for participants with special dietary needs. HHS officials said they have shared some information on the projects through webinars and conferences and provided links to webinar materials on the National Resource Center on Nutrition and Aging website—funded by HHS. Further, HHS officials noted that they posted additional relevant materials, such as a toolkit focused on lowering sodium in meals, on the Center’s website. However, these materials are not compiled in one location on the Center’s website, which may hinder meal providers’ ability to locate all of the relevant information HHS has compiled. State officials and providers across the four selected states said that federal guidance on accommodating the special dietary needs of older adult program participants is limited and additional support would be helpful. HHS is responsible for collecting and disseminating information on older adults. Providing information on promising practices and available opportunities may help support providers’ efforts to accommodate the special dietary needs of some older adults participating in these programs.

Child and Adult Care Food Program (CACFP)

State and local entities administering USDA’s CACFP in adult day care centers in the four selected states reported that they face challenges providing meal accommodations to meet the nutritional needs of program participants. Officials in three selected states said they believe the federally-required meal patterns do not fully address older adults’ nutritional needs, including those with special dietary needs.
For example, milk is a federally-required component of breakfasts and lunches served through the program, though officials from three selected states said that milk can be problematic for older adults because many are lactose-intolerant or do not like drinking milk. Further, officials in one state said that the meal pattern includes a significant amount of carbohydrates, which is inconsistent with the needs of older adults who have diabetes. Although CACFP requires adult day care centers to serve meals consistent with federal meal pattern requirements or a participant’s plan of care, which may include medically-prescribed meal accommodations, state officials reported some older adults face barriers to obtaining medical documentation of meal accommodation needs. Specifically, officials from two selected states said that some participants may not have access to medical providers, and officials from one of those states explained that a visit to a medical provider is sometimes cost-prohibitive for those with limited incomes. Officials in two of the four selected states said adult day care meal providers have used available federal options that allow older adults to tailor their own meals to meet their nutritional needs, though officials also noted that these options have limitations. For example: State officials in one selected state said they encourage adult day care centers to implement the federal “offer versus serve” option. This option allows adult participants, including older adults, to decline, for example, up to two of the five meal components required with a lunch—milk, fruits or vegetables, grains, and meat or meat alternate. According to USDA guidance, this option may reduce waste and give adults more choices. However, officials in this state noted that making choices is sometimes difficult and time-consuming for program participants with cognitive impairments, such as Alzheimer’s disease or dementia.
State officials in another state said that the federal family-style meal service option, which allows older adults to serve themselves from communal platters of food with assistance from supervising adults, if needed, also provides older adults with the ability to tailor meals to meet their needs. However, state officials in this state noted this meal service approach also creates challenges with feeding certain older adults appropriately. For example, this approach makes it harder to meet the needs of those with particular dietary or functional requirements, such as those who have specific nutritional needs due to chronic conditions or those with swallowing or chewing issues. State officials and adult day care providers across all four selected states said that federal guidance for providing meals to older adults in adult day care centers is limited, and providers in two of the states said they lack information on ways to address some of the challenges associated with providing meals that meet the nutritional needs of older adults in these centers. For example, providers noted that information on promising practices for serving the differing needs of older adults in these centers, including those with special dietary needs and those with functional limitations, would assist their efforts to meet participants’ nutritional needs. State officials or providers in all four selected states said that FNS’s efforts to provide guidance and trainings are more focused on the child care component of the CACFP than the adult day care component. USDA officials confirmed their efforts to provide guidance to meal providers have been primarily focused on the child care side of the program in light of the larger number of participants served. 
Although USDA provides some guidance and information to address the adult component of the CACFP, some CACFP entities serving older adults may not be aware of these resources, and information on promising practices or other resources to help providers meet the varying needs of older adults is more limited. USDA officials said CACFP guidance and trainings address the implementation of adult meal pattern requirements and existing flexibilities with these requirements, such as allowable substitutions for milk. USDA also produced a handbook specifically for adult day care centers in 2014 to help assist providers in these centers. However, USDA officials said that awareness of existing guidance and trainings available may be lacking, in part, because turnover for CACFP providers is high and new providers may not be aware of existing resources. Some providers also said that more information on how to address the special dietary needs and functional limitations of some participants would be helpful, as USDA’s existing guidance and trainings focus on standard adult meal pattern requirements. For example, while the 2014 handbook includes information on meal patterns and different serving methods to provide meals, it does not include information specific to meeting the differing needs of older adults in these centers. In October 2019, USDA officials told us that they are in the process of updating this handbook to reflect new policies, guidance, and promising practices for addressing the needs of older adults. USDA officials also stated that they are in the process of reviewing a promising practice to address meal accommodations for older adults with varying needs. USDA is responsible for providing training and technical assistance to states in order to assist state agencies with program management and facilitate effective operation of the program. 
Without awareness of existing resources and additional guidance and information to help adult day care providers address the challenges they face meeting the nutritional needs of the older adults they serve, providers may continue to be limited in their ability to do so.

Commodity Supplemental Food Program (CSFP)

USDA, state, and local officials administering the CSFP said that the federal requirements for foods provided in each monthly food package limit the extent to which providers can tailor or alter the foods provided to accommodate individual participants’ nutritional needs, though some approaches and recent changes help address this challenge. For example, two food package providers we spoke with said they use other methods of food delivery along with the food package such as a pantry or grocery store-style model, which allows participants to come to a site and choose from a variety of foods that meet the requirements (see fig. 7). USDA also recently issued updated federal requirements for the type and quantity of foods provided in the food package, which department officials said provide more variety to be more useful to older adults. As previously noted, some regional USDA officials told us that early feedback from states on the changes has been positive, though states have until November 2019 to implement the new requirements. For example, USDA officials in one regional office said states provided positive feedback on the introduction of new food items, such as lentils.

Providers Also Reported Other Challenges That Hinder Efforts to Meet Older Adults’ Nutritional Needs, Though Some Have Taken Actions to Help Address Them

Providers reported ongoing program administration challenges, such as staffing constraints, which to some extent hinder their efforts to meet the nutritional needs of older adults.
For example, state and local officials and providers of the congregate and home-delivered meal programs across three of the four selected states said they face challenges finding and retaining a sufficient number of staff for program operations, which could include preparing, serving, and delivering meals. Four of the 14 providers of these programs reported that they struggle to offer competitive wages and benefits, which hinders their ability to hire and retain staff. To help overcome staffing constraints, some providers partnered with various entities. For example, in all four selected states, providers of the congregate and home-delivered meal programs established partnerships with entities such as colleges and local businesses to solicit volunteers to help with program operations. In one state, a provider partnered with a local college’s nursing program and students volunteered to assist with assessments for home-delivered meal participants. In another state, staff from a local police department volunteer and deliver meals to home-delivered meal participants in one area. One meal provider said that the efforts of volunteers, who donate their time and cover expenses for gas and vehicle insurance to help provide home-delivered meals to participants, are worth $100,000 in annual support to their program. This provider noted that they would be unable to operate the program without volunteers. See figure 8 for pictures of volunteers helping to prepare food in selected states. Providers of the CSFP food packages and congregate and home-delivered meal programs in three selected states we visited also reported challenges obtaining transportation to bring older adults to meal and food distribution sites and deliver meals and food packages to older adults, though some have found ways to mitigate these challenges.
For example, providers in three selected states said a lack of transportation options prevents some older adults from visiting congregate meal sites as well as food package distribution sites, as public transportation is not always available and many older adult participants do not drive. According to local officials in one state, transportation is also a challenge for the home-delivered meal program, particularly in rural areas, because the distance between participants' homes affects the cost of delivering meals. Similarly, officials at one local agency on aging said providers in its area would like to serve more people, but are unable to add additional routes because of transportation costs. To help mitigate transportation challenges and manage associated costs, some providers in the selected states have adjusted meal services and found alternative ways to transport clients to meal service sites. For example, to help control transportation costs, three providers in two selected states changed from delivering one hot meal daily to delivering multiple frozen meals once a week to home-delivered meal participants. In addition, one provider partnered with a local meal delivery service that used FedEx to deliver 10 home-delivered meals every 2 weeks to program participants. To help alleviate transportation challenges that older adults face getting to meal sites, three providers in two states partnered with private companies to provide participants with rides to and from meal sites for a minimal fee. Another provider used grant funds received from its state to purchase vans, which it then used to provide older adults with transportation to and from the meal sites.
Some providers also reported challenges accommodating the varied dietary preferences of different groups of older adults, as preferences sometimes vary by age and cultural or ethnic background, and being responsive to these preferences can increase the likelihood that meals will help older adults meet their nutritional needs. For example, HHS officials, as well as local providers in three of the four selected states, said the dietary preferences of adults in their 60s sometimes vary greatly from the preferences of adults in their 90s. Local officials in two states said that providers of congregate and home-delivered meal programs in their states noted that "older old" adults may prefer meals that include meat and potatoes, while "younger old" adults may prefer lighter meals, such as those consisting of soups and salads. In addition, providers in three selected states we visited told us they serve many older adults from diverse cultural or ethnic backgrounds, older adults with dietary preferences such as a vegetarian diet, and older adults who do not eat certain foods because of their religious beliefs. To meet the varied dietary preferences of the older adults they serve, and to increase the likelihood that meals will help participants meet their nutritional needs, some providers reported taking various approaches. For example, one congregate meal site we visited offered a lunch entree choice of either meat and potatoes or a sandwich wrap with vegetables. Another congregate meal site offered a hot lunch, plus a soup and salad bar, in a restaurant-like setting. Providers also tried to incorporate certain foods on their menus that reflect the cultural or ethnic preferences of participants. For example, the adult day care provider and the congregate and home-delivered meal providers we visited in one selected state in the South all noted that their menus aim to include certain foods associated with their regional culture, such as red beans and rice.
Conclusions By 2060, older adults are expected to make up nearly one-quarter of the total U.S. population. HHS and USDA play important roles in promoting the health of this growing population, both through administration and oversight of federal nutrition assistance programs that serve older adults and through efforts to update federal nutrition guidelines, which serve as the basis for nutrition requirements in these programs. While federal nutrition guidelines provide broad guidance on nutrition for healthy populations, they do not address the varying nutritional needs of older adults, such as those who have common chronic conditions or face age-related changes. The 2025-2030 Dietary Guidelines update is expected to include a focus on nutritional guidance for older adults, but no formal plan to include this focus has been developed. A plan to incorporate the varied needs of older adults into the dietary guidelines could assist older adults with making their own dietary decisions and help providers of nutrition assistance programs better meet older adults' nutritional needs. Further, HHS's and USDA's administration and oversight of the nutrition assistance programs are not fully addressing some of the challenges that states and local providers indicated hinder their efforts to meet older adults' nutritional needs. For example, providers we spoke with faced challenges meeting older adults' needs for certain meal accommodations, and information from HHS and USDA regarding promising approaches to meeting those needs is limited or not sufficiently disseminated. In addition, both HHS's and USDA's efforts to oversee older adult meal programs have limitations that affect the information available at the federal level to ensure programs are meeting older adults' nutritional needs. Recommendations for Executive Action We are making the following five recommendations.
The Administrator of ACL should work with other relevant HHS officials to document the department's plan to focus on the specific nutritional needs of older adults in the 2025-2030 update of the Dietary Guidelines for Americans, which would include, in part, plans to identify existing information gaps on older adults' specific nutritional needs. (Recommendation 1) The Administrator of ACL should direct regional offices to take steps to ensure states are monitoring providers so that meals served in the congregate and home-delivered meal programs are consistent with federal nutrition requirements. (Recommendation 2) The Administrator of FNS should take steps to improve its oversight of CACFP meals provided in adult day care centers. For example, FNS could amend its approach for determining federal onsite reviews of CACFP meal providers to more consistently include adult day care centers. (Recommendation 3) The Administrator of ACL should centralize information on promising approaches for making meal accommodations to meet the nutritional needs of older adult participants in the congregate and home-delivered meal programs, for example, in one location on its National Resource Center on Nutrition and Aging website, to assist providers' efforts. (Recommendation 4) The Administrator of FNS should take steps to better disseminate existing information that could help state and local entities involved in providing CACFP meals meet the varying nutritional needs of older adult participants, as well as continue to identify additional promising practices or other information on meal accommodations to share with CACFP entities. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to HHS and USDA for review and comment. In its written comments, HHS agreed with our three recommendations to ACL (Recommendations 1, 2, and 4).
In response to our first recommendation, HHS stated that ACL plans to work with the Office of Disease Prevention and Health Promotion and other relevant HHS officials and agencies to document HHS's plans to emphasize the specific and varying nutritional needs of older adults in the 2025-2030 update. HHS also stated that ACL plans to acquire the services of a registered dietician with specialized expertise in older adults' nutritional needs. In response to our second recommendation, HHS stated that ACL's program and evaluation offices will collaborate on the development of plans to ensure state compliance with federal requirements. In response to our recommendation that ACL centralize information on promising practices, HHS stated that ACL will award a contract in fiscal year 2020 for a new National Resource Center on Nutrition and Aging to, among other things, centralize information on promising approaches so nutrition services providers can access it easily. HHS's comments are reproduced in appendix II. In oral comments, USDA officials, including the Directors of the FNS Child Nutrition Program Monitoring and Operational Support Division and the Child Nutrition Program Nutrition Education, Training, and Technical Assistance Division, generally agreed with our two recommendations to FNS (Recommendations 3 and 5). In response to our recommendation to improve CACFP oversight, FNS officials agreed with the intent of improving oversight of CACFP meals provided in adult day care centers. These officials also noted that activities and changes in this area must be consistent with statutory and regulatory requirements, balanced with current priorities given the size of the program, and mindful of resources available to perform additional oversight.
While we recognize that the CACFP serves fewer adults than children and that FNS oversight resources are limited, we believe that FNS is in a position to identify the best way to improve its oversight of CACFP meals provided in adult day care centers while taking into consideration the availability of its resources. In response to our recommendation to share additional information with state and local CACFP entities, FNS officials stated that there is existing guidance and information on the adult component of the CACFP, which it communicates through multiple channels. These officials said that some states and localities may be unaware of these resources, in part, because of high turnover among staff who administer these programs. FNS officials acknowledged that they could do more to increase awareness of existing resources, as well as continue to identify and share new practices to help entities providing CACFP meals in adult day care centers address challenges associated with providing meals that meet nutritional needs of older adults. USDA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of HHS and USDA and interested congressional committees. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: Objectives, Scope, and Methodology Our report examines (1) the relationship of older adults' nutrition to health outcomes and the extent to which federal nutrition guidelines address older adults' nutritional needs; (2) the extent to which federal nutrition assistance programs serving older adults have nutrition-related requirements and how these requirements are overseen; and (3) challenges program providers face in meeting the nutritional needs of older adults. In addition to the methods discussed below, to address all three research objectives we reviewed relevant federal laws, regulations, and guidance. Federal Data To provide context for all three research objectives, we examined federal projections of growth in the older adult population covering the time period of 2016 through 2060. We relied on the U.S. Census Bureau's projections of the U.S. population by various demographic traits including age, sex, race, Hispanic origin, and nativity. We assessed the reliability of these data by reviewing technical documentation describing the methodology, assumptions, and inputs used to produce the 2017 National Population Projections, upon which the 2020-2060 estimates are based. We determined these data to be sufficiently reliable for the purposes of our report. To provide context on the federal nutrition assistance programs serving older adults, we examined federal data on expenditures and participation in these programs for the most recent fiscal year available. For the congregate and home-delivered meal programs, we relied on State Program Report data from fiscal year 2017, the most recent data available at the time of our review, from the U.S. Department of Health and Human Services' (HHS) AGing Integrated Database. These data are submitted on an annual basis by states to HHS's Administration for Community Living (ACL).
For program expenditure and participation data for the Child and Adult Care Food Program, Commodity Supplemental Food Program, Senior Farmers' Market Nutrition Program, and Supplemental Nutrition Assistance Program (SNAP), we relied on fiscal year 2018 data from the U.S. Department of Agriculture's (USDA) National Data Bank, which are submitted through USDA's Food and Nutrition Service (FNS) grantee reports. We also relied on fiscal year 2017 data from USDA's Characteristics of SNAP Households report on the number of older adult participants in SNAP, the most recent year for which these data were available. To assess the reliability of these data, we interviewed FNS officials and reviewed relevant technical documentation. We determined that these data were sufficiently reliable for the purposes of our report. Literature Search To address our first objective on what is known about the relationship between older adults' nutrition and health outcomes, we conducted a literature search to identify relevant peer-reviewed studies on the relationship between nutritional needs and health outcomes of older adults covering the time period of 2013 through 2018. We searched research databases, such as ProQuest, Scopus, and Ebsco (AgeLine, EconLit, and CINAHL), using search terms such as nutrition and aging and dietary guidelines for seniors. We reviewed the results of the search to identify publications that (1) included a literature review and synthesis of studies on the connection between nutrition and health outcomes for older adults, including the factors that may affect older adults' nutritional needs, such as age-related changes and (2) emphasized the general diet-health relationship among broad populations of older adults.
Because these broader studies were most relevant to our objective, we excluded studies that (1) focused on the relationship between a specific food or nutrient and a single health outcome (e.g., salt and cardiovascular disease) or (2) studied a narrow group of older adults (e.g., residents of a single U.S. state or region). We conducted detailed reviews of these studies to assess the soundness of the reported methods and the credibility and reliability of the conclusions drawn by the authors, and deemed them to be sufficiently credible, reliable, and methodologically sound for the purposes of our report. Site Visits To help inform all of our research objectives and gather information about nutrition assistance programs that provide meals and food packages to older adults at the local level, we conducted visits to 25 local meal and food distribution sites in four states: Arizona (5 sites), Louisiana (10 sites), Michigan (6 sites), and Vermont (4 sites) between December 2018 and March 2019. We interviewed officials from a variety of entities involved in administering these programs in each of the states, including 20 state and area agencies on aging and 20 local providers; observed meal services and food distribution; and held conversations with older adult program participants. We selected states and local sites within those states based on a high percentage of adults 60 or older, and to ensure variation across the sites in geographic location, urban and rural location, percentage of older adults in poverty, and program provider and site type. We visited a wide variety of site locations including, but not limited to, senior centers, community centers, adult day care centers, and senior housing. 
Because we relied on a nongeneralizable sample of sites and states, the views of the entities we interviewed do not represent the views of all providers of federal nutrition assistance programs providing meals and food packages to older adults or participants in those programs. Prior to each selected state visit, we gathered information from state and area agencies on aging responsible for administering these programs using semi-structured interview questions. We collected information on state and area agency on aging roles in administering nutrition assistance programs for older adults, federal nutrition requirements in these programs, oversight and monitoring of programs, partnerships to help meet the nutritional needs of older adults, outreach efforts, assistance from federal agencies, and challenges in administering the programs and meeting the nutritional needs of the older adult populations served. At each site, we gathered information from local providers and participants using semi-structured interview questions. We collected information on program provider operations; characteristics of the population served; efforts to meet the nutritional needs of the population served; other nutrition-related services; challenges with meeting the nutritional needs of the population and efforts to address them; outreach efforts; and assistance received from regional, state, and federal agencies. We also collected perspectives on food received and program impacts on health outcomes from those participating at sites. In addition, at each site we observed food and meal delivery and the approximate number of participants and staff operating the site. Interviews and Reviews of Relevant Documents To inform all three research objectives, we interviewed officials from HHS's Administration for Community Living and USAID's Food and Nutrition Service in their national office and all of their regional offices.
We also interviewed a broad range of national groups, including advocacy, research, and service provider organizations involved in nutrition assistance programs serving older adults. These included AARP, Feeding America, Food Research and Action Center, Jean Mayer USDA Human Nutrition Research Center on Aging, Mathematica Policy Research, Meals on Wheels America, National Academies, National Association of Area Agencies on Aging, National Association of Nutrition and Aging Services Programs, National Association of States United for Aging and Disabilities, National Commodity Supplemental Food Program Association, and National Council on Aging. To inform our first objective on the extent to which federal nutrition guidelines address older adults’ nutritional needs, we reviewed the federal guidance reports that detail the nutrition requirements for Americans, including those reports supporting the 2015-2020 Dietary Guidelines for Americans and the body of work on the Dietary Reference Intakes. To obtain information specific to our second objective on how nutrition assistance programs serving older adults are overseen, we reviewed relevant federal program documents on monitoring and oversight of these programs. In addition, we reviewed relevant studies conducted on behalf of HHS that evaluated the impact of its nutrition assistance programs on older adults’ nutrition. These studies evaluated program participants’ diet quality and nutrient intake, as well as program administration, among other things. We assessed the reliability of results in these evaluations by interviewing officials responsible for conducting these evaluations. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Rachel Frisk and Theresa Lo (Assistant Directors), Claudine Pauselli (Analyst-in-Charge), Jessica Ard, and Vernette G. 
Shaw made key contributions to this report. Also contributing to this report were Priyanka Sethi Bansal, Tim Bushfield, Daniel Concepcion, Kathleen van Gelder, Sarah Gilliland, Isabella Guyott, Serena Lo, Stacy Ouellette, Amber Sinclair, Joy Solmonson, Almeta Spencer, Curtia Taylor, Adam Wendel, and Sirin Yaemsiri.
Why GAO Did This Study The U.S. population is aging and, by 2030, the U.S. Census Bureau projects that one in five Americans will be 65 or older. Recognizing that adequate nutrition is critical to health, physical ability, and quality of life, the federal government funds various programs to provide nutrition assistance to older adults through meals, food packages, or assistance to purchase food. This report examines (1) the relationship of older adults' nutrition to health outcomes and the extent to which federal nutrition guidelines address older adults' nutritional needs, (2) nutrition requirements in federal nutrition assistance programs serving older adults and how these requirements are overseen, and (3) challenges program providers face in meeting older adults' nutritional needs. GAO reviewed relevant federal laws, regulations, and guidance and conducted a comprehensive literature search; visited a nongeneralizable group of four states—Arizona, Louisiana, Michigan, and Vermont—and 25 meal and food distribution sites, selected for a high percentage of adults 60 or older and for variation in urban and rural locations and poverty levels; and interviewed officials from HHS, USDA, states, national organizations, and local providers. What GAO Found Research shows that nutrition can affect the health outcomes of older adults. Federal nutrition guidelines provide broad guidance for healthy populations, but do not focus on the varying nutritional needs of older adults. Department of Health and Human Services (HHS) data show that the majority of older adults have chronic conditions, such as diabetes or heart disease. Research shows that such individuals may have different nutritional needs. As older adults age, they may also face barriers, such as a reduced appetite, impairing their ability to meet their nutritional needs. HHS plans to focus on older adults in a future update to the guidelines, but has not documented a plan for doing so.
Documenting such a plan could help ensure guidelines better address the needs of the population. Of the six federal nutrition assistance programs serving older adults, four have requirements for food that states and localities provide directly to participants, and federal agencies oversee states' monitoring of these requirements. In HHS's and U.S. Department of Agriculture's (USDA) meal programs, states must ensure meals meet requirements. Yet, HHS does not gather information from states, such as approved menus, to confirm this, and localities in two of the four selected states said state monitoring of menus was not occurring. Further, USDA regional officials told GAO they lack information on how meal programs operate at adult day care centers as they primarily focus on other sites for their on-site reviews. Additional monitoring could help HHS and USDA ensure meal programs meet nutritional requirements and help providers meet older adults' varying needs. In the states GAO selected, meal and food providers of the four nutrition programs with nutrition requirements reported various challenges, such as an increased demand for services. Providers in three of the four states reported having waiting lists for services. Providers of HHS and USDA meal programs in all four states also reported challenges tailoring meals to meet certain dietary needs, such as for diabetic or pureed meals. HHS and USDA have provided some information to help address these needs. However, providers and state officials across the four states reported that more information would be useful and could help them better address the varying nutritional needs of older adults. What GAO Recommends GAO is making five recommendations, including that HHS develop a plan to include nutrition guidelines for older adults in a future update, and that HHS and USDA improve oversight of meal programs and provide additional information to meal providers to help them meet older adults' nutritional needs. 
HHS and USDA generally concurred with our recommendations.
Background When Mexico and the United States created the Mérida Initiative in 2007, the Mexican government pledged to tackle crime and corruption and the U.S. government pledged to address domestic drug demand and the illicit trafficking of firearms and bulk currency to Mexico. During the early years of the Mérida Initiative, much of the U.S. funding for the initiative was intended to purchase equipment to support Mexican federal security forces, including about $591 million for aircraft and helicopters from fiscal years 2008 through 2010. In 2011, U.S. and Mexican officials agreed to expand the scope of the initiative to prioritize institution building, and later increased the initiative's focus on community engagement and human rights efforts. According to State, an Executive Order on TCOs issued in 2017 signaled that the focus of the Mérida Initiative would shift to countering TCOs' illicit activities, such as drug production and the cross-border movement of drugs, cash, and weapons. State/INL and USAID are the lead U.S. agencies for developing the Mérida Initiative's programming. In these roles, State/INL and USAID work with GOM officials to help outline Mérida Initiative projects' plans, objectives, and intended impact. State/INL and USAID both manage and fund the Mérida Initiative with the support of a wide range of project implementers, including DOJ, DHS, DOD, contractors, nongovernmental organizations, and international organizations. All Mérida Initiative projects are currently funded through three appropriations accounts: the International Narcotics Control and Law Enforcement (INCLE) account, administered by State/INL, and the Economic Support Fund (ESF) and Development Assistance (DA) accounts, from which Mérida Initiative project funding is administered by USAID.
According to State/INL and USAID officials, GOM does not provide direct funding to Mérida Initiative projects. Instead, State/INL considers any GOM funding for justice, national security/defense, and public order and domestic security as indirectly supporting the goals of the Mérida Initiative. In addition, according to USAID data, Mexican nonprofit and private sector entities provided about $23 million in matching funds for USAID-funded Mérida Initiative projects active from fiscal year 2014 through 2018. State and USAID Allocated Over $700 Million for Mérida Initiative Projects from Fiscal Year 2014 through 2018 From fiscal year 2014 through 2018, State/INL and USAID allocated about $723 million for Mérida Initiative projects under the following five U.S. government-wide foreign assistance funding categories: Civil Society, Counternarcotics, Good Governance, Rule of Law and Human Rights, and Transnational Crime. U.S. agencies use these government-wide categories to broadly define foreign assistance programs for planning, budgeting, and reporting, which provides a common language to describe programs across agencies, countries, and regions. Over 80 percent of the funding, or $589 million, went toward Rule of Law and Human Rights, and Counternarcotics efforts. (See fig. 1.) Funding allocated for the Mérida Initiative has decreased over time, from $178 million in fiscal year 2014 to $139 million in fiscal year 2018. Of the $723 million, State/INL allocated about $542 million and USAID allocated about $182 million. (See fig. 2.) 445 State and USAID Mérida Initiative Projects Were Active from Fiscal Year 2014 through 2018, Supporting a Wide Range of Efforts Four hundred and forty-five Mérida Initiative projects were active from fiscal year 2014 through 2018, with State/INL funding 388 projects and USAID funding 57 generally larger projects.
State/INL and USAID each categorized their projects with greater specificity than the broad categories used for overall allocated funding. Both State/INL and USAID funded projects to assist Mexico's transition to a newly reformed criminal justice system that includes oral arguments and the presumption of innocence, categorized by State/INL as "criminal justice" and by USAID as "rule of law." In addition to projects related to criminal justice, most funding for State/INL projects was for those that focused on border and port security, professionalizing the police, and counternarcotics. For example, numerous State/INL projects provided training; technical assistance; and equipment—including for drug detection, border surveillance, and forensic drug laboratories—for Mexican law enforcement, border security, justice sector, and military officials. In addition to rule of law projects, most funding for USAID projects was for those that focused on crime and violence prevention, human rights, and transparency and accountability. Similar to State/INL, numerous USAID projects provided technical assistance to Mexican justice sector officials. Other USAID projects were designed to engage with civil society groups to address crime, violence, and corruption and to promote trust in government. State/INL and USAID implemented these projects primarily through contracts, grants, cooperative agreements, interagency agreements, and agreements with international organizations. State/INL Mérida Initiative Projects Focused on Criminal Justice, Border and Port Security, Professionalizing the Police, and Counternarcotics State/INL-funded Mérida projects focused on criminal justice, border and port security, professionalizing the police, and counternarcotics. State/INL categorizes its Mérida Initiative projects under priority lines of effort developed by State/INL Mexico specifically for the Mérida Initiative.
These lines of effort are defined in State/INL's Mexico Country Plan: Advance Criminal Justice, Counternarcotics, Disrupt Illicit Finance, Professionalize the Police, and Secure Border and Ports. While State/INL uses these lines of effort to categorize State/INL-funded Mérida Initiative projects, these lines of effort also align with the broader U.S. government-wide foreign assistance funding categories outlined in figure 1 above. See table 1 for a description of State/INL's lines of effort for the Mérida Initiative and how these lines of effort align with the U.S. government-wide foreign assistance funding categories. The State/INL projects with the highest percentage of State/INL funding were those focused on Advancing Criminal Justice (28 percent) and Securing Borders and Ports (25 percent). Law enforcement-related categories—Counternarcotics and Professionalize the Police—also constituted a substantial proportion (30 percent) of State/INL funding, as shown in figure 3. For a list of State/INL's highest dollar value projects active from fiscal year 2014 through 2018 by these categories, see appendix I. Below are some examples of State/INL-funded Mérida Initiative projects supporting the agency's five lines of effort in Mexico: Advance Criminal Justice. These 99 projects, with State/INL funding estimated at $241 million, focused on providing training, technical assistance, and equipment to Mexican justice sector and law enforcement officials as they transition to a new judicial system. These projects also provided tools and guidance to civil society to promote the rule of law and trust in government. For example: One DOJ project supported criminal investigations and prosecutions by providing training to GOM officials to improve their forensic laboratories, and by providing technical assistance to forensic scientists testifying as expert witnesses in criminal cases.
Through another project, DOJ developed training materials and instructors to assist the GOM Attorney General’s office with the mechanics of Mexico’s judicial system reforms and to create a culture of professionalization within the Attorney General’s office. Some criminal justice projects engaged with civil society, such as a project that worked to promote a culture of lawfulness among Mexican children who attend elementary school in high-crime areas. Counternarcotics. These 76 projects, with State/INL funding estimated at $115 million, focused on assisting Mexican agencies in countering the illicit drug trade in Mexico, primarily through technical assistance and equipment, including for forensic labs, drug detection, and surveillance. For example: Intelligence surveillance and reconnaissance technology has been provided to the Mexican Navy to expand its capacity to conduct counternarcotics operations. The Organization of American States implemented a project that expanded Mexico’s drug treatment courts, which offer rehabilitation services and other nonpunitive alternatives for drug offenders who would otherwise face time in prison. Disrupt Illicit Finance. These nine projects, with State/INL funding estimated at $17 million, provided equipment, training, and a public awareness campaign to assist the GOM in its efforts to address transnational criminal organizations’ (TCOs’) money laundering and other illicit financial activities. For example: One DOJ project provided anti–money laundering training to Mexican prosecutors at the state and federal levels. Another United Nations Office on Drugs and Crime project aims to combat money laundering through a public awareness campaign and complaint call center in Mexico. Professionalize the Police. Many of these 97 projects, with State/INL funding estimated at $144 million, provided training and technical assistance to Mexican law enforcement officials at all levels to improve their effectiveness, accountability, and adherence to the rule of law.
One project included surveys of law enforcement personnel and civil society to better inform effective police practices and to link these practices with levels of citizen trust. Two other projects supported tours to the United States for Mexican officials to study issues related to gender-based violence and women’s access to justice. Secure Borders and Ports. These 68 projects, with State/INL funding estimated at $217 million, focused on various efforts and equipment for GOM border and military officials—including equipment for biometrics, surveillance, and telecommunications—to secure Mexico’s air, land, and sea borders and ports. For example, DHS’s Customs and Border Protection provided mentors and training to GOM border officials to improve their capacity to stem the northward flow of migrants entering Mexico along its southern border. USAID Mérida Initiative Projects Focused Primarily on Crime and Violence Prevention and Rule of Law USAID-funded Mérida Initiative projects focused on crime and violence prevention, rule of law, transparency and accountability, and human rights efforts. USAID categorizes its Mérida Initiative projects under the following development objectives developed by USAID Mexico and outlined in USAID’s Mexico Country Development Cooperation Strategy: Crime and Violence Prevention, Human Rights, Rule of Law, and Transparency and Accountability. Similar to State/INL’s Mérida Initiative lines of effort, USAID Mexico uses its development objectives to categorize USAID-funded Mérida Initiative projects. These objectives also align with the broader U.S. government–wide foreign assistance funding categories outlined in figure 1. See table 2 for a description of USAID’s development objectives for the Mérida Initiative, and how these objectives align with the U.S. government–wide foreign assistance funding categories.
The USAID projects with the highest percentage of USAID funding were those focused on Rule of Law (39 percent) or Crime and Violence Prevention (22 percent), with Transparency and Accountability and Human Rights constituting slightly smaller percentages (15 percent and 14 percent, respectively). While funding for USAID projects was concentrated in the Rule of Law category, the number of USAID projects was spread relatively evenly among the categories of Crime and Violence Prevention, Human Rights, and Transparency and Accountability, as shown in figure 4. For a list of USAID’s highest dollar value projects active from fiscal year 2014 through 2018, see appendix II. Below are some examples of USAID-funded Mérida projects supporting the agency’s four development objectives in Mexico for the Mérida Initiative: Crime and Violence Prevention. These 20 projects, with USAID funding estimated at $70 million, worked with civil society, nongovernmental organizations, the private sector, and GOM officials to implement various activities, such as training, workshops, and outreach efforts, to mitigate crime and violence. A number of these projects focused on building the skills and knowledge of at-risk youth, such as those in high-crime areas or at risk of dropping out of school. For example, one project aimed to help at-risk youth in communities and detention centers return to school, gain employment, and improve life skills. Human Rights. These 15 projects, with USAID funding estimated at $46 million, worked to advance human rights through various activities that, for example, focused on protecting journalists and human rights defenders, preventing forced disappearances, and promoting freedom of expression. For example, one project supported the GOM’s efforts to implement its National Human Rights Plan by implementing clear procedures in line with international human rights standards. Rule of Law.
These three projects, with USAID funding estimated at $126 million, primarily provided technical assistance and outreach to assist Mexican officials as they transitioned to a new judicial system. For example, two large projects—one $68 million project and one $56 million project that has since closed—provided a wide range of technical assistance to GOM judges, public defenders, and attorneys general. Another smaller project worked with law schools to adapt their curricula to the new criminal justice system. Transparency and Accountability. These 15 projects, with USAID funding estimated at $49 million, engaged with Mexican officials and civil society to address corruption and promote ethical behavior. Projects helped Mexican officials develop and implement anticorruption policies, strengthen transparency in their procurement processes, and implement GOM’s National Anti-Corruption System. For example, one project aimed to deter corruption and support transparency by improving the quality of investigative and data journalism in Mexico. State and USAID Primarily Use Contracts, Grants, and Interagency Agreements to Implement Mérida Initiative Projects State/INL and USAID implement Mérida Initiative projects primarily through contracts, grants, and agreements with international organizations, but State/INL also employs agreements with U.S. agencies (DOJ, DHS, and DOD). See tables 3 and 4 for the number of and funding for each type of State/INL and USAID funding mechanism, respectively. Agency Comments We provided a draft of this report to State and USAID for review and comment. State and USAID both provided technical comments, which we incorporated as appropriate. USAID also provided formal comments, which are reproduced in appendix III.
In these comments, USAID noted that, with its support, the Mérida Initiative has been instrumental in advancing reforms to the Mexican criminal justice sector, promoting human rights, building strong and resilient communities, and improving integrity and accountability. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and the USAID Administrator. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. Appendix I: Highest Dollar Value State/INL Mérida Initiative Projects, by Category, Active from Fiscal Year 2014 through 2018 This appendix provides a detailed list of the 10 highest dollar value Department of State, Bureau of International Narcotics and Law Enforcement Affairs (State/INL) Mérida Initiative projects active from fiscal year 2014 through 2018 by State/INL’s five lines of effort—Advance Criminal Justice, Counternarcotics, Disrupt Illicit Finance, Professionalize the Police, and Secure Border and Ports. State/INL provided the details in tables 5 to 9 below. Appendix II: Highest Dollar Value USAID Mérida Initiative Projects Active from Fiscal Year 2014 through 2018 This appendix provides a detailed list of the 10 highest dollar value United States Agency for International Development (USAID) Mérida Initiative projects active from fiscal year 2014 through 2018. USAID provided the details in table 10 below. 
Appendix III: Comments from the United States Agency for International Development Appendix IV: GAO Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, James Michels (Assistant Director), Teresa Heger (Analyst-in-Charge), Terry Allen, Ashley Alley, Lilia Chaidez, Martin DeAlteriis, Neil Doherty, Francisco Enriquez, John Hussey, and Andrew Kincare made key contributions to this report.
Why GAO Did This Study For more than a decade, the activities of transnational criminal organizations have led to increased crime, violence, and lawlessness in parts of Mexico. In October 2007, Mexico and the United States created the Mérida Initiative, a bilateral partnership to address crime and violence and enhance the rule of law in Mexico. State/INL and USAID are the lead U.S. agencies for developing programming for the Mérida Initiative. Both State/INL and USAID also manage and fund the Mérida Initiative with the support of a wide range of project implementers, including the Departments of Defense (DOD), Homeland Security (DHS), and Justice (DOJ); contractors; nongovernmental organizations; and international organizations. GAO was asked to describe funding and projects the United States has provided under the Mérida Initiative. This report describes (1) State/INL and USAID funding for the Mérida Initiative from fiscal year 2014 through 2018 and (2) the number and type of Mérida Initiative projects active during these years. GAO reviewed State and USAID documents and data, and interviewed officials from State, USAID, DOD, DHS, and DOJ in Washington, D.C., and Mexico City. What GAO Found From fiscal year 2014 through 2018, the Department of State's (State) Bureau of International Narcotics and Law Enforcement Affairs (State/INL) and the U.S. Agency for International Development (USAID) allocated about $723 million for the Mérida Initiative, which aims to mitigate the impact of the drug trade on the United States and reduce violence in Mexico. State/INL and USAID allocated this funding under the following government-wide foreign assistance funding categories: Civil Society, Counternarcotics, Good Governance, Rule of Law and Human Rights, and Transnational Crime. U.S. agencies use these categories to broadly define foreign assistance programs for planning, budgeting, and reporting across agencies, countries, and regions. 
Over 80 percent of the funding went toward Rule of Law and Human Rights, and Counternarcotics efforts. Of the $723 million, State/INL allocated about $542 million and USAID allocated about $182 million. There were 445 State/INL and USAID Mérida Initiative projects active from fiscal year 2014 through 2018. State/INL funded 388 of the projects and USAID funded 57, which tended to be larger with higher funding amounts than State/INL projects. State/INL projects generally focused on providing training and assistance to Mexican officials from the justice sector, border security, military, and law enforcement, as well as equipment, including for forensic drug laboratories, drug detection, and border surveillance. Many USAID projects were intended to engage with Mexican civil society organizations and the public to address corruption, promote trust in government, or prevent crime and violence, such as through skill-building for youth, efforts to advance human rights, or technical support for judicial system development. State/INL and USAID implemented their projects mainly through contracts, grants, and interagency agreements, as well as through agreements with international organizations, such as the United Nations Office on Drugs and Crime and the Organization of American States.
Background Measures of Economic Mobility Intergenerational economic mobility describes how people’s incomes in adulthood compare with their parents’ incomes in the past or at similar ages. Several measures are used to assess the degree of economic mobility, but fundamentally, a society exhibits more economic mobility when incomes are less related to parents’ income. By contrast, where economic mobility is lacking, individuals are more likely to remain at the economic position of their upbringing. Economists traditionally measure economic mobility in three ways: Absolute economic mobility - whether people make more money (in inflation-adjusted dollars) than their parents did at a similar age (see fig. 1). For example, in 1970, 92 percent of 30-year-olds made more money in inflation-adjusted terms than their parents did at similar ages, implying an absolute economic mobility rate of 92 percent. Relative economic mobility - whether people are at a higher income percentile compared to their parents’ income percentile in the past. For example, according to one estimate, there was an 8 percent chance that a person born in the United States from 1980-1982 to parents in the bottom 20 percent of the income distribution would move to the top 20 percent of the income distribution for their birth cohort by the time he or she was approximately 30 years old. Intergenerational income elasticity (IGE) - the strength of the relationship between a person’s income and their parents’ income. The higher the number, between zero and one, the greater the relationship between parental income and children’s adult income (see fig. 2). For example, if IGE is zero, there is complete mobility between generations; parents’ income does not influence their children’s future income at all. If IGE is 1, there is no mobility between generations, as everyone stays at the same income level in which they were born. 
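The first and third measures above can be made concrete with a small numerical sketch. The parent-child income pairs below are invented for illustration (GAO's studies used large linked datasets); the IGE here is computed the standard way, as the slope of a regression of log child income on log parent income:

```python
import math

# Hypothetical inflation-adjusted (parent, child) income pairs.
pairs = [(20_000, 28_000), (35_000, 33_000), (50_000, 61_000),
         (80_000, 90_000), (120_000, 110_000)]

# Absolute mobility: the share of children earning more than their parents.
absolute = sum(child > parent for parent, child in pairs) / len(pairs)

# IGE: slope of log child income regressed on log parent income.
xs = [math.log(p) for p, _ in pairs]
ys = [math.log(c) for _, c in pairs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
ige = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
       / sum((x - mx) ** 2 for x in xs))

print(f"absolute mobility: {absolute:.0%}")  # 60%: 3 of 5 children out-earn parents
print(f"IGE: {ige:.2f}")                     # between 0 (full mobility) and 1 (none)
```

In this toy sample the IGE comes out well above zero, illustrating "persistence of advantage": children of higher-earning parents tend to earn more themselves.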
IGE measures the “persistence of advantage” from one generation to the next at all points along the economic ladder and therefore captures how much inequality is passed down through generations. A single standard measure of intergenerational economic mobility does not exist, and some researchers use more than one. Each of the three measures provides some insight into the level of opportunity available for people to better their economic circumstances relative to the circumstances of their birth. Many factors may be related to the level of economic opportunity available to an individual, including but not limited to overall macroeconomic conditions (e.g., economic growth), education, race, gender, geography (the region, commuting zone, county, or neighborhood in which a person lives), health care, and neighborhood characteristics. Characteristics of Millennials Millennials have a number of unique characteristics that distinguish them from previous generations. According to data from SCF, Millennials are a more diverse group than previous generations—40 percent of Millennial households are headed by someone who belongs to a racial or ethnic minority group. Millennials are also the most educated generation to date in terms of college degree attainment (see fig. 3). An estimated 62 percent of Millennial households had someone with at least an associate’s degree in 2016. Not only did Millennial households have more college degrees overall, a greater percentage of Millennial households in 2016 had advanced degrees, including master’s, doctorate, and professional degrees, compared to previous generations at similar ages. Meanwhile, only 44 percent of Millennials 25-34 years old were married or living with a partner and had children in 2016, while 54 percent of Baby Boomers were partnered and had children by age 34. 
Economic Mobility is Linked to Parental Income, and Varies by Race and Geography The 20 studies that we reviewed indicate that economic mobility has remained flat or declined in the United States over the last 40 years; none of the studies we reviewed found that economic mobility has increased (see text box). Additionally, estimates of intergenerational income elasticity (IGE) suggest that economic status persists across generations, particularly for the lowest and highest income groups. Studies identified parental income, race, and geography as key determinants of one’s economic mobility. These findings could have future implications for Millennials. Parental Income is a Key Predictor of Economic Mobility, Especially among the Lowest and Highest Earners The percentage of people making more money than their parents at the same age declined between 1970 and 2010 (see fig. 4). One study attributes this decline to an unequal distribution of economic growth, noting it has primarily benefited the highest earners. It remains to be seen if this downward trend will continue for the Millennial generation. The research we reviewed indicates that economic mobility varies by race. The findings on economic mobility and race suggest that not all groups of Millennials may experience the same levels of economic opportunity. Blacks experience less upward intergenerational mobility than whites. In particular, black men are less likely to be upwardly mobile and more likely to be downwardly mobile than white men, even with similar levels of education. Meanwhile, children of low-income white families have had higher rates of upward mobility over time than black children with similar socioeconomic characteristics. Some minority groups have higher economic mobility than others.
One study that examined additional racial groups found high earnings among children of low-income Asian households, and found that Asians are likely to remain at income levels comparable to or above those of white Americans, though these findings are largely driven by first-generation immigrants. Additionally, Hispanic Americans are moving up the income distribution across generations, although their overall economic mobility is somewhat lower than that of whites. Meanwhile, American Indians are more likely than whites to be downwardly mobile, even those in the wealthiest 1 percent. Childhood Location Affects Economic Mobility in Adulthood, but Outcomes Differ by Subgroups The research we reviewed indicates that the region, state, commuting zone, county, and most especially, the neighborhood in which one grows up affect economic mobility and future earnings, but these effects vary by demographic and income groups. Economic mobility varies by location. One study found that areas within the United States offer disparate opportunities, with some localities supporting higher rates of economic mobility than others (see fig. 5). In particular, counties in the southeastern United States were found to have lower levels of economic mobility than counties in the rural Midwest. Another study found that a child’s neighborhood has a statistically significant effect on life chances, and that growing up in a low-income, metropolitan neighborhood has a strong negative effect on future earnings. Conversely, growing up in an affluent neighborhood can have almost as large an impact on future earnings as completing a bachelor’s degree. Specific neighborhood characteristics drive differing rates of economic mobility. Several researchers linked economic mobility to certain area and neighborhood characteristics, including rates of poverty, racial segregation, economic inequality, the proportion of single-parent households, and school quality.
Researchers identified racial segregation as a neighborhood characteristic broadly associated with lower mobility. One study found that economic segregation is also negatively associated with economic mobility. One study identified three neighborhood characteristics that are correlated with a weaker relationship between race and mobility: low poverty rates, a high percentage of low-income black fathers present, and low levels of racial bias among whites. According to this study, neighborhoods with these characteristics had higher mobility for black boys and a relatively small black-white mobility gap. The effects of geography on future earnings vary by race, socioeconomic status, and gender. The effects of race and neighborhood characteristics on economic mobility are related and hard to disentangle. For example, one study found that black boys have lower incomes in adulthood than white boys who grow up in the same neighborhood in 99 percent of Census tracts, even when accounting for income. This highlights the effect of race on economic mobility when children face the same neighborhood conditions. Conversely, the same study also found that 4.2 percent of black children grow up in neighborhoods with the characteristics associated with higher levels of mobility, compared to 62.5 percent of white children. This is in line with another study that found that neighborhoods can amplify racial inequality across generations. Another study notes that Hispanic and black children tend to live in neighborhoods with low mobility for those of their racial group, whereas white children tend to live in neighborhoods with higher mobility rates for whites. Neighborhood effects can also vary by socioeconomic status and gender. 
Regarding socioeconomic status, one study found that place may matter less for children from higher-income families, as they may be better able to insulate themselves from the effects of local conditions (e.g., by switching to private schools if public schools are weak). Regarding gender, the same study found that neighborhood matters more for boys than for girls. Across studies, common themes emerged that suggest Millennials might not have the same level of economic mobility enjoyed by their parents’ generation. While the studies in our review varied in their estimates of key measures of economic mobility and its determinants, the studies were consistent in their findings that absolute economic mobility is declining, relative mobility is flat or declining, and economic status is somewhat rigid from one generation to the next. Moreover, the studies that examined drivers of mobility found that a child’s race and neighborhood have a significant effect on their economic mobility as adults. This is particularly relevant for Millennials because of their racial and ethnic diversity. It is not clear whether Millennials’ diversity and higher levels of education will lead to a reversal of these trends, or whether these trends will continue into the future. Millennials Have Similar Average Incomes and Lower Average Net Worth Compared to Previous Generations Despite Being More Educated If economic mobility is flat or falling, knowing how a cohort is doing at the beginning of its members’ working lives sheds light on the potential challenges that lie ahead as the cohort ages and moves toward retirement. We analyzed data from the Survey of Consumer Finances (SCF) to provide a snapshot of how Millennials are faring economically as young adults.
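Comparisons of incomes across 1989, 2001, and 2016 households first require converting nominal dollars into constant ("real") dollars. A minimal deflation sketch; the index values below are placeholders for illustration, not actual CPI figures:

```python
# Placeholder price-index values by year (NOT actual CPI data).
price_index = {1989: 124.0, 2001: 177.1, 2016: 240.0}

def to_2016_dollars(amount: float, year: int) -> float:
    """Convert a nominal dollar amount from `year` into 2016 dollars."""
    return amount * price_index[2016] / price_index[year]

# A nominal $30,000 income in 1989, expressed in 2016 dollars:
print(round(to_2016_dollars(30_000, 1989)))
```

Only after this kind of adjustment does "flat income across generations" become a meaningful statement, since a dollar in 1989 bought considerably more than a dollar in 2016.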
We compared the financial circumstances of Millennial households in 2016 to Generation X households in 2001 and Baby Boomer households in 1989; in each year, we estimated measures of financial well-being for households in which the head of household, or any spouse or partner, was 25-34 years old. We found that incomes across the three generations have remained relatively flat, which is consistent with our review of economic mobility studies. We also found that Millennials have lower net worth, which we define as assets minus debt. With respect to assets, we found that Millennials are saving for retirement, but the accumulation of wealth through homeownership has decreased as fewer Millennials are buying homes. In terms of debt, Millennials hold large amounts of student debt compared to previous generations, but are also more likely to be college educated. Millennial Households Had Similar Average Incomes as Previous Generations Despite Higher Educational Attainment Rates Millennial households in 2016 had similar average real incomes compared to previous generations at similar ages, according to our analysis of SCF data (see fig. 6). Our analysis showed that median incomes were also similar across young adult households in the Millennial and Baby Boomer generations and that Millennial households had slightly lower median incomes than Generation X households (see fig. 7). We also examined average and median incomes among households with college degrees and found similar results. These findings suggest that, on average, real income levels have been stagnant for young adult households across these three generations. As described in figure 3, Millennial households are more likely to be college-educated compared to previous generations. While college graduates generally have higher incomes than non-college graduates, the income of degree holders has remained flat over time. A recent study from the Federal Reserve Bank of St.
Louis found that the college income premium, the increase in earnings for college graduates compared to non-college graduates, does exist. According to this study, in the first quarter of 2018, college graduates received weekly wages that were 80 percent higher than high school graduates. However, college graduates in recent years have not made higher incomes than college graduates in the past, as they have had relatively flat inflation-adjusted wages since 2001. Millennials Had Lower Levels of Net Worth Than Previous Generations, With Lower Homeownership Rates and Higher Student Debt Overall, Millennial households in 2016 had significantly lower average and median net worth, defined as assets minus debt, than Generation X households at similar ages in 2001, according to our analysis of SCF data (see figs. 8 and 9). This may be explained by lower homeownership rates than previous generations, as well as larger amounts of student debt. Median net worth was much lower for Millennial households in the bottom 50 percent of the net worth distribution compared to previous generations. While median net worth for the lowest net worth quartile of Baby Boomers and Generation X was around zero, it was substantially negative for Millennials in the lowest quartile, indicating that debt was greater than assets (see fig. 10). The median net worth of those Millennial households in the highest 25 percent was also significantly lower than the median net worth of those at the top in previous generations. We analyzed both average and median net worth to examine how net worth was concentrated among young households. Our analysis showed that estimates of median net worth were much lower than estimates of average net worth across all three generations, suggesting that net worth was unevenly distributed among these households and that a relatively small number of households held a substantial percentage of total net worth. 
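The gap between average and median net worth described above is the summary-statistic signature of a skewed distribution. A toy example with invented household values shows how a few high-net-worth households pull the mean far above the median:

```python
from statistics import mean, median

# Invented net worth (assets minus debt) for ten young households;
# negative values mean debt exceeds assets.
net_worth = [-30_000, -10_000, 0, 5_000, 12_000,
             20_000, 35_000, 60_000, 150_000, 900_000]

print(f"average net worth: ${mean(net_worth):,.0f}")   # pulled up by the top household
print(f"median net worth:  ${median(net_worth):,.0f}")  # what the typical household has
```

Here the single $900,000 household drives the mean to $114,200 while the median sits at $16,000, which is why GAO reports both statistics when wealth is concentrated.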
As a part of our analysis of net worth, we examined specific types of assets and debt, including homeownership, retirement resources, and student loans, and found the following: Millennials had lower rates of homeownership compared to previous generations. Our analysis of SCF data showed that a significantly lower percentage of Millennial households in 2016 were homeowners compared to previous generations in 2001 and 1989 (see fig. 11). We estimated that about 43 percent of Millennial households owned homes, compared to 51 percent of Generation X households and 49 percent of Baby Boomers. As a result of lower rates of homeownership, Millennial households had less mortgage debt, but also less home equity, compared to households in other generations at similar ages. Home equity has historically been an important source of retirement security as people age. It is unclear whether Millennial households will reach similar rates of homeownership as previous generations, but it is possible they may be more likely to buy homes at older ages compared to previous generations. Millennials were as likely to have retirement resources as previous generations. A similar percentage of Millennials had retirement resources in 2016 (either defined benefit pensions or retirement accounts, such as an IRA, 401(k), or other account-type pension), compared to Baby Boomers in 1989 and Generation X in 2001 (see fig. 12). Millennials have a similar average value of retirement accounts as Generation X (see fig. 13). This may be due, in part, to auto-enrollment policies, which create default retirement savings accounts for workers and are relatively new. Millennials have a higher average value of defined contribution retirement accounts compared to Baby Boomers, likely because of the shift over time in the retirement system from defined benefit pensions to account-type pensions, such as 401(k)s.
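The value of saving in account-type pensions early in a working life comes from compounding. A sketch under assumed figures (a flat 5 percent annual return and a $5,000 end-of-year contribution, both invented for illustration, not drawn from the SCF data):

```python
def future_value(annual_contribution: float, years: int, rate: float = 0.05) -> float:
    """Balance after contributing at the end of each year with a fixed return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + annual_contribution
    return balance

# Starting at age 25 (40 years of saving) versus starting at 35 (30 years):
early = future_value(5_000, 40)
late = future_value(5_000, 30)
print(f"start early: ${early:,.0f}  start late: ${late:,.0f}")
```

Under these assumptions the early saver ends with roughly 80 percent more than the late saver despite contributing only a third more, which is why early retirement saving tends to pay off disproportionately.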
Student loans were the key source of debt that distinguished Millennials from previous generations. We found that Millennial households were significantly more likely to have student loans than previous generations at similar ages (see fig. 14). We measured the potential burden of student loan debt by estimating student loan-to-income ratios and found that this measure was significantly higher for Millennial households in 2016 compared to previous generations when they were young. On average, Millennial households in 2016 had a student loan-to-income ratio that exceeded 100 percent compared to ratios of under 50 percent in previous generations (see fig. 15). While the student loan-to-income ratio has increased over time for households of all incomes, it has most greatly affected lower-income households. For example, while we estimated that the average student loan-to-income ratio was about 100 percent for young households in the bottom income quartile in 2001, we estimated it was significantly higher for young households in the bottom income quartile in 2016 (see fig. 16). These findings suggest that, on average, it could take Millennials several more years’ worth of total income to pay back total household student loan debt (without interest). Although Millennial households have more student debt than previous generations, they may also benefit from federal student loan repayment plans and forgiveness programs. Households that qualify for these programs may not have to repay their student debt in full, though to date about half of student loans are still under standard repayment plans and few potentially qualified borrowers have been granted forgiveness (see textbox). 
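The student loan-to-income ratio used in this analysis is simply total household student debt divided by annual household income. A minimal sketch with invented household figures:

```python
def loan_to_income_ratio(student_debt: float, annual_income: float) -> float:
    """Household student debt as a share of annual household income."""
    return student_debt / annual_income

# Invented household: $52,000 of student loans on $48,000 of annual income.
ratio = loan_to_income_ratio(52_000, 48_000)
print(f"{ratio:.0%}")  # above 100%: more than a year's income in student debt
```

A ratio over 100 percent, as estimated on average for Millennial households in 2016, means it would take more than a full year of total household income to retire the debt even before interest.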
Income-Driven Repayment (IDR) plans, available through the Department of Education for federal student loans, generally base student loan payment amounts on a borrower’s income and extend repayment periods from the standard 10 years to up to 25 years, with any remaining balance forgiven at the end of the period. Some borrowers may qualify for very low payments, and these payments count toward loan forgiveness at the end of the repayment period. As of September 2018, almost half ($414 billion) of the $859 billion in outstanding Direct Loans were being repaid by student loan borrowers using IDRs. The long-term effects of higher educational attainment, along with higher education loans, on Millennial households are unclear. It is possible that those with advanced degrees may be better situated over time to repay their student loans. However, while an estimated 18 percent of Millennial households in 2016 had advanced degrees (master’s degree or above), an estimated 45 percent had student loans, indicating that many Millennial households with student loans did not have an advanced degree. In addition, while the college income premium is real, high levels of student debt may affect the ability to accumulate wealth, which may be why average net worth levels have decreased for college graduates. The Public Service Loan Forgiveness (PSLF) program forgives federal student loan balances for eligible borrowers who have made 10 years of qualifying payments while in certain public service jobs. As of March 2019, the Department of Education reported that 1,089,846 borrowers had an approved Employment Certification Form, the first step in potentially qualifying for PSLF. However, 99 percent of applicants were denied PSLF, highlighting the confusion with respect to applying and ultimately getting debt relief from these programs. The Millennial generation is different from previous generations on several measures of financial well-being, so there is uncertainty about how they will do financially as they age.
On one hand, they have higher levels of educational attainment, and college graduates earn substantially more than non-college graduates. On the other hand, despite Millennials completing college degrees at higher rates than previous generations, average and median income are not higher for Millennials overall, which is consistent with flat intergenerational economic mobility and persistence of economic status across generations. Millennials also have less home equity than past generations because they are buying homes at lower rates. Given relatively stagnant average income across generations, it is not clear whether Millennials will begin earning more and buying homes later in life or whether lower homeownership rates will persist over time. Millennials are saving for retirement at rates comparable to Generation X, and saving early in life should benefit Millennials in the long run. Yet, they have significantly higher levels of student loan debt than past generations. Some Millennials may ultimately qualify for programs that help them lower their federal student loan debt, but it remains to be seen how these factors will affect Millennials’ financial circumstances in the long run, including in retirement. Agency Comments We provided a draft of this report for review and comment to the Departments of Labor (DOL) and the Treasury and to the Social Security Administration (SSA). We received technical comments from DOL, which we incorporated as appropriate. Treasury and SSA provided no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Labor and the Treasury as well as the Administrator of the Social Security Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) what is known about intergenerational income mobility, and (2) how the financial circumstances of Millennials compare to previous generations. In order to determine what is known about intergenerational income mobility (which we use interchangeably with “economic mobility”) in the United States, we conducted a literature review of relevant, recent economic studies. We identified the majority of the studies we reviewed through systematic searches of databases such as ProQuest, Scopus, and EBSCO using search terms such as “economic mobility,” “income mobility,” “intergenerational income mobility,” or “intergenerational income elasticity.” We searched for scholarly and peer-reviewed publications, working papers, government reports, and think tank reports. We also reviewed studies recommended during expert interviews as well as some included in the bibliographies of key studies on the topic of economic mobility. We used four criteria to target our literature search. In order to be included, studies had to: (1) include original estimates of at least one of three measures of intergenerational economic mobility: absolute economic mobility, relative economic mobility, and intergenerational income elasticity; (2) focus on the United States; (3) be published in the past 5 years (2014-2019), or 2 years if a working paper (2017-2019); and (4) be published in a U.S.-based publication. We then reviewed over 280 abstracts and further evaluated approximately 90 potentially appropriate studies, eliminating ones that did not meet our four criteria. 
A technical review of each study by at least two GAO economists included an assessment of key findings about economic mobility, methodology, data, assumptions, and limitations. Twenty studies met our four criteria and, based on our technical review, had sufficient methodological rigor for the purpose of providing information on economic mobility. Researchers attempting to estimate the degree of economic mobility in the United States face challenges in acquiring datasets that measure income precisely and track incomes across generations with sufficient sample sizes. Potential reasons why researchers produce different estimates of economic mobility measures include: Differences in Datasets and Their Respective Limitations. Different datasets may not equally represent every segment of the population. For example, studies making use of the Panel Study of Income Dynamics (PSID) are not generalizable to populations not included in large numbers when the PSID began, such as recent immigrants and institutionalized populations. In addition, some studies rely on data that are not fully representative of the entire income distribution, either because they do not include a sufficient sample of households with very high income or, conversely, households with very low or zero earnings. Some datasets do not capture individuals who are not working or not filing taxes during the period of analysis. For instance, in one study making use of tax data, the authors noted that if parents never file a tax return, they cannot be linked to their child. In that study, parents of approximately 5 percent of children were not identified. In some cases, the data capture a limited age range, which leaves open the possibility of somewhat different results among different age ranges. In addition to different sampling strategies, datasets also capture different variables for each individual or household observed. 
Even the most comprehensive datasets currently available may lack the data to completely account for factors that may influence mobility, such as changes in family structure over time or detailed individual demographic characteristics for both parent and child households. Differences in Treatment or Construction of Variables. Estimates of intergenerational income mobility can be affected by choices the researcher makes, such as selecting a price deflator to inflation-adjust parents’ incomes; selecting the ages at which children and parents will be compared, accounting for changing trends in household size and composition; determining the value of non-cash benefits (e.g., employer-sponsored health insurance); and determining work-related costs associated with dual-earner households (e.g., child care). Some studies impute earnings for non-tax filers, and different methods of imputation may lead to slightly different results; in other studies, those with no reported income or observations with other missing variables (e.g., demographic characteristics) may simply be dropped from the dataset. How “parent” and “child” are defined may also differ across datasets (e.g., a parent could be the first adult to claim a child on their tax return, or could be an adult male living with a minor child in a household). Additionally, some studies required the researchers to construct datasets that matched parents and children at different points in time. Each researcher makes choices about how to handle the data, which can lead to different estimates. While we did not perform checks on these constructed data, the studies in our review generally included descriptions of the data and methodologies used as well as the difficulties and limitations associated with dataset construction, which we evaluated in our technical review. Differences in Choice of Economic Mobility Measure and Model Specification. 
Each measure of economic mobility provides a slightly different lens on mobility and has different interpretations. Absolute economic mobility, which compares the inflation-adjusted income of parents and children at similar ages, tends to reflect trends in overall economic growth and distribution of that growth. For instance, 92 percent of 30-year-olds in 1970 made more in inflation-adjusted terms than their parents did at that age, while about half of children born in the 1980s grew up to make more money than their parents by age 30. The difference may largely have been due to higher economic growth and a more equitable distribution of that growth along the income distribution from 1940 to 1970, whereas growth was slower and distributed differently between 1970 and the present. IGE offers a different metric with different limitations. Studies that estimate IGE regress log child income on log parent income. This conveniently yields a coefficient that can be interpreted as “the percent change in child income given a 1 percent change in parent income.” However, such estimates tend to be unstable because the relationship is nonlinear and sensitive to the treatment of children with zero or very small incomes (because the log of zero is mathematically undefined). IGE is very sensitive to assumptions about the income of those with missing income data and typically excludes households with zero earnings. Additionally, elasticities are sensitive to changes in cross-sectional income distributions (like during recessions). If children’s income distribution becomes more unequal, then the elasticity will become larger, all else equal. Despite these limitations, based on our technical review, all of the studies summarized in the report are of sufficient methodological rigor for the purpose of providing information on economic mobility. 
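The log-log regression just described can be illustrated with a short, self-contained sketch. The incomes below are hypothetical, the ordinary least squares slope is computed by hand for a single regressor, and observations with zero income are dropped before taking logs, which is the sensitivity noted above:

```python
import math

def ige(parent_incomes, child_incomes):
    """OLS slope from regressing log child income on log parent income.

    The coefficient can be read as the percent change in child income
    associated with a 1 percent change in parent income. Households with
    zero (or negative) income are dropped, because log(0) is undefined.
    """
    pairs = [(p, c) for p, c in zip(parent_incomes, child_incomes)
             if p > 0 and c > 0]
    xs = [math.log(p) for p, _ in pairs]
    ys = [math.log(c) for _, c in pairs]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    var = sum((x - x_bar) ** 2 for x in xs)
    return cov / var

# Hypothetical incomes constructed so that log(child income) is exactly
# linear in log(parent income) with slope 0.4; the estimate recovers 0.4.
parents = [20_000, 40_000, 60_000, 100_000]
children = [5 * p ** 0.4 for p in parents]
print(round(ige(parents, children), 3))  # -> 0.4
```

Real estimates differ because actual data are noisy and how zero-income households are imputed or excluded changes the result, as the studies reviewed here note.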
The authors of the studies we reviewed were generally aware of and transparent regarding the limitations of the datasets they worked with, and carried out analyses to test their results for robustness to different assumptions. Although there were differences in study datasets and methodologies, common themes emerge from the body of literature we reviewed. For example: None of the studies we reviewed found economic mobility to be increasing—all found it to be either flat or declining. While there was variation among studies regarding the exact degree to which parental income influences individuals’ income as adults, all studies we reviewed that examined parental income found it to be an important determinant of economic mobility. None of the studies that examined race found blacks to have higher mobility than whites. The studies we reviewed that examine geography agree that different locations have different economic mobility and that part of this variation is connected to the characteristics of a given place (such as school quality or level of segregation), not just to the characteristics of people who choose to live there. In other words, while the studies varied in their point estimates of various measures of economic mobility and its determinants, there was broad consensus among the studies regarding the sign (positive versus negative) and interpretation of the estimates. Additionally, these studies represent an advance in the data and analysis capabilities relative to past studies that examined economic mobility, largely because improved computing power has enabled more complex analyses of large datasets comprised of millions of records. See table 1 for the list of studies included in our review. 
Analysis of Millennials’ Financial Circumstances After considering possible datasets, we chose the Survey of Consumer Finances (SCF) for this analysis because the data are appropriate for estimating measures of income and wealth across generations, including asset and debt categories of interest like homeownership and student debt. The SCF is a triennial survey of U.S. households sponsored by the Board of Governors of the Federal Reserve System in cooperation with the Department of the Treasury. Every 3 years, the SCF interviews a different sample of households and aims to be representative of households across economic strata, including the top of the wealth distribution. The SCF provides information on household balance sheets, including detailed information on assets and debts, as well as pensions, labor force participation, and demographic characteristics at the time of interview. We compared the financial circumstances of young households across 3 years of the SCF, as each year was representative of a generation (or birth cohort) when someone in the household (either the head of household or a spouse or partner) was 25-34 years old, following similar previous GAO work. Data Limitations Our analysis of SCF data allowed us to make intergenerational comparisons, but not to follow the same individuals over time, so we were not able to compare children to their parents using these data. While our analysis allowed us to make comparisons, it did not allow us to make statements as to why Millennials are different from or similar to other generations. Moreover, our data analysis focused on relatively older Millennials, whose experiences may be different from those of Millennials born later in the generation, especially due to the timing of the Great Recession. 
The SCF dataset is based on self-reported data and as a result, the data are subject to nonsampling error, including the ability to get information about all sample cases; difficulties of definition; differences in the interpretation of questions; and errors made in collecting, recording, coding, and processing data. Also, demographic analyses using these data may be limited based on the sample size needed to produce reliable estimates. Lastly, we cannot make predictions about the future financial circumstances of Millennials based on this snapshot in time. There are also limitations with the SCF with respect to making comparisons by gender. In a household headed by a single person, the head is taken to be the single core individual. However, in households headed by a central couple who is of mixed sex, the head is taken to be the male in the household. This assumption makes it difficult to make reliable comparisons by gender. Finally, the SCF generally asks questions of household heads and their spouses (and not others living in the household), so it likely underemphasizes young adults who were still living with their parents, which is more prevalent for the Millennial generation. Thus, there may be some selection bias in the SCF with respect to relatively more financially well-off Millennials. For the data used in our analysis, we reviewed documentation and tested the data for anomalies. We determined that these data were sufficiently reliable for the purposes of this report. Analysis of SCF We defined young households in each generation as those in which the household head or any spouse or partner was 25-34 years old. We compared Millennial households in 2016 to Generation X households in 2001 and Baby Boomer households in 1989. Baby Boomers were born from 1946 to 1964 and were 25-43 years old in 1989, so we used the 1989 SCF for Baby Boomer households when they were young adults. 
Generation X individuals were born from 1965 to 1981 and were 20-36 years old in 2001, so we used the 2001 SCF for Generation X households when they were young adults. Millennials were born from 1982 to 2000 and were 16-34 years old in 2016, so we used the 2016 SCF for Millennial households when they were young adults. We used the SCF’s measures of income, net worth, assets, and debt from the summary extract data as measures of financial circumstances. We defined household income as the sum of income across all sources. Income includes a family’s cash income, before taxes, for the full calendar year preceding the survey. The components of income are wages, self-employment and business income, taxable and tax-exempt interest, dividends, realized capital gains, benefits from social safety net programs, pensions and withdrawals from retirement accounts, Social Security, alimony and other support payments, and miscellaneous sources of income for all members of the primary economic unit in the household. We defined household net worth as assets minus debt. Assets include financial assets, including liquid assets in bank accounts, certificates of deposit, money market accounts, stocks and bonds, cash value of life insurance, retirement accounts, and other financial assets. Assets also include nonfinancial assets, such as the value of vehicles, primary residences, other residential property, businesses, and other nonfinancial assets. Debt includes mortgages, home equity loans, credit card balances, education loans, vehicle loans, other installment loans, and other debt, including loans against pensions or life insurance. 
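As a check on the cohort definitions above, the age ranges quoted for each generation follow from simple arithmetic on birth years and survey years:

```python
def ages_in_year(birth_start, birth_end, survey_year):
    """(youngest, oldest) ages reached by a birth cohort in a survey year."""
    return survey_year - birth_end, survey_year - birth_start

print(ages_in_year(1946, 1964, 1989))  # Baby Boomers in 1989 -> (25, 43)
print(ages_in_year(1965, 1981, 2001))  # Generation X in 2001 -> (20, 36)
print(ages_in_year(1982, 2000, 2016))  # Millennials in 2016  -> (16, 34)
```

Each range contains the 25-34 window used to define young households in that survey year.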
Households could have financial resources outside of net worth, including future income from defined benefit (DB) plans or Social Security; however, we did not attempt to estimate the actuarial present value of these financial resources in our net worth calculation given the long time horizon to retirement and the amount of uncertainty associated with such a measurement. In addition, in our professional judgment, the inclusion of these financial resources would not have altered our finding that Millennials have lower net worth compared to previous generations; the inclusion of these financial resources would likely have widened the gap further between Millennials and previous generations because previous generations had greater access to DB plans than the Millennial generation. We estimated means and medians for variables of interest, both overall and by quartile. We estimated the standard errors and constructed the confidence intervals taking into account the dual-frame sample design in order to estimate the sampling variance for these estimates. One part of the design is a standard, multistage area-probability design, while the second part is a special over-sample of relatively wealthy households. This is done in order to accurately capture financial information about the population at large as well as characteristics specific to the relatively wealthy. The two parts of the sample are adjusted for sample nonresponse and combined using weights to make estimates from the survey data nationally representative of households overall. Unless otherwise indicated, estimates in this report are statistically significant at the p<.05 level, and the error bars in the figures represent the 95 percent confidence intervals for the estimates. We conducted this performance audit from November 2018 to December 2019 in accordance with generally accepted government auditing standards. 
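As a simplified illustration of the estimates described above, the sketch below computes a survey-weighted mean and a normal-approximation 95 percent confidence interval. The values, weights, and standard error are hypothetical, and the sketch does not reproduce the SCF’s dual-frame variance estimation:

```python
def weighted_mean(values, weights):
    """Survey-weighted mean: each household counts in proportion to its weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def confidence_interval(estimate, standard_error, z=1.96):
    """Normal-approximation confidence interval; z = 1.96 gives 95 percent."""
    return estimate - z * standard_error, estimate + z * standard_error

# Hypothetical net worth values and survey weights; the standard error
# would come from the survey's variance estimation, not this sketch.
mean = weighted_mean([10_000, 50_000, 90_000], [2.0, 1.0, 1.0])
low, high = confidence_interval(mean, standard_error=5_000)
print(mean)                     # -> 40000.0
print(round(low), round(high))  # -> 30200 49800
```

In the report’s figures, the error bars represent intervals of exactly this kind around each weighted estimate.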
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments In addition to the contact named above, Michael J. Collins (Assistant Director), Jessica K. Rider (Analyst-In-Charge), Jessica Mausner, Kathleen McQueeney, and Layla Y. Moughari made key contributions to this report. Also contributing to this report were James Bennett, Alicia Cackley, Pin-En Annie Chou, Justin Dunleavy, Sarah C. Gilliland, Gina M. Hoover, Susan J. Irving, Dan Luo, Sheila R. McCoy, John W. Mingus Jr., Corinna Nicolaou, Oliver M. Richard, Vernette G. Shaw, Joseph Silvestri, Almeta Spencer, Frank Todisco, and Adam Wendel.
Why GAO Did This Study The idea that individuals should have the opportunity to economically advance beyond the circumstances of their birth is a familiar element of the American Dream. In an economically mobile society, it is possible for individuals to improve their economic circumstances through effort, education, investment, and talent. In addition to opportunities through the private, public, and nonprofit sectors, the federal government also promotes economic mobility through many efforts, including supporting education, job training, business incentives and development, and child health and well-being programs. However, a recent survey indicates that, over approximately the last two decades, fewer people report being satisfied with the opportunity to get ahead by working hard. According to recent studies, members of the Millennial generation, which comprises the largest portion of the American workforce, report feeling overwhelmed by their financial situation and concerned about their future financial security. GAO was asked to review trends in economic mobility and Millennials' economic situation compared to previous generations. This report examines (1) what is known about intergenerational income mobility, and (2) how the financial circumstances of Millennials compare to previous generations. To perform this work GAO conducted an extensive literature review and analyzed data from the nationally representative Survey of Consumer Finances. What GAO Found Recent research indicates that, across three key measures, economic mobility in the United States is limited. Specifically, the Millennial generation (those born between 1982 and 2000) might not have the same opportunity as previous generations had to fare better economically than their parents. 
According to studies GAO reviewed, the share of people making more money than their parents at the same age (absolute mobility) has declined over the last 40 years, and the chances of moving up the income distribution (relative mobility) have been flat over time. Using a third measure of economic mobility (intergenerational income elasticity), researchers have found that income in adulthood is linked to how much a person's parents made, and that between one-third and two-thirds of economic status is passed down from parents to children. This is especially true of the lowest and highest income groups. Researchers also identified race and geography as key determinants of an individual's economic mobility. Millennials have different financial circumstances than Generation X (born 1965-1981) and Baby Boomers (born 1946-1964), and in light of flat or declining economic mobility, there is uncertainty about how they will fare financially as they age. A snapshot of data that allowed GAO to compare Millennials aged 25-34 to the previous two generations at similar ages showed that Millennial households were more likely than other generations to be college educated; however, incomes have remained flat across the three generations, implying that Millennials have not yet benefited from the potential additional lifetime income earned by college graduates. Millennial households had significantly lower median and average net worth than Generation X households at similar ages (see figure), especially among those with low net worth. Median net worth for the lowest quartile of Baby Boomers and Generation X was around zero, but it was substantially negative for Millennials, indicating that debt was greater than assets for the median low net worth Millennial household. Regarding assets, a significantly lower percentage of Millennials owned homes compared to previous generations at similar ages, but had retirement resources at rates comparable to Generation X and Baby Boomers. 
Finally, Millennials were more likely to have student loan debt that exceeded their annual income. It remains to be seen how these factors will affect Millennials' financial circumstances in the long run, including retirement.
Background In the U.S. commercial airline industry, passengers travel on network, low-cost, and regional airlines. With thousands of employees and hundreds of aircraft, network airlines support large, complex hub-and-spoke operations, which provide service at various fare levels to many destinations. Low-cost airlines generally operate under a low-cost business model, which typically includes providing point-to-point service using fewer types of aircraft. Regional airlines typically operate small aircraft—turboprops or regional jets with up to 100 seats—and generally provide service to smaller communities on behalf of network airlines. Airlines rely on a wide variety of IT systems to schedule and transport passengers; some of these IT systems interface with networks operated by travel-booking sites, other airlines, and the FAA. These IT systems touch all phases of a passenger’s travel experience, including booking, check-in, boarding, and baggage, as well as airline operations behind the scene, including flight planning, crew scheduling, and flight dispatch, according to FAA. In addition, aviation stakeholders explained that airline IT systems operate in a dynamic, data-intensive environment that demands around-the-clock availability and real-time information. In recent years, the introduction of new mobile applications and telecommunications infrastructure has added to the myriad systems and network connections now critical to an airline’s operations. Airlines face challenges in maintaining or enhancing their IT systems. For example, some airlines operate a web of IT systems that were developed over many years as manual systems transitioned to electronic and computer-processed functions. Replacing software and upgrading these older systems, such as reservations and crew scheduling, can be complicated undertakings as airlines serve millions of travelers and need to keep data flowing across their networks. 
For example, in its financial filings, Southwest pointed to the significant challenges and costs involved in introducing new IT capabilities while managing existing systems. Increasingly dependent on the use of IT systems to run its ongoing operations, the company recently completed a multi-year initiative to transition to a new third-party reservation system through Amadeus, among other investments. In addition, a wave of industry consolidation stemming from airline bankruptcies in the late 2000s has affected airline IT systems, requiring significant sustained focus among airlines on merging different IT infrastructures necessary to support worldwide flight operations without interruption. For instance, we previously found that United struggled to integrate computer and reservation systems following its merger with Continental in 2010, although the airline has subsequently completed this transition, according to airline representatives. Likewise, in 2015 American pointed to its reliance on technology when discussing principal risks posed by the integration of its computer, communications, and other technology systems with those of US Airways following the merger of the two airlines. Additionally, some airlines rely on regional partners or third-party IT providers to help manage certain IT systems, such as reservations, crew scheduling, and flight dispatch, further adding to the variety of systems that airlines depend on to run their operations. Moreover, the airline industry is going through a transformation as it shifts to digital merchandizing and retailing to better serve consumers, a process which requires access to real-time information, according to an industry stakeholder. Finally, the speed of technology evolution has accelerated, making it a constant and iterative process to keep systems refreshed and operating in sync, a situation that poses additional challenges, according to a stakeholder. 
Passengers may be affected by an airline IT outage in different ways depending, in part, on the type and severity of the outage—for example, whether the outage stems from a software glitch or a hardware failure—and the system affected. (See fig. 1.) Effects can range from standing in line to be checked in by a ticket agent instead of using a mobile application to delayed and canceled flights if a hardware failure forces the airline to ground all of its flights until the system is back online. System failures may have cascading effects across other airline IT systems or operations, as well. For example, an outage in a flight dispatch system could cause hours-long delays for subsequent flights. Likewise, aviation stakeholders noted that crew positioning can hinder recovery from an outage as delayed flight crews “time out,” further extending the effects of an outage. In addition to these effects, passengers and airlines can also face higher costs from delayed or canceled travel, including increased operational expenses facing airlines as crews and aircraft sit idle, as well as indirect costs, such as those faced by travelers as their itineraries are delayed or canceled. FAA and DOT Have Limited Roles in Overseeing Airline IT Systems and Addressing Effects from Outages on Passengers FAA’s Role Is Primarily Initiating Traffic Management Initiatives Requested by Airlines FAA plays a key, but limited, operational role in responding to airline IT outages. As previously noted, FAA is responsible for ensuring the safe, efficient operation of the national airspace system (NAS). Agency officials we interviewed emphasized that airline IT outages have a limited effect on FAA’s management of the NAS because such outages tend to affect the demand for airspace, not its capacity. As a result, FAA officials explained that if flights are delayed or canceled because of an airline IT outage, the NAS is often less congested for those that remain flying. 
However, in managing the air-traffic control system, FAA is responsible for initiating and administering traffic management initiatives (such as a ground stop) if requested by an airline experiencing an IT outage. For example, an airline might request that FAA initiate a ground stop if the airline is unable to report flight dispatch information to the FAA, such as the weight and balance of aircraft. FAA works with airlines to accommodate flights back into the NAS when the outage is over. Once an airline recovers from an outage, FAA may also need to initiate traffic management initiatives if demand exceeds capacity in the system— potentially causing delays both for the airline that experienced the outage, as well as others. FAA does not routinely collect data about airline IT outages—which fall outside of its management of the NAS, according to agency officials— although it does collect data on NAS operations, which could include some information about these events. Specifically: The National Traffic Management Log (NTML)—the real-time narrative log of NAS traffic management initiatives kept by air traffic controllers—includes information about ground stops or other initiatives such as time the stop was put in place, affected airports, and when the initiative was lifted. Log entries may also include additional information about the outage, if such information is provided to air traffic control by the airline experiencing it. The Operations Network (OPSNET) system, among others, collects operational data, including air traffic operations and delay data to analyze the performance of the FAA’s air traffic control facilities. However, according to agency officials, data on the effects of airline IT outages (including delay and cancellation data related to airline IT outages) are discarded because information about airline-caused flight disruptions do not provide instructive information to FAA about whether the agency is efficiently operating the NAS. 
FAA does not directly oversee airline IT systems related to reservations, check-in, baggage, and boarding or their use, according to agency officials. These systems are managed by the airlines themselves. For airline IT systems that interface with FAA’s operational systems, such as automated systems used in air traffic control, FAA works with airlines to ensure that any output (i.e., data feeds) interfaces correctly with the agency’s systems. FAA may provide observations to the airline if its IT systems are not providing accurate information, such as if crews are not being correctly scheduled and tracked, fuel plans are not accurate, or flight plans are not correctly calculated and observable. For Passengers, DOT Helps Ensure Compliance with Consumer Protections, Which May Be Triggered by Certain Airline IT Outages DOT’s Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division are responsible for helping ensure airlines’ compliance with passenger protection requirements and educating passengers on their rights. Airline IT outages are not specifically addressed by any of DOT’s consumer protection regulations. Rather, when these outages occur, they may trigger broader consumer protections afforded passengers. For example, airlines are required by DOT’s interpretation of the statutory prohibition on unfair and deceptive practices to provide refunds for flights that are canceled or significantly delayed if a passenger declines any rerouting that the airline may offer. In the case of delay, however, what amounts to a significant delay is not defined in this policy, and as discussed below, individual airlines may or may not set their own thresholds. According to agency officials, DOT is currently conducting a review of air carriers’ handling of involuntary changes to passengers’ travel itineraries. 
DOT also regulates compliance through its tarmac delay rule, which requires airlines to mitigate or avoid consumer harm in the event of a lengthy tarmac delay. In addition to these consumer protection regulations and policies, DOT oversees airlines’ compliance with obligations included in airline contracts of carriage or customer service plans. These contracts and plans must be publicly posted by airlines on their websites. As we have previously reported, DOT helps ensure airlines’ compliance with its passenger protection requirements by educating airlines on new regulations or clarifying existing regulations, responding to airlines’ questions, and reviewing airlines’ consumer service policies. According to DOT officials, the agency encourages proactive reporting of incidents by airlines, such as airline IT outages, including a brief description of the incident and any steps taken by the airline to provide accommodation to affected consumers. DOT also receives and investigates complaints from passengers and uses complaint data to identify which airlines to inspect and whether to begin investigations that may result in fines or enforcement actions. According to agency officials, DOT received 126 complaints that explicitly mentioned a domestic airline IT outage from 2015 through 2017. These complaints involved five such outages. For comparison, in all, the agency received between 17,000 and 21,000 complaints per calendar year during that timeframe, according to DOT’s Air Travel Consumer Report. According to DOT officials, complaints that explicitly mentioned an airline IT outage largely mirror in substance those received for other causes of flight disruptions. (These complaints are discussed in more detail below.) According to DOT officials, no investigations have been carried out focusing solely on airline IT outages, but DOT investigations have included airline IT outages that contributed to violations of DOT’s consumer protection regulations. 
For example, DOT found that an IT outage affecting Delta's operational systems, including gate management and flight dispatch systems, caused significant surface congestion and resulted in a violation of tarmac delay regulations. This violation was among those included in enforcement proceedings resulting in a civil penalty and consent order to the airline. Finally, to monitor airline on-time performance and baggage handling and to provide information to consumers, DOT requires certain airlines to report data to BTS monthly, including the causes of flight delays and cancellations. However, the causes are grouped into broad categories and do not specify IT outages as a cause. BTS, which is an independent statistical agency within DOT, publishes summary data from reporting air carriers on the number of domestic on-time, delayed, canceled, and diverted flights on its website. DOT's Office of Aviation Enforcement and Proceedings also publishes a monthly Air Travel Consumer Report with this information. We discuss these data in greater detail below.

Information on Airline IT Outages and Their Effects Is Limited, but Suggests That Outages Result in a Range of Passenger Inconveniences

We Identified 34 IT Outages Affecting Almost Every Domestic Airline in Our Review

Using a variety of information sources, we identified 34 airline IT outages from 2015 through 2017 affecting 11 of the 12 airlines in our review. No government data, academic literature, or other information source could be used to determine a comprehensive count of airline IT outages, and information is also limited regarding the types, causes, and effects of these incidents. Additionally, airlines do not regularly share detailed data about their IT outages publicly, such as the number of flights or passengers affected or the technical cause of the outage, although general information about these incidents is sometimes provided on their websites and social media accounts or to the press.
To identify airline IT outages in the absence of other sources of information, we validated a preliminary list of outages developed through a review of open source information, including media coverage. This preliminary list was validated through a combination of interviews with the airlines and third-party IT providers and a review of publicly available airline information, FAA NTML log entries, and DOT consumer complaints. Through our validation process, airline representatives and others identified additional airline IT outages that had not been reported or acknowledged publicly by airlines or third-party IT providers, reflecting the variation in quantity or quality of information available regarding these events. For example, we found more information about IT outages that had nationwide or multi-day consumer or operational effects because these incidents garnered more coverage—and often an official airline response—as compared to those that were of shorter duration or affected a regional carrier or smaller number of flights, passengers, or airports. Additionally, we found less or incomplete information on outages at third-party IT providers and regional carriers because their effects were dispersed across multiple airlines. We found that the number and severity of flight disruptions associated with the airline IT outages we identified varied widely. About 85 percent (29 of 34) of our identified outages resulted in some flight disruptions, including 5 outages we identified that caused over 800 delays or cancellations. However, we were unable to verify the exact number of disrupted flights caused by each outage. At least 14 outages resulted in a ground stop, some of which lasted for several hours, according to a review of FAA's NTML logs. We identified seven outages that had no associated flight disruptions, although they inconvenienced customers in other ways.
For example, during these incidents customers experienced problems buying tickets online, checking into flights on an airline’s website, or using frequent flier benefits. Because no comprehensive data are available on airline IT outages and their related effects, we could not compare these incidents with the effects on flights caused by other disruptive events, such as severe weather like hurricanes or snowstorms. However, FAA analysis of two of the IT outages that caused over 800 flight disruptions found that the number of delays or cancellations resulting from these outages was on par with or worse than those caused by severe weather in the same months the outages occurred. Likewise, representatives from one airline stated that operational effects from airline IT outages are comparable to severe weather events, although outages occur much less frequently. An aviation industry representative noted that these events are typically unexpected, hindering the ability of airlines to react and recover. By contrast, disruptions from weather may be forecast ahead of time, allowing airlines to prepare for predicted disruptions, including accommodating customers, adjusting flight crews and schedules, and pre-positioning aircraft, according to the same representative. The airline IT outages we identified were caused by a range of IT and infrastructure issues, according to airline representatives we interviewed and official press statements. These issues included hardware failures, software outages or slowdowns, power or telecommunications failures, and network connectivity issues, among others. In several instances, an IT issue in one airline system had cascading effects across other systems not affected by the initial outage. For example, a large volume of online traffic shut down an airline’s website and subsequently disrupted the airline’s reservations and check-in systems. 
Representatives from six airlines, an IT expert, and four other aviation industry stakeholders pointed to a variety of factors that could contribute to an outage or magnify the effect of an IT disruption. These factors included underinvestment in IT systems after years of poor airline profitability; increasing requirements on aging systems or systems not designed to work together; and the introduction of new customer-oriented platforms and services. Representatives from airlines we interviewed also described some of their IT system investments and risk mitigation efforts undertaken in response to an outage or to address potential disruptions, such as investing in new backup systems or technologies. For example, five airlines have sought to reduce vulnerability by expanding IT operations beyond a single data center or moving them to the cloud, which allows for the delivery of computing services through the Internet. Likewise, two airlines described efforts to ensure connectivity and reduce the effects of IT disruptions by using multiple telecommunications network providers. Several airline representatives and an IT expert said that these airline IT investments are aimed at enhancing overall system functionality as well as revenue. However, the IT risk expert we spoke with noted that carrying out major upgrades to their IT systems can be challenging because these systems are always in use. Additionally, according to stakeholders we interviewed, airlines employ a variety of contingency planning and recovery strategies to respond to unforeseen technical issues, including IT outages. For example, one airline described incorporating routine system testing, artificial intelligence, and outage drills into planning for system disruptions to avoid outages or speed recovery. Airline efforts to increase the resiliency of their IT systems, such as those described above, could prevent or lessen the impact of such outages.
BTS Data Broadly Capture Flight Delays and Cancellations

BTS data capture the causes of flight delays and cancellations in several broad categories, which do not isolate flight disruptions resulting from airline IT outages and do not reflect the root cause of flight disruptions. As previously mentioned, BTS collects on-time performance data from the airlines, including the causes of flight delays and cancellations. On a monthly basis, certain airlines are required to report at least one cause of delay (in minutes) for each flight delayed 15 minutes or more from the following five categories: air carrier, extreme weather, NAS, security, and late arriving aircraft. Similarly, for each flight that was canceled, airlines are required to report the cause from one of four categories: air carrier, extreme weather, NAS, and security. BTS guidance instructs airlines to report flight delays that are within the control of the airlines in the air-carrier category. Also included in the air-carrier category, according to the guidance, are more than 40 other potential causes of delays or cancellations, such as aircraft maintenance, baggage, terminal operations, and crew matters. As a result, flight disruptions from IT outages are indistinguishable from other airline-caused issues within this category. Additionally, delays caused by airline IT outages may be captured in a category other than air carrier because of how airlines can report the causes of flight delays based on BTS guidance. For example:

- Multiple causes for a delay. Airlines have the option to report either just the main cause or all the causes for a flight delay as long as the airline consistently applies the same method in its monthly report to BTS. Also, if there is more than one cause for a flight delay that starts at the same time, airlines are required to report the cause that lasted the longest. As a result, delays caused by an airline IT outage may be attributed to other categories if they happen at the same time as other issues affecting an airline's operations, such as poor weather or airport conditions.
- Late arriving aircraft delays. Airlines can report a flight delay in the late arriving aircraft category if the previous flight arrived late and caused the next flight (on the same aircraft) to depart late. Airlines are not required to provide additional information on the cause of the delay for the previous flight (air carrier, NAS, security, or extreme weather). As a result, delays from incidents that can cause ripple effects on an airline's operations, such as an IT outage or severe thunderstorms, may be attributed to the late arriving aircraft category.
- NAS delays. Airlines can report delays in the control of the FAA, airport operators, or state and local officials in the NAS category, which includes ground stops, flight volume delays, and air traffic control issues, among others. However, BTS guidance does not specify how airlines should report delays caused by ground stops requested by the airlines, including after an IT outage. As a result, these delays may be captured in the NAS category.

BTS data are collected to provide general information on the quality of airline performance to consumers and to improve airline scheduling, rather than detailed information about specific flights or events. Consequently, these data provide limited insight into the effects of individual events, including airline IT outages, both because flight disruptions may be captured in more than one category and because the data do not allow for the isolation of effects for affected flights.
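Because flight disruptions from an IT outage are folded into these broad categories, the public BTS on-time data can be tabulated only at the category level, not by root cause. A minimal sketch of that tabulation follows; the sample rows are invented for illustration, and the column names are assumptions based on the per-cause delay-minute fields in BTS's public on-time performance files.

```python
import pandas as pd

# Illustrative sample shaped like the public BTS on-time performance data,
# where each delayed flight carries delay minutes per broad cause category.
# Column names are assumptions based on the BTS data dictionary.
flights = pd.DataFrame({
    "FL_DATE": ["2016-08-08"] * 4,
    "CARRIER_DELAY": [120, 0, 45, 0],
    "WEATHER_DELAY": [0, 30, 0, 0],
    "NAS_DELAY": [0, 0, 15, 60],
    "SECURITY_DELAY": [0, 0, 0, 0],
    "LATE_AIRCRAFT_DELAY": [0, 90, 0, 25],
})

cause_cols = ["CARRIER_DELAY", "WEATHER_DELAY", "NAS_DELAY",
              "SECURITY_DELAY", "LATE_AIRCRAFT_DELAY"]

# Total delay minutes attributed to each broad category, and each
# category's share of all reported delay minutes.
totals = flights[cause_cols].sum()
shares = (totals / totals.sum()).round(2)
print(shares)
```

Note that a flight delayed by an IT outage may contribute minutes to `CARRIER_DELAY`, `NAS_DELAY`, or `LATE_AIRCRAFT_DELAY` depending on how the airline reported it, so a tabulation like this cannot recover the outage as a distinct cause.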
We reviewed BTS data for most of the airline IT outages we identified and found, for example, that for 3 outages, airlines reported the largest total number of flight delays in the NAS causal category on the day that the airline requested a ground stop because of the outage—rather than in the air-carrier category. In addition, we reviewed BTS data for the 5 outages we identified where the airline involved delayed or canceled at least 800 total flights and found that airlines spread the causes of flight delays and cancellations across several categories, primarily air carrier, late arriving aircraft, and NAS for the first day of these outages. For example, we found that airlines attributed 44 percent of all reported flight delays to late arriving aircraft for these days. (See fig. 4). DOT officials did not see a need for additional reporting requirements on flight delays and cancellations caused by airline IT outages given that the effects of such events are not unique when compared to other causes of flight delay and because these incidents involve a small portion of consumer complaints received by DOT. Aviation stakeholders we spoke to told us that airlines track flight disruptions for internal purposes such as managing operations and scheduling. For example, representatives from one airline said that the airline tracks delays and cancellations associated with IT outages and other issues internally to identify patterns and reoccurring issues that need improvement, such as scheduling, staffing, and maintenance. DOT officials noted that obtaining more detailed information on the causes of flight delays and cancellations would require a cost and benefit analysis to determine whether the benefit from collecting the data would exceed the airlines' cost to report the data. Officials also noted that the agency has undertaken efforts to provide additional information to consumers.
Notably, to provide more insight into the underlying causes of delay attributed to late arriving aircraft, BTS began calculating the original causes of delays in the late arriving aircraft category and providing these data on its website in response to a recommendation made by the DOT Inspector General in 2013.

Information on the Effects on Passengers Is Largely Anecdotal and Illustrates Varied Passenger Experiences

No data are publicly available to quantify with any degree of precision the number of passengers affected by airline IT outages, and only one airline provided this type of information to us. Airline contracts of carriage set the minimum accommodations passengers are entitled to when their flights are delayed or canceled, which could include refunds, rebooking, or other amenities, such as food or meals. However, there is no comprehensive information about the accommodations that were actually received by passengers, and available information is largely anecdotal. Even with respect to the same IT outage, different people may be affected differently. For example, passengers may be affected by the complexity of the NAS and their individual circumstances. According to an airline representative we spoke with, an airline may be able to quickly rebook affected passengers on a different airline for one destination, for example, but may have difficulty rebooking passengers for another destination if other flights are full. Further, while network airlines have hub-and-spoke networks that include a number of route options or frequent service between cities, others—particularly point-to-point or low-cost carriers—may have more limited service, further constraining the ability to rebook individual passengers. Finally, passengers travel for different reasons and their tolerance for disruption can differ, as well, according to DOT officials.
Thus, someone flying to visit a friend may have a different tolerance for delay than someone traveling for a job interview, they noted. Airlines are required by DOT to provide refunds for canceled—and significantly delayed—flights if a passenger chooses to cancel his or her trip. Beyond these requirements, however, airlines are not obligated to provide accommodations for flight disruptions such as cancellations and delays unless specified in an airline's contract of carriage, according to DOT. These contracts govern what, if anything, a passenger is entitled to, although airlines may offer additional accommodations to inconvenienced passengers. Generally, accommodations received by inconvenienced passengers could include rebooking on the same airline or alternate travel; refunds or compensation in the form of money or other benefits (e.g., credit for later travel); and amenities such as hotel stays and food, according to their contracts of carriage. Airlines can—and in some cases do—go above and beyond the obligations set forth in their contracts of carriage, as illustrated by some examples below.

Accommodations Included in Airlines' Contracts of Carriage Vary

To better understand the accommodations that passengers may have received as the result of airline IT outages, we reviewed airlines' contracts of carriage for the airlines in our scope with applicable contracts. None of these contracts addressed IT outages directly, but flight disruptions caused by outages would be covered under the broader contract terms addressing cancellations and delays. We found that the contracts vary in terms of what accommodations are provided for, as well as the extent to which airlines have discretion in providing them. For example, while several airline contracts include provisions to provide hotel vouchers, transportation to the hotel, or meals, other airlines—notably several low-cost carriers—do not.
Likewise, some airlines establish set time thresholds for when they are obligated to provide a certain accommodation (e.g., after a delay of at least 4 hours), while others do not. Specific accommodations we identified in our review of airline contracts of carriage are discussed below, and table 1 further details some of the variation that we found.

- Alternate transportation. All nine airlines in our analysis provide for rebooking on their own airline in the event of a flight delay or cancellation such as might be caused by an airline IT outage, although Frontier includes certain airports near a passenger's original destination as acceptable alternatives in its contract of carriage. Under this exception, for example, Frontier could rebook a passenger on a flight to Tampa if he or she had originally planned to travel to Orlando, or vice versa, in the event of a flight disruption. Three of the airline contracts of carriage we reviewed provide for travel on a different airline—or the use of alternate ground transportation—typically at their discretion, and a fourth airline provides for alternate transportation if a passenger's flight has been diverted to a different airport. Airline representatives with two low-cost carriers described their unsuccessful efforts to develop agreements with network airlines to facilitate the rebooking of passengers on another airline.
- Refunds for cancellations. If a flight is canceled and no alternative is available—or if available flights are not acceptable to the passenger—all nine airlines in our analysis provide for refunds, although three airlines may instead reroute passengers to nearby cities. Under their contracts of carriage, airlines typically provide refunds for the unused portion of a ticket in the event of flight disruptions. If, for example, passengers have already completed the outbound portion of a roundtrip ticket, they would receive a partial refund for the unused, return portion, rather than the entire ticket.
Finally, three airlines (Hawaiian, Southwest, and United) offer passengers the option of travel credits in lieu of a refund in their contracts of carriage.

- Refunds for delays. The majority of airlines in our review provide refunds or flight credit for flight delays, although refunds in some cases could be contingent on the absence of an acceptable alternative, such as being rebooked on a subsequent flight or to an alternate airport. As mentioned above, DOT requires airlines to provide refunds for flights that are "significantly delayed" but does not define how long such a delay is and instead relies on a case-by-case determination. Four of the contracts we reviewed establish a specific timeframe for the delay after which a passenger is entitled to a refund, while the others do not establish such a threshold. For example, a passenger flying on Alaska Airlines could request and receive a refund for a flight disruption lasting at least 2 hours, and passengers on Delta are entitled to a refund, if requested, after a 90-minute delay. By contrast, airlines without a defined threshold for a delayed flight have discretion for when passengers would be eligible for refunds, particularly with regard to nonrefundable tickets.
- Hotel stay. The majority of airlines in our review provide for hotel stays in their contracts of carriage (and ground transportation to the hotel), to varying degrees, although two low-cost carriers (Frontier and Southwest) do not. The contracts of carriage for seven airlines include a hotel stay for passengers inconvenienced by flight disruptions, and of these, four stipulate that passengers have to be away from home or from their points of origin or destination; five require that the flight disruption span certain hours (e.g., 10pm to 6am); and one includes credit for a long-distance phone call. Four of the contracts we reviewed include additional provisions for hotel stays (or other accommodations) for passengers with disabilities or other needs.
For example, under its contract of carriage, American will provide amenities to maintain the safety and welfare of certain passengers if they are delayed (e.g., customers with disabilities, unaccompanied children, the elderly, or others with special needs or circumstances).

- Food. Three airlines in our review provide for meals for passengers inconvenienced by flight disruptions in their contracts of carriage. For example, JetBlue's contract of carriage provides for meal vouchers or pizza for flight delays of 6 or more hours. In addition, airlines may deliver meals or offer other amenities to passengers waiting for delayed or canceled flights, even in the absence of the promise of food in the contract of carriage. In these cases, additional accommodations may be publicly announced on airline websites, by social media accounts, or through statements to the press, or they may be provided directly to individual flights or passengers at the airport. For example, in response to severe thunderstorms in 2017, Delta had pizza delivered to passengers waiting in airports across the Southeast.
- Monetary compensation or travel credit. Inconvenienced passengers are not entitled to monetary compensation in the case of a flight delay or cancellation in the United States, and none of the airlines in our review includes such compensation in their contracts of carriage. Nevertheless, two airline contracts of carriage include provisions for travel credit—above and beyond a refund—for flight disruptions. JetBlue's contract of carriage provides for travel credit for canceled or delayed flights with several tiers, depending on the timing of the cancellation or length of the delay. For example, passengers delayed over 6 hours are entitled to $250 credit for future travel on JetBlue. Likewise, Alaska's contract provides for a discount code for future travel (and a letter of apology) for passengers delayed longer than 2 hours.
Although not included in Delta's contract of carriage, the airline provided $200 in travel vouchers to all customers with flight disruptions lasting at least 3 hours for two of the IT outages we identified, according to airline representatives.

Consumer Concerns Stemming from Airline IT Outages

As mentioned above, collecting and analyzing passenger complaints is one way DOT helps ensure that an airline fulfills its obligations included in its contract of carriage and customer service plan, as well as any additional accommodations that may be publicly offered. Our review of passenger complaints filed with DOT stemming from airline IT outages found that they included complaints related to the lack of monetary compensation for delayed or canceled flights and refusals to refund other expenses, such as rental cars or missed hotel or cruise reservations, among other concerns. For example, complaints related to a Southwest outage in 2016 included several related to lack of compensation or other amenities, such as food or hotel stays offered by the airline. As noted above, Southwest's contract of carriage does not provide for compensation, food, or hotel stays in the event of a delay or cancellation. Complaints filed after the Delta outage of 2016 acknowledged receipt of a $200 travel voucher in compensation or a hotel voucher, but pointed to other non-refunded expenses incurred or difficulties in redeeming these vouchers.

The three consumer or passenger advocacy groups with whom we spoke raised several concerns with regard to passengers inconvenienced by airline IT outages. Stakeholders we spoke with responded to these concerns and addressed how airlines respond to IT outages.

- Passengers may not receive the same accommodations. In the absence of requirements for accommodations or compensation, passengers are dependent on whether or not the affected airline chooses to be generous, according to the consumer advocates we interviewed. They also noted that mileage plan or first class passengers may receive more accommodations than others, even when passengers are affected by the same underlying outage, as may be true in other circumstances, as well. Representatives from one airline told us that they attempt to promptly address the needs of all of their passengers but acknowledged that accommodations may vary depending on passenger circumstances, including passenger status (e.g., frequent-flyer program members or VIP travelers).
- Airline obligations toward affected passengers may be confusing. According to consumer advocates we spoke with, even if a passenger understands that an airline's contract of carriage lays out its obligations to passengers affected by an IT outage, these contracts are often lengthy and difficult to understand. As noted above, our review of DOT complaints stemming from airline IT outages found that many passengers expected to receive compensation or other accommodations in response to these events, although such accommodations were not included in contracts of carriage. We reported in 2017 that airlines committed to reviewing their contracts of carriage to see if they could be simplified.
- Contracts of carriage may not clearly exclude IT outages from force majeure events, according to consumer advocates. Flight disruptions caused by extreme weather, terrorism, and other events that are seen as being beyond the control of the airline are typically treated as special situations in airline contracts of carriage, and as a result, inconvenienced passengers may not receive accommodations that they otherwise might. Consumer advocates voiced concerns that airline IT outages might be treated as events outside the airline's control (i.e., Acts of God or force majeure events) given ambiguity in how these exceptions are defined. We found that IT outages were not explicitly included among the force majeure events identified in the contracts of carriage we reviewed.
In interviews and written statements, representatives with four of the airlines in our review varied in the extent to which they characterized airline IT outages as incidents in the control of the airline, but generally indicated that passengers would be accommodated as if the outages were.

Agency Comments

We provided the Department of Transportation (DOT) with a draft of this report for review and comment. DOT responded by email and provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or KrauseH@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

Our objectives for this report were to identify (1) the Department of Transportation's (DOT) and Federal Aviation Administration's (FAA) roles, if any, in relation to airline IT outages and their effects and (2) what is known about these outages, including the number of flights and passengers affected. The scope of this report focuses on those airline IT systems that affect passenger experiences, including systems related to reservations and check-in, as well as those used by airlines for flight planning and dispatch. Our scope excluded IT systems involved in avionics (such as aircraft navigation systems); in-flight operations (such as passenger WiFi networks); and internal operations (such as company email systems).
Our analysis included the 12 airlines that were required to report on-time performance information to DOT’s Bureau of Transportation Statistics (BTS) from 2015 through 2017, including network carriers (Alaska, American, Delta, and United); low-cost carriers (Frontier, JetBlue, Spirit, Southwest, and Virgin America); regional carriers that provide service for partner airlines (ExpressJet and SkyWest); and Hawaiian, which provides a niche service. Given the role of third-party IT providers, we also included Amadeus and Sabre in our scope. To identify relevant DOT and FAA authorities and responsibilities vis-à-vis airline IT outages in several areas, including operations, oversight, and data-collection, we reviewed relevant laws, regulations, policies, and guidance, as well as prior GAO work addressing agency roles. We interviewed DOT officials with BTS, which collects data on airline on-time performance, and the Office of the Assistant General Counsel for Aviation Enforcement and Proceedings and its Aviation Consumer Protection Division, which oversee consumer protections and receive consumer complaints. We also interviewed FAA officials with the Office of the Chief Information Security Officer, which advises the agency on matters relating to IT management and security. Within FAA’s Air Traffic Organization, we interviewed officials with Systems Operations Services, which administers traffic management initiatives including ground stops, and its National Airspace System (NAS) Operations and Office of Performance Analysis. These two offices are responsible for programs related to air traffic control systems and assessing the performance of the NAS, respectively. Through our review of relevant plans and an interview with officials in DOT’s Office of the Secretary, we determined that airline IT systems are not included in federal plans for critical infrastructure protection. 
According to DOT officials, outages in these systems do not have the potential to reach established thresholds for potential casualties or damages. By contrast, air traffic control systems and airports are included in sector-specific plans addressing critical infrastructure protection in the case of a terrorist attack or other natural or manmade disaster. To determine what is known about airline IT outages, we reviewed DOT data sources, including BTS and FAA performance and operations data, as well as passenger complaints received by DOT in response to airline IT outages from 2015 through August 2018. We also conducted interviews with or received written responses from 11 (of 12) airlines in our scope, and interviewed other stakeholders, including third-party IT system providers Amadeus and Sabre; an IT risk expert (Robert Charette); industry associations, including Airlines for America (A4A), the Regional Airline Association (RAA), and Airports Council International (ACI); and employee union representatives with the Air Line Pilots Association (ALPA). We determined that DOT and FAA data were not designed, and could not be used, to comprehensively identify airline IT outages. To identify airline IT outages in the absence of detailed DOT or FAA data, academic literature, or internal (proprietary) airline data on these incidents, we validated a preliminary list of such outages developed using open source material that included media coverage and publicly available airline sources for outages from 2015 through 2017.
Specifically, we searched GAO subscription databases (e.g., ProQuest, Nexis, and EBSCO) to create a preliminary list of 37 airline IT outages from media coverage; performed additional searches of articles and official airline websites to collect more information on and corroborate incidents identified; provided our list of identified IT outages to the 12 airlines in our scope and two third-party IT providers (Amadeus and Sabre) for confirmation; and corroborated 20 of the identified IT outages with entries in FAA’s National Traffic Management Log (NTML) and DOT’s consumer complaint data. Through this process, we were able to corroborate 34 airline IT outages from 2015 through 2017, and we are confident that our list of outages includes all of the outages large enough to garner national-level, multi-day media coverage and an official response from an airline executive. While accurate, our list is not comprehensive because three airlines and a third-party IT provider identified additional outages that we did not find in our preliminary search, including one airline that shared information on more than 20 additional outages. We did not include these additional outages in our count to ensure that our methodology was consistent. To account for outages that may have occurred subsequent to our review, we identified an online listing of airline IT outages and validated 9 of the 11 outages listed for 2018 through January 2019 using publicly available airline or airport information or coverage in at least three media sources. This list and our validation process provide evidence that airline IT outages continued to occur during this timeframe, but do not match the rigor we applied to identifying outages from 2015 through 2017. As a result, we are not confident that this list identified all of the outages large enough to garner national-level, multi-day media coverage and an official response from an airline executive.
Once we had identified airline IT outages through other sources and could look at data for specific dates, we were able to use DOT and FAA data to provide additional insight into flight disruptions (i.e., flight delays or cancellations) and ground stops caused by outages. For example, we requested that FAA conduct analysis on 3 of the 34 outages we had identified to determine what FAA operational data could reveal about the effects of these outages. We selected these 3 outages to reflect a range of flight disruptions for comparative analysis, including variations in size and cause of the outage. We also assessed the extent to which the effects on passengers could be seen in the BTS on-time performance data reported by airlines. For these data, we sought to determine the cause and magnitude of delays and cancellations for each outage. We also reviewed NTML log entries for the dates of known outages to further identify potential information, including incidents of ground stops. Finally, to obtain more information about the potential effects on passengers resulting from these events, we reviewed consumer complaints to DOT stemming from airline IT outages. These complaints were provided to us by DOT’s Aviation Consumer Protection Division and include reference to the associated outage. To understand how airlines accommodate inconvenienced passengers, we reviewed airline contracts of carriage for 9 of the 12 airlines in our scope. These contracts are the legally binding contracts between carriers and passengers and may include specific provisions such as refund procedures and responsibility for delayed flights, among other things. We excluded two regional airlines (ExpressJet and SkyWest) that operate under the contracts of carriage of their mainline partners and Virgin America, which merged with Alaska in 2018 and no longer has a separate contract of carriage.
In addition to the stakeholders mentioned above, we also interviewed consumer or passenger advocacy groups, including representatives with the Consumers Union, the National Consumers League, and Travelers United to identify any concerns regarding consumers affected by airline IT outages. We conducted this performance audit from February 2018 to June 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Heather Krause, (202) 512-2834 or KrauseH@gao.gov. Staff Acknowledgments In addition to the individual named above, other key contributors to this report were Jonathan Carver, Assistant Director; Molly Laster, Analyst-in-Charge; Neha Bhatt; David Hooper; Rich Hung; Delwen Jones; SaraAnn Moessbauer; Emily Mussey; Josh Ormond; Corinne Quinones; Pamela Snedden; James Sweetman, Jr.; and Elizabeth Wood.
Why GAO Did This Study In recent years, the airline industry experienced several well-publicized IT system outages to reservation, check-in, flight planning, and other systems. Such outages can result in widespread disruption to air travel, inconveniencing passengers, who may be delayed or face out-of-pocket costs, and can also affect airlines' revenue and operations. Airlines are responsible for operating and maintaining their IT systems. GAO was asked to review airline IT outages. GAO examined: (1) DOT's and FAA's roles related to airline IT outages and (2) what is known about these outages and their effects on passengers. GAO identified relevant federal laws and responsibilities and interviewed DOT and FAA officials. In the absence of DOT and FAA data to identify airline IT outages, GAO identified outages using open source documents for the 12 airlines reporting to BTS from 2015 through 2017 and validated these outages using a multi-step process with publicly available airline information, interviews with airline representatives, and FAA and DOT data. GAO also reviewed airlines' contracts of carriage, which are legally binding contracts between airlines and passengers, to understand how airlines accommodate passengers inconvenienced by IT outages, as well as 140 consumer complaints related to airline IT outages received by DOT from 2015 through June 2018. What GAO Found The Department of Transportation (DOT) and, within it, the Federal Aviation Administration (FAA) have limited roles overseeing or addressing the effects of outages from information technology (IT) systems that airlines rely on to schedule and transport passengers (e.g., reservation or flight planning systems). FAA's operations and oversight. At an airline's request, FAA may halt the operation of all or part of that airline's flights during an outage and work with the airline to reintegrate flights upon recovery. 
FAA does not directly oversee airline IT systems but works with airlines to ensure that airline data interfaces correctly with FAA's operational systems. DOT's consumer protection. Airline IT outages are not specifically addressed in DOT's consumer protections for passengers, although other protections may apply, such as restrictions on tarmac delays if a passenger is held on a flight during an outage. DOT oversees airlines' adherence to their contracts with passengers. These may include specific provisions such as refund procedures and responsibility for delayed flights, among other things. DOT also receives consumer complaints and uses complaint data to initiate investigations that may result in fines or enforcement actions. DOT's data collection. DOT requires large airlines to report information about on-time performance to the Bureau of Transportation Statistics (BTS), including the causes of flight delays and cancellations in several broad categories (e.g., airline caused, weather, and late-arriving aircraft). Using multiple sources, GAO identified 34 IT outages from 2015 through 2017, affecting 11 of 12 selected airlines. No government data were available to identify IT outages or determine how many flights or passengers were affected by such outages. BTS data provide information to consumers about airline performance broadly but are not designed to identify the effects of individual events, such as the number of flight delays and cancellations resulting from IT outages. According to GAO's validation of multiple sources, however, about 85 percent of the identified outages resulted in some flight delays or cancellations. Because of limited data, information about how passengers have been inconvenienced from outages is largely anecdotal (see figure for examples of inconveniences). Further, airlines vary in what they provide to these passengers (e.g., food, hotel, or rebooking on another airline) when IT outages occur. 
Consumer complaints stemming from IT outages accounted for less than one percent of all complaints received by DOT from 2015 through June 2018, and according to agency officials, these complaints raised concerns similar to complaints resulting from other causes of flight disruption. Complaints reviewed by GAO included the lack of food, a hotel, or compensation, among other things.
Background Coast Guard Organizational Changes since 9/11 Since 9/11, the Coast Guard has made a series of organizational changes to realign its functions. First, from 2004 through 2006, under an effort known as “Sectorization,” the Coast Guard revised its field structure by consolidating field activities under individual commands, known as sectors. The Coast Guard’s 37 sectors report to nine districts, and each district reports to one of two area commands. District commanders are responsible for regional operations and execute operations and missions within their area of responsibility. Sector commanders are responsible for local operations within each district. Each of the Coast Guard area commands, districts, and sectors is responsible for managing its assets and accomplishing missions within its geographic area of responsibility and, for the purposes of this report, are referred to as field units. Figure 1 shows the Coast Guard’s field structure. In June 2006, the Coast Guard implemented another organizational change effort known as modernization. The goal of modernization was to realign its mission planning and mission support functions, among other things. According to Coast Guard documents, the effort was intended to address challenges the Coast Guard faced in aligning its operations with Coast Guard-wide priorities, and delivering mission support in a more cost effective manner. It was also intended to realign the Coast Guard’s operations and policies across multiple headquarters program offices. For example, the Coast Guard has six operational mission programs overseeing its statutory missions, and before modernization the leadership of each of them developed separate action plans and policies to execute their missions, while independently making resource decisions. 
Through modernization, the Coast Guard also sought to improve delivery of mission support services throughout the field, particularly with respect to maintenance of the Coast Guard’s assets, including its vessels, aircraft, and shore infrastructure. Coast Guard Actions to Determine Workforce Requirements The Coast Guard uses three analytical tools to determine its workforce requirements: manpower requirements determinations, the Sector Staffing Model, and the Activity-Based Staffing Model for boat stations. Manpower requirements determinations, which begin with a manpower requirements analysis (MRA), are the Coast Guard’s preferred tool for determining the number of personnel and mix of skills its units require to meet mission needs, according to Coast Guard documents. The analysis identifies both the number of personnel required and their necessary competencies, while also taking into account the effect of existing, new, or modified requirements on the Coast Guard’s workforce. The Coast Guard considers the manpower requirements determination process to be its preferred method to determine workforce requirements for its assets and field units. The Coast Guard’s other two analytical tools—the Sector Staffing Model and Activity-Based Staffing Model—use historic levels of activity to determine workforce requirements. The Sector Staffing Model assesses workforce requirements for shore force units, while the Activity-Based Staffing Model assesses boat stations. For comparison, while activity models may identify the workforce needed based on the activities previously conducted by a unit, determinations identify the workforce needed to conduct the activities required by a unit to accomplish its planned mission, based on documented requirements. For this reason, the Coast Guard considers activity models to be less reliable for determining workforce needs than manpower requirements determinations.
Table 1 summarizes these three Coast Guard analytical tools for determining workforce requirements. Coast Guard Realigned Operations and Mission Support Functions, but Did Not Consistently Apply Key Reform Practices to Modernization Effort The Coast Guard Realigned Operations and Mission Support Functions In a 2018 report to Congress, the Coast Guard stated that under the modernization effort, it realigned its operations and mission support functions to address deficiencies that affected its ability to fulfill missions. Between 2009 and 2015, the effort focused on establishing headquarters organizations and business processes to manage operations and mission support. Central to the effort was the Coast Guard’s establishment of three new headquarters organizations. Deputy Commandant for Operations. Created to manage operational strategy and policy. The Deputy Commandant for Operations is responsible for the strategic management of the Coast Guard’s mission programs. This includes assessing and monitoring the performance of the Coast Guard’s missions and developing Coast Guard-wide strategy and operational policy. The Deputy Commandant for Operations also provides support for issues that affect multiple Coast Guard missions, such as managing intelligence activities, coordinating interaction with external stakeholders, and identifying new and emerging issues that threaten operations, such as cyberattacks. According to the Coast Guard’s 2018 report to Congress, consolidating these functions under a single organization has enhanced operational effectiveness and efficiency and aligned national priorities with Coast Guard-wide planning efforts. In 2019, the Coast Guard placed its reserve component under the Deputy Commandant for Operations to better incorporate the Coast Guard’s reserves into its plans for meeting mission needs. Deputy Commandant for Mission Support. Created to manage mission support delivery and business processes. 
The Deputy Commandant for Mission Support is responsible for managing mission support policy, strategy, planning, and resourcing to meet mission needs for human resources, engineering and logistics, information systems, and acquisitions. At the field level, through the Director of Operational Logistics, this organization assists with maintenance of assets and logistics planning through a network of bases. The Director of Operational Logistics manages Coast Guard bases, which deliver operations-level support to specific assets, and oversees the functions of each Coast Guard base. In addition, the Deputy Commandant for Mission Support organization manages Coast Guard Logistic and Service Centers. Each logistic or service center exercises authority over its functions and the delivery of mission support to the Coast Guard’s fleet of aircraft and vessels. For example, the Aviation Logistics Center, located in Elizabeth City, North Carolina, is the lead entity for ensuring aviation asset services, such as maintenance and supply, for the Coast Guard’s aircraft, while the Surface Forces Logistics Center, in Baltimore, Maryland, is responsible for ensuring these services for its vessels. Coast Guard officials told us that the modernized mission support structure enabled the Coast Guard to standardize delivery of products and services. For example, they told us that this structure helped them ensure that the materials and parts provided remained consistent across the Coast Guard’s field units. Force Readiness Command (FORCECOM). Created as an organization within the Deputy Commandant for Mission Support to prepare the Coast Guard workforce to properly perform and execute missions. FORCECOM is responsible for overseeing the Coast Guard’s training plans and policies.
This includes developing and delivering training courses, and conducting performance and compliance assessments of units, to determine whether each mission has the necessary equipment and personnel skills to ensure operational readiness. Figure 2 provides an overview of the Coast Guard’s modernized organizational structure and the responsibilities of the headquarters organizations known as the Deputy Commandant for Operations and Deputy Commandant for Mission Support. Coast Guard Continues to Make Organizational Changes In 2018, the Coast Guard reported to Congress that while it completed its primary organizational changes, it continued to modernize its business processes. For example, it reported that it continued making improvements to its risk management process, organizational structure, and mission support functions, including human resources utilization and asset acquisition. In October 2019, Coast Guard officials told us that some of these adjustments continue in smaller, incremental efforts within the Deputy Commandant offices and individual Coast Guard programs. For example, Coast Guard officials from the Office of Mission Support Integration within the Deputy Commandant for Mission Support told us that efforts to modernize its mission support functions were ongoing. Officials told us that they were centralizing management of certain support delivery functions. Officials told us that centralization would help to ensure consistency in how functions are performed across the organization, as well as provide access to timely and complete information about the status of assets, personnel, and equipment. They told us that the Deputy Commandant for Mission Support had largely centralized such functions for one directorate—engineering and logistics—and expected to apply them for another directorate responsible for information systems in fiscal year 2020. 
According to officials, the Coast Guard has faced difficulty applying these same mission business practices to human resources since these practices focus on a specific capability and are geared more towards assets, such as vessels and aircraft, rather than personnel. Specifically, while information about the status of asset availability is generally static, there are more variables to determining the Coast Guard’s human resources needs. For example, in addition to identifying the size of the workforce necessary to perform missions, the Coast Guard must also consider how to retain personnel and develop a workforce that can adapt to changes such as addressing emerging threats like cyberattacks. Changes stemming from modernization continued with the Coast Guard’s reorganization of its reserve component (see sidebar). In 1995, the Coast Guard integrated the reserve and active duty workforce at the field level; however, the component did not have headquarters representation. In 2006, the Coast Guard issued a modernization goal to optimize the use of the reserve component by ensuring the workforce had the necessary training and support. In 2018, the Coast Guard chartered a project team to evaluate the state of the reserve component’s governance. The team found that the structure under the Deputy Commandant for Mission Support did not take into account the difference between the reserve workforce and Coast Guard programs. In 2019, the Coast Guard established a new reserve component organization under the Deputy Commandant for Operations. In June 2019, the Coast Guard moved its reserve component from the Deputy Commandant for Mission Support to the Deputy Commandant for Operations. Coast Guard officials stated that the change was meant to address longstanding issues, such as not incorporating the reserve component into Coast Guard-wide policymaking.
Coast Guard officials stated that when the reserve component was under the mission support organization, it was not strategically managed to align with Coast Guard- wide mission needs. For example, when reserve components were dispatched, there was no plan to support all of the operational needs of the mission, such as by providing additional equipment needed by the reserve workforce. Figure 3 provides a timeline of key actions the Coast Guard took from 2004 through 2019 to modernize its organizational structure. Coast Guard Has Not Consistently Applied Selected Key Reform Practices to Modernization Effort The Coast Guard has not consistently applied selected key reform practices to its modernization effort. Specifically, the Coast Guard did not apply or partially applied 5 of 7 selected key practices. We have previously reported that an agency must closely and carefully manage organizational reforms, since fully implementing major transformations can span several years. This is particularly important when the transformations include several major changes to the organization. The Coast Guard’s 2018 report to Congress on its modernization effort acknowledged that the risk of complications increases significantly with large-scale reorganization efforts, such as modernization, and noted that such changes require formal processes to look for complications as they arise and to fully assess their impact on the organization, including its workforce. To this end, we assessed the Coast Guard’s implementation of its modernization effort against selected key reform practices in three subcategories—Leadership focus and attention; Managing and monitoring; and Strategic workforce planning—and found the Coast Guard did not consistently apply these practices. 
Additionally, we assessed the extent to which the Coast Guard’s reorganization of its reserve component applied key reform practices under the Leadership focus and attention, Managing and monitoring, and Strategic workforce planning subcategories. Figure 4 shows our assessment of the extent to which the Coast Guard’s actions to implement the modernization effort applied selected key reform practices. Leadership Focus and Attention We found that the Coast Guard generally applied two key practices under this subcategory, including identifying a case for change and dedicating a team to lead the initial implementation effort, and it partially applied the key practice of holding leadership accountable for its success. Identify case for change. The Coast Guard generally applied this key practice because it identified a case for change to continue to drive the need for the modernization effort. Our prior work shows that key elements of successful initiatives are the demonstrated commitment of top leaders and accountability for change. Further, top leadership involvement and clear lines of accountability for making improvements are critical to overcoming organizations’ natural resistance to change. According to Coast Guard documents, in 2006, when the modernization effort started, Coast Guard leadership promoted the changes outlined in the Coast Guard’s 10 modernization initiatives through internal memos and action plans. Coast Guard documentation highlighted the benefits of the change and identified the next steps to be taken in order to complete the change. Additionally, commandants issued their strategic priorities highlighting plans for the modernization effort. More recently, the Coast Guard’s 2018 report to Congress reiterated the importance of the modernization effort, noting that the challenges that initially drove the need for organizational changes continue to challenge the Coast Guard. Dedicated implementation team.
The Coast Guard generally applied this key practice because it established a team to implement its modernization changes. In 2007, the Coast Guard created the Strategic Transformation Team to coordinate the early implementation of the modernization effort. According to Coast Guard officials from the Office of Resource, Organizational Analysis, and Workforce Management, as the effort moved from the planning stages to implementation, the team consolidated the goals in the Coast Guard’s 10 modernization initiatives into five main reorganization efforts. The team was responsible for ensuring that the implementation of these five efforts was consistent with the initial goals of modernization. This included facilitating the use of the Coast Guard’s existing organizational review and approval processes for organizational changes and leading the measurement processes for ensuring that the goals of modernization were met. Hold leaders accountable. The Coast Guard partially applied this key practice because it initially established an office to oversee its modernization but did not continue these efforts to ensure leadership accountability for modernization implementation. In 2009, the Coast Guard created a permanent oversight office under the Office of the Vice Commandant to transition the coordination responsibilities of the Strategic Transformation Team to monitor implementation of the modernization effort. The office was given an expanded role of managing change efforts across the Coast Guard, including overseeing the development of metrics related to organizational change efforts to ensure that these changes achieved goals. However, in 2015 the Coast Guard disestablished this oversight office and did not specify any office responsible for ensuring organizational change efforts met intended goals.
According to Coast Guard officials from the Office of Resource, Organizational Analysis, and Workforce Management, the Coast Guard redistributed some of the oversight office’s responsibilities among other offices within the established headquarters organizations. The officials told us they did so since they determined the initial goals of modernization—to create the new headquarters organizations—had been met and oversight was no longer needed. These officials stated that the individual headquarters organizations could manage any necessary planning moving forward for their specific organization. As such, the Coast Guard’s shifting leadership priorities affected what parts of the modernization effort were implemented and, in some cases, resulted in years spent working towards a change that was later terminated. For example, in 2012, the Commandant stated that the original modernization initiative to establish a single operations command to manage field operations was not near completion, taking up institutional energy, and impacting operations. As a result, he decided to discontinue the effort and retain the two area field command structure. However, according to Coast Guard officials from the Office of Resource, Organizational Analysis, and Workforce Management, planning for the effort was close to completion, and ending it led to the reassignment of staff. During this time, the Coast Guard also reduced FORCECOM’s role from managing and measuring the overall readiness capabilities of the service to focusing on workforce training, and moved the organization under the Deputy Commandant for Mission Support. At that point, the Coast Guard had already prepared and issued a business plan for FORCECOM outlining the initial primary mission, goals and metrics for evaluating effectiveness. 
We also assessed the Coast Guard’s application of key reform practices against its reorganization of the reserve component and found that, similar to our determinations for the modernization effort, it partially applied key practices under Leadership focus and attention. For example, while the Coast Guard identified key leadership and stakeholders currently responsible for implementing the effort, it could not demonstrate that there is a process to ensure leaders are held accountable for this implementation. Managing and Monitoring The Coast Guard did not apply the two key practices of tracking implementation progress and collecting data to measure progress of the effort, and partially applied the other key practice of measuring employee satisfaction with the modernization effort. We have previously found that organizational transformations must be carefully and closely managed in order to monitor progress towards achieving intended goals, since fully implementing major transformations can span several years. This is particularly important for the modernization effort, which the Coast Guard reported in 2018 had fundamentally altered how it conducts business across the organization, for every mission and at every level. Managing and monitoring organizational reforms includes applying key practices such as tracking and measuring progress and developing mechanisms to seek and monitor employee satisfaction with changes resulting from reforms. Track implementation progress. The Coast Guard did not apply this key practice because it did not track its progress in implementing the modernization effort on an ongoing basis. Officials told us that during the early stages of modernization, the Coast Guard developed implementation plans and engaged in a significant planning effort to finalize the organizational realignment.
These plans provided a method to track the Coast Guard’s progress as they implemented each phase of modernization; however, as the effort matured, the Coast Guard determined that the effort did not require the same amount of planning as initial implementation. In 2009, during the early stages of modernization, the Coast Guard reported that it had efforts planned or underway to monitor the implementation progress of the modernization effort, including developing implementation plans, goals, and performance metrics. As the modernization effort matured and the Deputy Commandant for Mission Support and Deputy Commandant for Operations were created, Coast Guard officials determined that they did not need the same amount of planning, and the Coast Guard stopped updating its implementation plans. Additionally, for the reorganization of the reserve component, the Coast Guard has minimally applied practices under the Managing and monitoring category. In particular, the Coast Guard did not track implementation progress of the reorganization. For example, the Coast Guard established the new reserve component without finalized plans or milestones and metrics against which it could track implementation progress. Collect data to measure progress. The Coast Guard did not apply this key practice because it did not collect data to measure the extent to which the modernization effort achieved its goals. In 2009, the Coast Guard reported that it had plans underway to identify existing metrics and gather data that would enable evaluation of the performance and effectiveness of its modernized processes and facilitate continued improvements. This was to include indicators that could be applied across the modernization efforts’ multiple goals and priorities such as quality, timeliness, cost, and outcomes. At the time, the Coast Guard reported that this would take approximately 6 months to 1 year to complete. 
However, according to officials from the Office of Resource, Organizational Analysis, and Workforce Management, these plans were discontinued due to the disestablishment of the oversight office and changing leadership priorities. Further, they stated that the Coast Guard no longer felt the need to monitor the effort because it determined the initial goals had been achieved with the establishment of the new headquarters organizations. In 2018, the Coast Guard reported to Congress that changes to mission support systems and business processes were significant and demonstrated the success of the modernization effort in developing a more effective and efficient organization. However, while officials from multiple offices told us that these changes resulted in better data and greater efficiency, the Coast Guard could not identify metrics or a data collection system that could demonstrate that its implementation of the modernization effort had improved effectiveness or efficiency. Moreover, in our review of the Coast Guard’s organizational change process, we found no metrics, time frames, or milestones to track whether, and to what extent, its organizational changes were achieving the goals of the effort. Similarly, for the reorganization of the reserve component, the Coast Guard did not collect data to measure progress. For example, the Coast Guard established no milestones or metrics against which to measure the reserve component’s progress in achieving its intended goal of improved mission performance. Measure employee satisfaction. The Coast Guard partially applied this key practice because it sought employee feedback during the early stages of the modernization effort, but did not continue to measure employee satisfaction with the effort. During the initial implementation of modernization, the Coast Guard used a combination of informal and formal mechanisms to seek employee feedback.
For example, according to a 2009 National Academy of Public Administration report, the Commandant reached out to personnel through informal means, such as social media, to communicate and obtain real-time feedback from staff affected by the organizational changes. Formally, the Coast Guard obtained anecdotal information through staff surveys, including the Organizational Assessment Survey and the Office of Personnel Management’s Federal Employee Viewpoint Survey; however, these instruments do not include specific questions related to the impact of organizational change efforts. Beyond efforts during the early stages of modernization, there has been no sustained Coast Guard-wide effort to monitor the impact of the change on employees. According to a senior Coast Guard official from the Office of Resource, Organizational Analysis, and Workforce Management, the Coast Guard is not required to conduct such assessments as changes are implemented. Specifically, the document governing the Coast Guard’s organizational change process does not specify measuring employee satisfaction as part of the organizational change request process. Additionally, though the Coast Guard currently has formal mechanisms in place that would enable it to gauge employee satisfaction, our review of recent surveys found that these instruments do not include questions specific to the impact of organizational change efforts, nor do they capture employee perspectives in a timely manner.

Strategic Workforce Planning

We found that the Coast Guard partially applied the key practice of assessing the effects of modernization on its workforce: it engaged in some activities that assess the effort’s impact on its current and future workforce and in some planning to determine whether needed resources and capacity were in place.
We have previously reported that people are at the heart of any serious reform effort because people define the organization’s culture, drive its performance, and embody its knowledge base. This is echoed in the Coast Guard’s large-scale enterprise-wide change management guidance, which stresses the need for a formal, structured approach to manage the people side of change to increase the likelihood of success. One of the goals of the modernization effort was to create a Coast Guard-wide human resources strategy to better support mission execution. The Commandant reiterated this commitment in September 2018 testimony to Congress by stating that the Coast Guard’s strategic plan would incorporate its 2016 Human Capital Strategy, a 10-year plan to ensure that the Coast Guard develops the workforce necessary to meet mission demands. In addition, the Coast Guard has taken steps to build a Force Planning Construct model to inform leadership on the forces and capabilities needed to execute its steady state and contingency operations. In its April 2018 Manpower Requirements Plan to Congress, the Coast Guard stated that it envisioned using the model to assess future workforce needs. According to developers of the model, the foundation of the tool was the completion of manpower requirements determinations for all 158 Coast Guard unit types. However, the Coast Guard has completed such determinations for a small fraction of its workforce, as we discuss later in this report. Finally, for the reorganization of the reserve component, we found that the Coast Guard had minimally applied the key practice under Strategic workforce planning. In particular, officials from the new reserve component told us that even though the reserve force is not covered by existing workforce planning tools, the Coast Guard continued to proceed with reorganizing the reserve force structure.
For each of the key reform practices that was not fully applied, we found that the Coast Guard’s organizational change request process and associated guidance documents did not require the practice to be followed; specifically, they did not require tracking implementation of changes, collecting data to measure progress, or assessing employee satisfaction. By not fully implementing each of these key practices, the Coast Guard may miss opportunities to demonstrate that its investment in the modernization effort meets its ultimate goals of enhancing efficiency and effectiveness and improving the overall performance of the Coast Guard. Systematically tracking the progress of organizational change efforts and measuring their effects, including employee satisfaction, would better position the Coast Guard to identify challenges, if any, to meeting the goals of the organizational change in a timely manner. Further, the Coast Guard noted that metrics used to show the effect on its efficiency, mission effectiveness, and operations may be used to measure and influence future modernization efforts.

Coast Guard Has Assessed a Small Portion of its Workforce Needs and Does Not Have the Information Needed to Achieve its Manpower Assessment Goal

The Coast Guard’s manpower requirements determination process is its preferred method for determining workforce needs because it identifies the workforce needed to conduct required mission activities; however, since it began implementing the process in 2003, the Coast Guard has completed it for only 6 percent of its workforce. Further, for the positions it has assessed using the manpower requirements determination process, the Coast Guard has not consistently followed its own guidance. For example, while required by Coast Guard guidance, the Coast Guard has not tracked the number of MRAs and manpower requirements determinations completed.
In its April 2018 Manpower Requirements Plan to Congress, the Coast Guard set a goal of using the manpower requirements determination process to identify staffing needs for all positions in all units, but it does not have information on the resources it would need to do so.

Coast Guard Has Assessed Workforce Requirements for a Small Portion of its Workforce

The Coast Guard has completed workforce assessments for a small portion of its 58,000 personnel across its 158 unit types. From calendar years 2014 through 2019, the Coast Guard used its three analytical tools—manpower requirements determinations, the Sector Staffing Model, and the Activity-Based Staffing Model—to complete workforce assessments for approximately 21 percent of its 58,000-position workforce. According to Coast Guard guidance, manpower requirements determinations are to be updated every 5 years. However, the Coast Guard completed the manpower requirements determination process, its primary workforce analysis tool, for only about 2 percent of positions during this 5-year span. In 2019, the Coast Guard used the Sector Staffing Model to assess workforce requirements for about 9 percent of positions. Finally, in 2019 the Coast Guard used the Activity-Based Staffing Model for boat stations to assess workforce requirements for about 9 percent of positions, according to officials.

Coast Guard Has Generally Not Implemented the Manpower Requirements Determination Process

According to its 2016 Human Capital Strategy, the manpower requirements determination process is the Coast Guard’s primary tool for defining the human capital its units require to meet mission needs. To this end, the Coast Guard’s goal is to use this process to establish manpower requirements for all positions in all units. Coast Guard guidance for implementing the manpower requirements determination process includes three key steps, as noted in the service’s 2015 Staffing Logic and Manpower Requirements Manual. MRA.
The manpower requirements determination process begins with programs or Coast Guard leadership, such as the Commandant or Vice Commandant, requesting an MRA, which is a comprehensive review of workforce needs as determined from a wide range of factors. These factors include regulations, training, and competencies needed to effectively perform each mission. The MRA assesses the information necessary to adjust personnel, resources, mission, or risk, depending on the availability of resources. Officials from the manpower requirements determination program, contractors, or, in some cases, other Coast Guard programs may conduct MRAs. After Action Report. MRA requesters are to submit an after action report within 6 months after the MRA is completed. The after action report is to outline actions to be taken based on an MRA. These actions could include adding resources, adjusting requirements, or assuming additional risk. Manpower Requirements Determination. The process is to conclude with a manpower requirements determination. The determination identifies the number and type of positions a unit type requires to meet mission-based capability requirements. In developing the determination, stakeholders are to review the MRA results and document any changes from the initial MRA. These stakeholders typically include representatives from the program assessed in the MRA and experts from around the Coast Guard in areas such as personnel assignments, workforce forecasting, training availability and capacity, and resource oversight, among others. The manpower requirements determination program then submits the determination to be signed by the Assistant Commandant for Human Resources. This signed memorandum, known as the determination, formalizes the final manpower requirement. Figure 5 summarizes the Coast Guard’s manpower requirements determination process, according to Coast Guard guidance.
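The three-step sequence described above (MRA, after action report, determination) can be sketched as a simple completion check. This is purely an illustrative model; the record fields and unit name are hypothetical and do not represent an actual Coast Guard system:

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-step process:
# MRA -> after action report (within 6 months) -> signed determination.
@dataclass
class UnitTypeRecord:
    unit_type: str
    mra_completed: bool = False
    after_action_report: bool = False
    determination_signed: bool = False

    def missing_steps(self) -> list:
        """Return which of the three key steps remain incomplete."""
        steps = [
            ("MRA", self.mra_completed),
            ("after action report", self.after_action_report),
            ("manpower requirements determination", self.determination_signed),
        ]
        return [name for name, done in steps if not done]

# Example: an MRA was conducted but the process was never concluded,
# mirroring the pattern this report describes.
record = UnitTypeRecord("example unit type", mra_completed=True)
print(record.missing_steps())
# -> ['after action report', 'manpower requirements determination']
```

A check like this is only as useful as the underlying list of records, which is the tracking gap the report discusses next.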
We found that the Coast Guard has not ensured that all three key steps of the manpower requirements determination process are completed since it began implementing the process in 2003. Since 2003, the Coast Guard has conducted MRAs for 28 percent of its workforce but has completed manpower requirements determinations for only 6 percent. Moreover, we found that this trend continued with MRAs that the Coast Guard completed within the past 5 years. For example, according to our analysis of Coast Guard documentation, from calendar years 2014 through 2019, the Coast Guard conducted MRAs for 13 percent of its workforce but completed determinations for 2 percent. Further, Coast Guard officials reported they did not have documentation of having conducted after action reports for any MRAs. Figure 6 shows the share of the Coast Guard’s workforce that is supported by the manpower requirements determination process. The top row shows the share of the workforce supported by this process since its inception in 2003. The bottom row shows the share supported by up-to-date MRAs and determinations—those completed between 2014 and 2019—in accordance with guidance. The Coast Guard’s 2018 Manpower Requirements Plan to Congress states that the Coast Guard’s goal is to have updated manpower requirements determinations for all authorized positions in all units. When it reaches that goal, the manpower requirements determination process will allow the Coast Guard to know which units are the most understaffed and to make service-wide decisions based on where the most urgent needs are. Only when determinations have been completed for its entire workforce can Coast Guard leadership allocate personnel in the most effective and efficient manner. Notably, Coast Guard documents emphasize the importance of an enterprise-wide approach to tracking and managing resources because it enables leadership to compare needs and make informed trade-offs across programs.
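To put the percentages above in perspective, a quick back-of-the-envelope calculation converts them into approximate position counts. The shares are the rounded figures cited in this report, so the results are estimates only:

```python
# Approximate position counts implied by the report's rounded percentages,
# against the 58,000-position workforce.
workforce = 58_000

coverage = {
    "MRAs since 2003": 0.28,
    "determinations since 2003": 0.06,
    "MRAs, 2014-2019": 0.13,
    "determinations, 2014-2019": 0.02,
}

for label, share in coverage.items():
    positions = round(workforce * share)
    print(f"{label}: ~{positions:,} positions")
# -> MRAs since 2003: ~16,240 positions
# -> determinations since 2003: ~3,480 positions
# -> MRAs, 2014-2019: ~7,540 positions
# -> determinations, 2014-2019: ~1,160 positions
```

In other words, roughly 3,500 of 58,000 positions have a completed determination since 2003, which illustrates the scale of the remaining effort.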
In 2019, officials in the manpower requirements determination program told us that MRAs were to be updated every 5 years. Officials stated that this is a best practice that aligns with the Department of Homeland Security’s workforce strategy. Pacific Area Command officials we spoke with also told us that they view the guidance as requiring that MRAs be no more than 5 years old. Additionally, the Coast Guard’s 2015 Staffing Logic and Manpower Requirements Manual states that the Manpower Requirements Determination Program Division Chief is responsible for ensuring that each unit type has undergone an MRA within the past 5 years. Nevertheless, in November 2019, Coast Guard officials in the manpower requirements determination program told us that they view updating MRAs every 5 years as a goal, not a requirement. We found that the Coast Guard does not have current guidance explaining the process steps for Coast Guard officials to follow to systematically execute the manpower requirements determination process. Coast Guard officials told us they were using a combination of two documents to guide the manpower requirements determination process, and neither document was both current and comprehensive in detailing the steps to follow. For example, the 2015 Staffing Logic and Manpower Requirements Manual contains individual process step requirements, but has been rescinded. In contrast, the 2018 Manpower Requirements Manual provides current policy, but does not include guidance on the process steps that program officials are to follow. In its 2018 manual, the Coast Guard rescinded the 2015 manual without replacing or affirming its process steps. Officials stated that analysts in the manpower requirements determination program use the rescinded 2015 guidance in executing the process because they have no other guidance to follow.
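The 5-year currency expectation discussed above can be expressed as a simple check over a master list of MRA completion dates. The unit names and years below are hypothetical placeholders, offered only to illustrate the check the 2015 manual implies:

```python
# Flag unit types whose most recent MRA is older than the 5-year
# guideline in the 2015 manual. All data below are hypothetical.
CURRENT_YEAR = 2019
MAX_AGE_YEARS = 5

mra_completion_years = {
    "unit type A": 2016,
    "unit type B": 2008,
    "unit type C": 2012,
}

overdue = sorted(
    unit for unit, year in mra_completion_years.items()
    if CURRENT_YEAR - year > MAX_AGE_YEARS
)
print(overdue)  # unit types due for an updated MRA
# -> ['unit type B', 'unit type C']
```

Running such a check presupposes a maintained master list of MRAs, which, as discussed below, the Coast Guard did not keep.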
Officials in the manpower requirements determination program provided several reasons why the program has not consistently ensured that all steps are completed. First, officials told us that completing a determination for each MRA had not always been a priority for the Coast Guard. Officials said that in some cases manpower requirements determinations were not completed due to disagreement among stakeholders about how to apply the results of the MRA. For example, officials said that while an MRA may find that a program is significantly understaffed, some stakeholders may argue against including the full scale of the shortfall in the determination due to limited resources and competing needs. Second, officials stated that some determinations were not completed because some programs requesting MRAs were not interested in obtaining the final determination upon receiving the MRA. Specifically, they explained that sometimes the program that requested to initiate the manpower requirements determination process is most interested in the staffing data contained in the MRA, rather than the final determination, which formalizes the trade-offs and results proposed in the MRA. Officials in the manpower requirements determination program told us that neither the 2015 nor the 2018 manpower requirements determination guidance identified circumstances in which a manpower requirements determination was not required to be completed for an MRA. Further, program officials told us that they were not aware that the process guidance they reported using required after action reports. Coast Guard officials also stated that having the process guidance in a rescinded document had made implementing and overseeing the process a challenge due to the possibility of officials applying the guidance inconsistently.
They further said they recognized the manpower requirements determination process was not clear and needed to be revised, and that doing so may help ensure officials consistently implement the process. In June 2019, officials said they planned to issue updated guidance, but had not established a time frame for doing so. By issuing updated guidance for conducting manpower requirements determinations that outlines required process steps, and any circumstances in which the process steps do not need to be performed, the Coast Guard can better ensure that those responsible for implementing the process do so consistently.

Coast Guard Has Not Tracked the Extent of Manpower Requirements Analyses and Determinations Completed

In addition to requiring MRAs to be conducted every 5 years, the rescinded 2015 Coast Guard guidance, which officials reported using to execute the manpower requirements determination process, states that the manpower requirements determination program is to maintain and update a master list of MRAs conducted to enable the program to track and organize its workload. However, the Coast Guard has not tracked the extent to which it has assessed Coast Guard unit types through the manpower requirements determination process, as required in the 2015 process guidance, which officials report is still in use. For example, in March 2019 Coast Guard officials stated that they did not maintain a list of MRAs or manpower requirements determinations completed since the program began in 2003, and they were not aware that maintaining a list was a requirement. Officials prepared a list to respond to our request and, in April 2019, provided us with a list of MRAs and determinations the Coast Guard had completed since 2003. However, we found that the list was not accurate. The Coast Guard’s list underrepresented the number of MRAs completed by almost half.
Specifically, it showed the Coast Guard had completed MRAs for 34 unit types since 2003, whereas our review of Coast Guard documents found that the Coast Guard had completed MRAs for 63 unit types during this span. We also found that the Coast Guard had not accurately reported to Congress about its progress in assessing workforce requirements. While the list the Coast Guard compiled for us underrepresented the number of MRAs completed, the information it provided to Congress in its April 2018 report overrepresented the extent to which it has assessed its workforce needs. Specifically, in April 2018 the Coast Guard reported to Congress that it had recently analyzed workforce needs for 54 percent of its workforce using the manpower requirements determination process and its activity models. However, more than half of the MRAs it had completed had not been updated in the past 5 years, as Coast Guard guidance requires. We found that less than half of the Coast Guard’s reported figure—21 percent of its workforce—is supported by a workforce analysis that has been updated in the last 5 years. The Coast Guard’s manpower requirements plan does not have time frames or milestones outlining how it plans to reach its manpower requirements determination goal of completing MRAs and determinations for its entire workforce. Coast Guard officials stated that their April 2018 Manpower Requirements Plan to Congress lays out their goal with respect to conducting manpower requirements determinations. However, this plan does not include time frames or milestones for completing determinations for all unit types, nor does it signal that the Coast Guard will track MRAs and determinations it has completed. Coast Guard officials stated that they were using a multi-year program to prioritize manpower studies and complete them as resources allowed. 
When asked for further information about this plan, officials stated that there was no specific document outlining the plan; rather, the intent of the 2018 Manpower Requirements Plan was to indicate that their manpower analysis goal involves a multi-year journey. By updating its manpower requirements plan to include time frames and milestones for completing MRAs and determinations for all positions in all units, the Coast Guard can track progress toward its goal and make necessary adjustments in its planning, as needed. The Coast Guard has reported on the importance of tracking and completing manpower requirements determinations to justify its resource allocation decisions. For example, its 2018 Manpower Requirements Manual states that methods to determine workforce requirements have historically varied from program to program. This variability prevented the Coast Guard from compiling reliable workforce data and comparing workforce needs across the Coast Guard. According to the 2018 manual, manpower requirements determinations enable key decision-makers to effectively manage workforce needs because they provide the data needed to objectively predict future manpower requirements and compare staffing needs across the entire workforce. By tracking and documenting the extent to which it has completed MRAs and determinations for its workforce, the Coast Guard will be better positioned to know which unit types have a defensible basis for the number and type of personnel needed to meet mission demands and to prioritize which MRAs to conduct.

Coast Guard Does Not Have Information on the Resources Needed to Reach its Manpower Assessment Goal

The Coast Guard has not determined the resources—both staff and funding—its manpower requirements determination program needs to meet its goal of completing determinations for all units.
Program officials told us that they have used the manpower requirements determination process for a limited share of the workforce because of resource limitations. Coast Guard documents show that it has been almost 10 years since the Coast Guard last performed an MRA for the manpower requirements determination program to determine its own workforce needs. That 2010 analysis found that the program would require at least 30 full-time equivalent positions to accomplish the Coast Guard’s goal of completing about 25 MRAs each year, which would enable it to assess the Coast Guard’s 158 unit types roughly every 5 years. As of January 2020, the program had six analysts dedicated to conducting manpower analyses and, according to officials, may be able to produce only one MRA each year. Program officials estimated that the cost of conducting an MRA may vary widely, from $170,000 to more than $5 million for more complex unit types. However, program officials told us they generally did not track information on the costs of conducting MRAs. According to officials, the manpower requirements determination program cannot track all such costs because cost data are spread across different program offices. For example, officials stated that for contracted MRAs, contracting fees are easier to identify, but the manpower requirements determination program does not have access to other major costs, such as travel by officials conducting the analysis. While the manpower requirements determination program oversees the MRA process and is tasked with ensuring manpower requirements determinations are completed for every unit in the Coast Guard, officials said that generally the program that is the subject of the MRA provides funding for the study, and only that program maintains access to travel costs associated with the MRA. They said the manpower requirements determination program does not request cost information from the programs requesting MRAs.
Additionally, the manpower requirements determination program does not collect cost information from programs that conduct their own MRAs. The Coast Guard has increasingly used contractors to complete MRAs. While the Coast Guard has not tracked the costs of conducting MRAs, Coast Guard analysis has shown that having MRAs completed by contractors is more costly than completing them in-house. Program officials said the increased reliance on contractors is due to staffing limitations. For example, from calendar years 2010 through 2019, contractors completed nearly half of the Coast Guard’s 54 MRAs. Figure 7 shows the MRAs and manpower requirements determinations completed by the Coast Guard and contractors from 2003 through 2019. Coast Guard guidance states that in a resource-constrained environment, leaders need to make risk-based decisions to prioritize tasks and optimally allocate resources to execute the service’s missions. In addition, our work in the area of strategic human capital management has shown that reassessing resource requirements helps organizations achieve their missions and match resources to their needs. Developing information on the staffing and funding the manpower requirements determination program needs to achieve its manpower goal would better position the Coast Guard to make informed trade-off decisions and allocate its limited resources to those units most in need of manpower requirements determinations.

Conclusions

The Coast Guard’s roles and responsibilities have grown over the past two decades following the terrorist attacks of 9/11. Among other things, increased national security roles, first response duties during natural disasters, and compliance duties for ensuring the safety of increased commercial maritime activity have underscored the importance of the Coast Guard’s multiple missions. Organizational changes it made through the modernization effort were intended to realign operations and support functions.
To that end, the creation of the headquarters organizations achieved modernization’s initial goals. However, the Coast Guard continues to change as a result of modernization, and it has placed less effort on ensuring achievement of the longer-term goals of creating a more efficient and effective organization. Establishing a process for tracking and measuring the effectiveness of the organizational changes brought on by modernization, including measuring employee satisfaction, would better position the Coast Guard to understand whether its goals have been achieved. The Coast Guard reported to Congress in April 2018 that it faced challenges in meeting its daily mission demands because it was operating below the workforce necessary to meet those demands. However, the service does not have a complete picture of the workforce necessary to meet its mission demands or of whether its existing mix of personnel is efficiently and effectively allocated across units. The Coast Guard considers its manpower requirements determination process instrumental in determining the workforce needed to perform its duties, and the process is the foundation of the models the Coast Guard uses to determine workforce size in times of contingency or heightened security. Updated guidance for staff tasked with conducting such assessments would enable the Coast Guard to better ensure that the process is fully implemented. Further, as of January 2020, the Coast Guard had updated analyses for only a small fraction of its workforce and had not updated its Manpower Requirements Plan with time frames and milestones for achieving its goal of assessing its entire workforce. Additionally, it does not have information on the extent to which analyses have been completed over the years or on the resources it needs to complete assessments for its entire workforce.
By tracking and updating the completion of MRAs and determinations, updating its plan to complete manpower requirements determinations, and obtaining information on the resources needed to implement such a plan, the Coast Guard will better ensure that it has the right number of people with the right set of skills to meet its mission demands. In this way, the Coast Guard will be better positioned to inform Congress of its workforce and associated resource needs.

Recommendations for Executive Action

We are making the following six recommendations to the Coast Guard:

The Commandant of the Coast Guard should establish a systematic mechanism to track implementation and measure the Coast Guard’s progress in achieving organizational change goals. (Recommendation 1)

The Commandant of the Coast Guard should establish a mechanism to periodically seek and monitor employee satisfaction with organizational change efforts. (Recommendation 2)

The Commandant of the Coast Guard should update its Manpower Requirements Manual with guidance for how to execute its manpower requirements determination process, and take steps to ensure the process is implemented. (Recommendation 3)

The Commandant of the Coast Guard should track and document the extent to which it has completed manpower requirements analyses and determinations for each unit type. (Recommendation 4)

The Commandant of the Coast Guard should update its April 2018 Manpower Requirements Plan to include time frames and milestones for completing manpower requirements analyses and determinations for all positions and units. (Recommendation 5)

The Commandant of the Coast Guard should determine the resources its manpower requirements determination program needs, both staff and funding, to achieve its goal of completing manpower requirements determinations for all positions and units. (Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of this report to DHS for review and comment.
DHS provided comments, reproduced in appendix V. DHS concurred with our six recommendations and described actions planned to address them. DHS also provided technical comments, which we incorporated into the report as appropriate. With regard to our first recommendation, DHS stated that the Coast Guard’s Office of Resources, Organizational Analysis, and Workforce Management will update the Coast Guard Organizational Manual to establish policy requiring that requests to change organizational structure include a plan, and establish a mechanism to track implementation and measure progress in achieving organizational change goals. The Coast Guard estimated completing the effort by December 31, 2020. With regard to our second recommendation, DHS stated that Coast Guard leadership agrees that mechanisms to periodically seek and monitor employee satisfaction with organizational change efforts are valuable. DHS stated that the Coast Guard already conducts periodic surveys, that each of these instruments provides opportunities for the workforce to provide feedback, including on organizational issues, and that it seems preferable for survey owners to add questions to existing surveys rather than implement new survey instruments. DHS requested that GAO consider the recommendation implemented because such feedback mechanisms were already in place, and therefore establishing new mechanisms was unnecessary. As we note in our report, it is important that the Coast Guard identify challenges, if any, to meeting the goals of organizational change in a timely manner. We found the Coast Guard’s current surveys do not capture employee perspectives as organizational changes are implemented.
In determining whether to close this recommendation, we will review Coast Guard documentation demonstrating that the Coast Guard has modified its existing surveys with added questions that monitor employee satisfaction with organizational changes, and that it has plans for implementing the surveys in a timely manner. With regard to our third recommendation, DHS stated that the Coast Guard’s Office of Human Resources Strategy and Capability is developing a Tactics, Techniques and Procedures document to provide guidance for executing the manpower requirements determination process. The document will provide additional guidance on the overall MRD process, including explicit directions for the collection and analysis of manpower data, and will establish Coast Guard enterprise standards for key factors and allowances used when conducting manpower analysis. The Coast Guard estimated completing the effort by September 30, 2020. With regard to our fourth recommendation, DHS stated that in December 2019 the Coast Guard’s Office of Human Resources Strategy and Capability initiated the process to document and track manpower requirements in the Coast Guard’s system of record. The Coast Guard estimated completing the effort by December 31, 2020. With regard to our fifth recommendation, DHS stated that the Coast Guard’s Assistant Commandant for Human Resources Directorate would update its Manpower Requirements Plan during the next periodic report submitted to Congress, due in fiscal year 2022. The Coast Guard estimated completing the effort by March 31, 2022. With regard to our sixth recommendation, DHS stated that the Coast Guard’s Office of Human Resources Strategy and Capability will review its September 2010 MRA, revalidate the inputs, and update the findings of the MRA to reflect the current needs of the manpower requirements determination program. The Coast Guard estimated completing the effort by September 30, 2020. 
We are sending copies of this report to the appropriate congressional requesters, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or AndersonN@gao.gov. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope and Methodology This appendix provides additional information on our objectives, scope, and methodology. This report examines (1) how the Coast Guard modernized its organization and the extent to which it has applied key reform practices to its organizational change efforts and (2) the extent to which the Coast Guard has assessed its workforce needs. To address our first objective, we analyzed Coast Guard documents related to the modernization effort. The documents included policies and guidance regarding how the effort was to be implemented, as well as descriptions of the status of these efforts. To evaluate the extent to which the Coast Guard applied key reform practices and considerations for evaluating organizational change efforts, we assessed Coast Guard policies and procedures related to Coast Guard operations against the key practices we outlined in our June 2018 report on government reorganization. We collected and analyzed documentation related to the Coast Guard’s actions taken to implement organizational change efforts, such as the modernization effort and the integration of the Coast Guard’s reserve component into the headquarters governance structure. We assessed these reports, data, and documents against selected criteria for key practices and considerations for agency reorganization identified in our June 2018 report.
We selected relevant key practices by examining each of the four categories and 12 subcategories identified in our June 2018 report to determine the extent to which the practices under each applied to the Coast Guard’s modernization and reserve component integration efforts. The four categories are “Goals and outcomes,” “Process for developing reforms,” “Implementing the reforms,” and “Strategically managing the federal workforce.” We deemed two subcategories under the “Implementing the reforms” category, “Leadership focus and attention” and “Managing and monitoring,” and one subcategory, “Strategic workforce planning,” under the “Strategically managing the federal workforce” category as relevant criteria for assessing the Coast Guard’s modernization efforts. We deemed the remaining nine subcategories not relevant to the Coast Guard’s modernization efforts because modernization was implemented in 2006, and retrospective analysis of these criteria would not enable the agency to make changes. For the three subcategories included in our assessment, we identified the seven key practices from these subcategories that were most relevant to the Coast Guard’s modernization efforts and applied those practices in our assessment. We reviewed Coast Guard documentation and then made qualitative determinations about the extent to which the Coast Guard’s implementation of its modernization efforts addressed these criteria. A second analyst independently reviewed and validated each determination. We evaluated the Coast Guard’s actions against key reform practices to determine whether they were generally, partially, or not at all applied. Generally applied. Agency documentation demonstrated that Coast Guard officials substantially applied applicable key practices. Partially applied. Agency documentation demonstrated that Coast Guard officials applied some key practices but not to a significant degree. Not at all applied.
Agency documentation did not demonstrate that Coast Guard officials applied key practices. We deemed the following seven subcategories under the four categories as relevant criteria for assessing the Coast Guard’s reserve component integration efforts: “Establishing goals and outcomes,” “Involving employees and key stakeholders,” “Using data and evidence,” “Addressing high risk and longstanding management challenges,” “Leadership focus and attention,” “Managing and monitoring,” and “Strategic workforce planning.” We determined that the remaining three subcategories were not relevant to the Coast Guard’s reserve component integration efforts because we deemed the key practice more applicable to a government-wide effort or determined that it was too early to consider because the reserve integration effort was in its initial implementation stage. For the seven subcategories included in our assessment, we determined that 19 key practices from these subcategories were most relevant to the Coast Guard’s reserve component integration efforts and applied those practices in our assessment. We reviewed Coast Guard documentation and made qualitative determinations about the extent to which the Coast Guard’s reserve component integration actions addressed these criteria. A second analyst independently reviewed and validated each determination. We assessed the Coast Guard’s actions using a four-point scale based on the modernization effort scale: (1) generally applied; (2) partially applied; (3) minimally applied; and (4) not at all applied. Minimally applied. Agency documentation demonstrated that Coast Guard officials applied a limited number of key practices, with significant gaps associated with each key practice. Our determinations are preliminary observations of the effort because the Coast Guard’s reserve component organizational effort was in its nascent stages during our review.
This presented several challenges in determining the point at which Coast Guard actions justified a rating of generally applied versus partially applied. We applied the following decision rules to resolve these ambiguities: If at least one practice in a subcategory was rated partially applied, and no practice was rated lower, then we concluded that the subcategory as a whole was partially applied. If one practice in a subcategory was rated generally applied, but one or more other key practices were rated either partially applied or not at all applied, then we concluded that the subcategory as a whole was partially applied. If one practice in a subcategory was rated partially applied, but one or more other key practices were rated either minimally applied or not at all applied, then we concluded that the subcategory as a whole was minimally applied. We interviewed cognizant officials at Coast Guard headquarters and field units, including the Atlantic and Pacific Area commands, and two Coast Guard districts and two Coast Guard sectors collocated with them. We interviewed officials from the two area commands because of their role in implementing organizational changes, and the districts and sectors for their perspectives on the Coast Guard workforce assessment process. Headquarters and field officials interviewed were responsible for the overall management of their organizations, in addition to officials responsible for facilitating the implementation of organizational change efforts. We reviewed prior GAO reports on organizational realignment, Coast Guard organizational changes, and high-risk issues in the federal government. In addition, we reviewed other reports evaluating long-standing agency management challenges. Finally, we reviewed documents and information on these organizational change efforts and compared them against Coast Guard guidance on organizational changes.
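The decision rules for rolling up individual key-practice ratings into a single subcategory rating amount to a small aggregation function. The following is a minimal sketch in Python; the function name and rating labels are our own, and the fallback handling for uniformly rated subcategories (all generally applied, or none applied) is an assumption rather than GAO’s stated methodology.

```python
def rate_subcategory(practice_ratings):
    """Roll up key-practice ratings into one subcategory rating.

    Ratings (hypothetical labels): "generally", "partially",
    "minimally", "not at all".
    """
    s = set(practice_ratings)
    # Rule: a partial rating alongside minimally/not-at-all ratings
    # rolls up to minimally applied.
    if "partially" in s and s & {"minimally", "not at all"}:
        return "minimally"
    # Rule: a general rating alongside any lower rating rolls up
    # to partially applied.
    if "generally" in s and s & {"partially", "minimally", "not at all"}:
        return "partially"
    # Rule: partial ratings with nothing rated lower roll up to
    # partially applied.
    if "partially" in s:
        return "partially"
    # Fallback (assumed): uniform subcategories keep their rating.
    return "generally" if s == {"generally"} else "not at all"
```

For example, a subcategory with one generally applied practice and one partially applied practice rolls up to partially applied, while a partially applied practice alongside a not at all applied one rolls up to minimally applied.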
To address our second objective, we analyzed Coast Guard documents related to management tools the Coast Guard has developed to determine its workforce requirements and identify personnel needs. Documentation included guidance and analysis related to developing workforce staffing needs, and strategies that set out the Coast Guard’s stated human capital principles. As with the first objective, we interviewed cognizant officials at Coast Guard headquarters, its Atlantic and Pacific area commands, and the two Coast Guard districts and two sectors collocated with them. Headquarters officials we interviewed were responsible for developing manpower requirements and overseeing implementation of workforce assessments for Coast Guard units. We also reviewed prior GAO reports on workforce planning and Coast Guard personnel issues. Finally, we reviewed documents and information on these efforts to assess workforce requirements, and compared them against Coast Guard guidance for conducting the manpower requirements determination process and our prior work related to strategic human capital management. To assess the extent to which the Coast Guard has supported its workforce with manpower requirements analyses and determinations, we analyzed all manpower requirements analysis (MRA) and determination documents the Coast Guard completed from 2003, when it began implementing the manpower requirements determination process, through calendar year 2019, the last full year of data available at the time of our review. Specifically, we requested the entire collection of MRA and determination documents from the Coast Guard. We then requested the number of positions that make up each unit type with a completed MRA. We assessed the reliability of the Coast Guard’s data through electronic testing, reviewing documentation, and interviewing Coast Guard headquarters and field unit officials regarding how these data were collected and used.
We determined that these data were sufficiently reliable for determining the number of positions within each type of Coast Guard unit. With this information, for every MRA and determination completed, we calculated the number of positions in the Coast Guard’s workforce supported by available data. We conducted this performance audit from December 2018 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Overview of the Coast Guard’s Modernization Effort Case for Change, Intended Goals, and Examples of Key Actions Taken, Calendar Years 2006 - 2015 In 2006, the Commandant of the Coast Guard issued 10 Commandant Intent Action Orders intended to address elements of the Coast Guard’s command and control structure, mission support systems, and business processes that were identified as detracting from mission execution. Table 2 provides an overview of the issues that drove the Coast Guard’s modernization effort, the intended goals for the effort as outlined in the 2006 Commandant Intent Action Orders, and examples of key actions the Coast Guard has taken to address the goals. Appendix III: Extent to Which the Coast Guard’s Reorganization of its Reserve Component Applied Key Reform Practices The Coast Guard’s reserve component is its only workforce dedicated to respond to contingency operations such as natural and manmade disasters. The Coast Guard found that demand for reserve forces to augment its active duty workforce had grown as the service was called to respond to more contingencies. 
In 2006, under its organizational modernization effort, the Coast Guard issued a goal to optimize the use of the reserve component by ensuring the reserve workforce had the necessary training and support. The Coast Guard shifted governance of the reserve component under the Deputy Commandant for Mission Support; however, under this structure, the Coast Guard treated the reserve workforce as a program rather than as a distinct component of the United States military. As such, in 2018, the Coast Guard chartered a project team to evaluate the state of the reserve component’s governance and develop alternate options to better integrate the reserves into the Deputy Commandant for Operations. In 2019, the Coast Guard integrated its reserve component into its Deputy Commandant for Operations governance structure. Officials told us that the goals for the new reserve component organization are to provide headquarters decision-makers enhanced visibility of operational readiness and of competencies assigned and attained, and to use predictive modeling to look 2 or 3 years ahead to anticipate readiness posture and administrative readiness. They noted that achieving these goals relies on better data collection and on developing metrics that can capture Coast Guard-wide information. Officials told us that, as of June 2019, the new organization was at initial operational capacity using existing staff. Table 3 provides our assessment of the extent to which the Coast Guard’s actions to reorganize its reserve component governance structure applied key reform practices, with examples of actions and deficiencies. Appendix IV: Coast Guard Manpower Requirements Determination Process Completed from Calendar Years 2014 - 2019 From calendar years 2014 through 2019, the Coast Guard implemented the manpower requirements determination process for 30 of its 158 unit types. The Coast Guard completed manpower requirements analyses for 30 unit types.
Of these 30 manpower requirements analyses, the Coast Guard completed the required manpower requirements determinations—establishing a manpower requirement—for only four unit types. Table 4 shows the most recent manpower requirements analyses, and corresponding determinations, completed from calendar years 2014 through 2019. Appendix V: Comments from the Department of Homeland Security Appendix VI: GAO Contact and Staff Acknowledgements GAO Contacts Staff Acknowledgements In addition to the above contacts, Jason Berman (Assistant Director), Jennifer Kamara (Analyst-in-Charge), Ben Atwater, Susan Czachor, Elizabeth Dretsch, Eric Hauswirth, Tracey King, Daniel Kuhn, and Kevin Reeves made key contributions to this report.
Why GAO Did This Study The Coast Guard is a multi-mission maritime military service responsible for maritime safety, security, and environmental protection, among other things. Since 2006 the Coast Guard has implemented organizational changes to improve its effectiveness and efficiency. During this time, the Coast Guard also created a workforce assessments process to determine the number of personnel and skills required to meet mission needs. In April 2018, the Coast Guard reported to Congress that it was operating below the workforce necessary to meet its mission needs. GAO was asked to review the status of the Coast Guard's modernization and workforce assessment efforts. Among other things, this report examines the extent to which the Coast Guard (1) applied key practices for agency reorganization and (2) has assessed its workforce needs. GAO analyzed Coast Guard documents used to plan and implement its modernization effort against GAO key practices for agency reorganization. GAO also analyzed Coast Guard workforce assessments and data from 2003 through 2019. GAO also reviewed policy and planning documents and interviewed Coast Guard officials. What GAO Found The U.S. Coast Guard (Coast Guard) realigned its mission planning and mission support functions through an effort known as “modernization,” but did not consistently apply key practices for agency reorganization in implementing the effort. Of seven key practices, the Coast Guard did not apply two and partially applied three. For example, the Coast Guard did not measure its progress in achieving the goal of modernization, as key practices recommend. Coast Guard documents for organizational change and associated guidance do not require such practices to be followed. By ensuring such practices are implemented, the Coast Guard will be better positioned to determine the extent to which its investments meet modernization's goal of improving effectiveness and efficiency. 
Although the Coast Guard has informed Congress that it needs to increase its workforce, it has assessed a small portion of its workforce needs. Its preferred tool for assessing workforce needs is its manpower requirements determination process, which includes manpower requirements analyses (MRA) and is completed with a manpower requirements determination (MRD). Coast Guard guidance states that MRAs are to be updated every 5 years, and according to its April 2018 Manpower Requirements Plan, the Coast Guard's goal is to complete MRDs for all of its 58,000 personnel and 158 unit types. However, the Coast Guard had completed MRAs for 13 percent of its workforce and MRDs for 2 percent over the past 5 calendar years (see figure). The Coast Guard's plan does not include time frames and milestones for how it will achieve its workforce assessment goal, and information on the resources it needs to complete MRDs for all positions and units has not been updated in 10 years. By updating its plan to complete manpower requirements determinations and obtaining information on the resources needed to achieve its workforce assessment goal, the Coast Guard will be better positioned to ensure that it has the right number of people with requisite skills in the right units to meet its mission demands and to inform Congress of its manpower needs. What GAO Recommends GAO is making six recommendations, including that the Coast Guard measure progress in achieving the goal of modernization, update a plan with time frames and milestones for completing its workforce assessment goal, and obtain information on the resources needed to meet its goal. DHS concurred with our recommendations.
gao_GAO-19-272T
Background VA’s Current Disability Compensation Appeals Process VA’s process for deciding veterans’ eligibility for disability compensation begins when a veteran submits a claim to VA. Staff in one of VBA’s 57 regional offices assist the veteran by gathering additional evidence, such as military and medical records, that is needed to evaluate the claim. Based on this evidence, VBA decides whether the veteran is entitled to compensation and, if so, how much. A veteran dissatisfied with the initial claim decision can generally appeal within 1 year from the date of the notification letter sent by VBA. Under the current appeals process (now referred to by VA as the legacy process), an appeal begins with the veteran filing a Notice of Disagreement. VBA then re-examines the case and generally issues a Statement of the Case that represents its decision. A veteran dissatisfied with VBA’s decision can file an appeal with the Board. In filing that appeal, the veteran can indicate whether a Board hearing is desired. Before the Board reviews the appeal, VBA prepares the file and certifies it as ready for Board review. If the veteran requests a hearing to present new evidence or arguments, the Board will hold a hearing by videoconference or at a local VBA regional office. The Board reviews the evidence and either issues a decision to grant or deny the veteran’s appeal or refers the appeal back to VBA for further work. VA’s New Appeals Process According to VA’s appeals plan, VA intends to implement the Act by February 2019 by replacing the current appeals process with a process offering veterans who are dissatisfied with VBA’s decision on their claim five options. Two of those options afford the veteran an opportunity for an additional review of VBA’s decision within VBA, and the other three options afford them the opportunity to bypass additional VBA review and appeal directly to the Board. Under the new appeals process, the two VBA options will be:
1. Request higher-level review: The veteran asks VBA to review its initial decision based on the same evidence but with a higher-level official reviewing and issuing a new decision.
2. File supplemental claim: The veteran provides additional evidence and files a supplemental claim with VBA for a new decision on the claim. The veteran can also request a VBA hearing.
The three Board options will be:
3. Request Board review of existing record: The veteran appeals to the Board and asks it to review only the existing record without a hearing.
4. Request Board review of additional evidence, without a hearing.
5. Request Board review of additional evidence, with a hearing.
In November 2017, VA initiated a test of the new VBA higher-level review and supplemental claim options. According to VA’s appeals plan, a purpose of this test—the Rapid Appeals Modernization Program (RAMP)—is to reduce legacy appeals by providing veterans with a chance for early resolution of their claims within VBA’s new process. Participation in RAMP is voluntary, but veterans must withdraw their pending legacy appeal to participate, according to VA’s appeals plan. VA Has Not Provided Complete Information on Four Elements in the Act In our March 2018 report, we found that VA’s November 2017 plan for implementing a new disability appeals process while attending to appeals under way in the current (legacy) process addressed 17 of 22 elements required by the Act. For the 5 remaining elements, we found that it partially addressed 4 elements related to monitoring implementation, projecting productivity, and workforce planning, and did not address 1 element related to identifying total resources. This element called for delineating the resources needed by VBA and the Board to implement the new appeals process and address legacy appeals.
We recommended in March 2018 that VA address all 22 required elements in the Act in VA’s appeals plan to Congress—including delineating resources required for all VBA and Board appeals options— using sensitivity analyses and results from its test, RAMP, where appropriate and needed. Since our March 2018 report, VA has taken some action on each of the five elements that we found were not fully addressed at that time. For example, VA added details related to projecting staff productivity, identifying total resources, as well as determining personnel requirements and productivity projections for processing appeals. For identifying total resources, VA added FTE information for other offices that help implement the appeals process and prepared a model to project resource needs. Although VA now addresses the 1 element related to projecting productivity, it only partially addresses 4 elements related to monitoring implementation, workforce planning, and delineating the total resources. For example, as of November 2018, VA’s plan does not contain metrics for monitoring implementation. Moreover, for total resources, the updated plan does not delineate the total resources required by VBA and the Board, such as the resources necessary for information technology and training. We acknowledge that in some cases delineating total resources could prove challenging, such as delineating information technology resources for the legacy and new appeals processes. We also acknowledge that implementing corrective actions to fully address these 4 elements may be challenging within the next several weeks, but we continue to believe VA has an opportunity to further address these 4 elements as part of certifying the agency’s readiness prior to the full implementation of the new process. 
VA Has Addressed Some Gaps in Its Plans to Monitor and Assess Performance, though Further Steps Remain In our March 2018 report, we found gaps in VA’s planning for how it will monitor and assess performance of the new appeals process when it is implemented. Specifically, we reported that the plan did not (1) establish timeliness goals for two of the three Board options (i.e., Board review of additional evidence without a hearing and Board review of additional evidence with a hearing); (2) articulate aspects of performance important for managing appeals, such as accuracy of decisions, veteran satisfaction with the process, or cost; (3) explain how the performance of the new appeals process would be compared to that of the legacy process; or (4) explain how the agency would monitor relative workloads of, and resources devoted to, the new and legacy appeals processes. To address these gaps, we recommended that VA clearly articulate in its appeals plan how VA will monitor and assess the new appeals process compared to the legacy process, including specifying a balanced set of goals and measures—such as timeliness goals for all VBA appeals options and Board dockets, and measures of accuracy, veteran satisfaction, and cost—and related baseline data. Articulating a balanced set of goals that cover key aspects of managing appeals is important to avoid promoting skewed behaviors (e.g., favoring timeliness over accuracy) and to fully understanding performance. In its progress reports, VA addressed some but not all aspects of this recommendation (see table 1). VA has made progress in monitoring performance and addressing workload changes in its new and legacy appeals processes, but still lacks a complete set of balanced goals and measures. 
As we noted in our July 2018 testimony, VA has developed sensitivity models and other analyses to monitor and forecast future VBA and Board workloads, production, and staffing requirements to help VA manage the legacy and new appeals processes. However, VBA and the Board have yet to specify a complete set of balanced goals for monitoring the performance of the new appeals process. According to the November 2018 progress report, the Board plans to develop timeliness goals after VA fully implements the new appeals process. Until VA fully develops a set of balanced goals and measures, the agency risks not fully understanding how well the reforms are performing. Regarding comparing the performance of the new and legacy appeals processes, VA has previously reported that the agency plans to implement the reporting requirements in section 5 of the Act. This section requires VA to report performance measures related to, among other things, timeliness, productivity, and outcomes, without specifying whether or how VA should compare performance of the new versus legacy processes. In November 2018, VBA and Board officials told us they intend to use timeliness and productivity metrics from section 5 to compare the two processes. However, in its updated plans to date, VA has been reporting average timeliness of decisions made to date under RAMP—VA’s test of the two VBA options—without reporting the average time cases are pending. Moreover, VA has not been reporting timeliness data on both decisions and pending cases according to the month that they entered into RAMP, which would present a more balanced indication of performance and trends. In November 2018, VBA and Board officials told us they would consider reporting timeliness using a monthly cohort that reflects when appeals were filed. VBA and Board officials also said they have taken steps to collect, through surveys, comparable information on veterans’ satisfaction with the new and legacy appeals processes.
According to VBA and Board officials, they have pre-tested the surveys—which is considered a best practice by survey methodologists—and are coordinating the survey efforts with one another. VBA and Board officials also told us that the agency will report on accuracy and outcomes (grants and denials of claims) in the new process. However, they also stated that these measures would not provide a fair comparison with the legacy process because the Act eliminated several of the requirements formerly required in the legacy appeals administrative processes. Although VA officials said they would develop a plan for comparing the performance of the two appeals processes after the new process is fully implemented, they did not indicate how soon they would do so. Developing such a plan would better position the agency to fully understand whether the new process is an improvement. VA Has Augmented Its Master Schedule to a Limited Extent Our March 2018 report identified elements of a high-quality and reliable implementation schedule that were missing from VA’s master schedule for appeals reform. Specifically, we reported that VA’s high-level master schedule—which the agency included with its November 2017 plan—did not (1) include all key activities; (2) show which activities must finish prior to the start of other activities, or the amount of time an activity could be delayed before the delay affects VA’s estimated implementation date; (3) reflect interim goals and milestones for monitoring implementation; or (4) assign resources for activities. We recommended that VA augment the master schedule for its appeals plan to reflect all activities—such as modifications to information technology systems—as well as assigned responsibilities, interdependencies, start and end dates for key activities for each workgroup, and resources. These steps establish accountability and reduce overall risk of implementation failures. 
In response to our recommendation, the Board, VBA and other VA administrations made progress over time with developing and integrating underlying plans into the integrated master schedule (IMS) in spring and summer 2018. According to VA officials, VA set a baseline schedule for implementing appeals reform in response to the potential February 2019 implementation date established in the Act. Since November 2017, VA’s plan and progress reports have stated that VA uses an agency-wide governance structure to coordinate implementation, and regularly uses the schedule as a management tool for monitoring progress on appeals reform. For example, the Board’s project manager meets regularly with those responsible for major activities to check progress, including weekly meetings with leadership, and identifies and corrects issues related to schedule execution. In October 2018, VA provided us with lower-level schedules and information that allowed us to conduct a more detailed assessment of VA’s IMS against applicable best practices criteria. The six criteria we assessed lower-level schedules against were: Capturing all activities: schedule should reflect all activities necessary to perform work to accomplish a project’s objective. Sequencing activities: activities should be logically sequenced in the order they are to be carried out so that critical program dates can be met. Assigning resources: schedule should reflect all resources necessary to complete work, verify whether resources will be available, and identify any constraints. Verifying horizontal and vertical traceability: schedule should be rational and logically sequenced, account for interdependencies among activities, and provide a way to evaluate the current status (horizontal traceability). Also, the various levels of a schedule— summary, intermediate, and detailed—should be consistent with one another and enable different teams to work to the same schedule expectations (vertical traceability). 
Updating the schedule using actual progress and logic: maintain and continually update the schedule to reflect a realistic forecast of start and end dates of activities.

Maintaining a baseline schedule: use the original configuration of the program plan as a point of comparison for the current plan to manage scope, timeframes, and required resources.

We found that, while VA has made progress in providing more detail, its master and underlying schedules only minimally met sound practices for project management. Specifically, as with our March 2018 assessment, we found that the schedule does not contain enough detail to manage the work or provide a realistic representation of the resources and time needed for this project. For example, the schedule did not contain a work breakdown structure that defines the work, activities, and resources necessary to accomplish implementation. Moreover, half of all the remaining activities are missing the logic that shows which activities must finish prior to the start of other activities. In addition, the schedule contains an invalid critical path, meaning that the schedule does not show how long key activities could be delayed before such delays affect VA’s estimated implementation date. Without a valid critical path, management cannot focus on the activities that, if they slip, will detrimentally affect key program milestones and deliveries. To address our March 2018 recommendation, VA would need to ensure that all activities are accounted for, that scheduled activities appear in the correct order, that resources are properly allocated, that a valid critical path can be identified, and that a schedule risk analysis accounts for all risks. We provide a more detailed explanation of our assessment results in appendix I. In addition, establishing an overly optimistic schedule can reduce capacity for carrying out a project and potentially create pressure to sacrifice the quality of work activities to meet deadlines.
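As background on the critical-path concept invoked in this assessment: a schedule's critical path is the chain of dependent activities with zero float, so a slip on any of them moves the project finish date. A minimal sketch (hypothetical activities and durations, not VA's actual schedule data) illustrates how it is computed:

```python
# Minimal critical-path method (CPM) sketch on a hypothetical activity
# network. Activity names and durations are illustrative only.

# activity -> (duration in days, list of predecessor activities)
activities = {
    "design_forms": (10, []),
    "update_it":    (30, ["design_forms"]),
    "train_staff":  (15, ["design_forms"]),
    "pilot_test":   (20, ["update_it", "train_staff"]),
    "full_rollout": (5,  ["pilot_test"]),
}

def critical_path(acts):
    memo = {}
    def earliest_finish(a):
        # Earliest finish = own duration plus the latest predecessor finish.
        if a not in memo:
            dur, preds = acts[a]
            memo[a] = dur + max((earliest_finish(p) for p in preds), default=0)
        return memo[a]
    # The project finish is the largest earliest-finish time.
    end = max(acts, key=earliest_finish)
    # Walk back along the predecessors that drive each activity's start.
    path = [end]
    while acts[path[-1]][1]:
        path.append(max(acts[path[-1]][1], key=earliest_finish))
    return list(reversed(path)), memo[end]

path, total = critical_path(activities)
print(path, total)  # activities with zero float drive the finish date
```

In this sketch, delaying any activity on the returned path moves the finish date, while train_staff carries float. A schedule with missing or broken logic, as described above, cannot support this calculation, which is what makes a critical path invalid.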
Moreover, many of VA’s activities are slated to be completed concurrently just before implementation, posing a significant risk to implementing reform in February. For example, according to VA’s schedule, the agency needs to complete 117 activities after January 1, 2019. Further, other VA efforts to redesign or update key aspects of VA’s disability compensation process—including the Veterans Benefits Management System (VBMS)—were not driven by robust, comprehensive planning and did not achieve their schedule goals. While VA intends to start full implementation in February, we do not know the extent to which the lack of a robust schedule poses risks to successful and smooth implementation. Even if taking corrective actions to address our findings is not feasible before February, incorporating such lessons learned into future project planning could help VA improve its project scheduling capabilities. VA Has Addressed Many, but Not All, Key Risks to Implementation In our March 2018 report, we found that VA’s appeals plan could more fully assess key risks related to implementing the new appeals process. In particular, we found that VA’s plan did not include testing of new Board options or clearly define how it would assess the RAMP test of the VBA-only options before implementing them more broadly. Further, we reported that VA’s plan had not comprehensively reflected key risks because the agency had not established a complete and balanced set of goals and measures, which are a necessary precondition to effectively assessing risk. We recommended that VA ensure that the appeals plan more fully addresses risks associated with appeals reform by, for example, assessing risks against a balanced set of goals and measures, articulating success criteria and an assessment plan for RAMP, and testing or conducting sensitivity analyses of all five appeals options before fully implementing the new appeals process.
In its progress reports, VA took many steps to address our recommendation, although key steps remain for VA to better assess the risks associated with implementing appeals reform and managing appeals workloads in the legacy process (see table 2). Sound redesign and change management practices both suggest that tests be rigorously monitored and evaluated and that further roll-out occur only after an agency takes any needed corrective action and determines that the new process is achieving previously identified success criteria. Until VA takes these remaining steps, it may not have comprehensively addressed key risks, leaving the agency less well positioned for successful implementation of appeals reform. In conclusion, VA is undertaking an ambitious effort to reform its disability appeals process—while onboarding hundreds of new staff and implementing new technology—that will affect the lives of hundreds of thousands of veterans with disabilities for years to come. Consistent with our prior recommendations, VA has made concrete progress in improving its planning for disability appeals reform while it attends to legacy appeals. Efforts such as resuming sensitivity analysis to monitor workloads and testing VBA and Board appeals options will provide useful information to guide VA through the uncertainty often associated with process change. However, VA has reported that it plans to fully implement the new disability appeals process in February 2019 even though it has yet to fully address our recommendations. While fully implementing our recommendations prior to February 2019 may not be feasible, doing so would better position VA to ensure successful implementation. Nevertheless, VA should still work to increase clarity around its plans prior to fully implementing reform. Moreover, many of the principles of sound planning practices that informed our recommendations remain relevant during process change.
By continuing to improve its approach to performance measurement, scheduling, and risk management, even after implementation, VA could better ensure that the new process meets veterans’ needs. Chairman Roe, Ranking Member Walz, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. GAO Contact and Staff Acknowledgments For further information about this testimony, please contact Elizabeth H. Curda at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include James Whitcomb (Assistant Director), Juaná Collymore, Michele Grgich, Sara Pelton, and Rachel Pittenger. In addition, key support was provided by Susan Aschoff, Mark Bird, Alex Galuten, Jason Lee, Sheila R. McCoy, Almeta Spencer, and Walter Vance. Appendix I: Assessment of the Extent to Which VA Followed Aspects of Scheduling Leading Practices For this testimony, we assessed the steps that the Department of Veterans Affairs (VA) has taken to address our March 2018 recommendations and what aspects remain unaddressed, including the extent to which VA is using sound practices for scheduling key projects. In summary, we identified several areas where VA’s most recent schedule falls short of sound practices. Further incorporating sound practices into future project planning could help VA improve its project scheduling capabilities. 
We reviewed VA’s integrated master schedule (IMS) for the appeals reform effort and the underlying sub-schedules to assess them against 6 of the 10 best practices that we determined were most relevant to our March 2018 recommendation that VA augment its master schedule for VA’s appeals plan to reflect all activities—such as modifications to information technology systems—as well as assigned responsibilities, interdependencies, start and end dates for key activities for each workgroup, and resources, to establish accountability and reduce the overall risk of implementation failures. Specifically, we analyzed the following related scheduling best practices: (1) Capturing all activities, (2) Sequencing all activities, (3) Assigning resources to all activities, (4) Verifying that the schedule can be traced vertically and horizontally, (5) Updating the schedule using actual progress and logic, and (6) Maintaining a baseline schedule. We assessed VA’s lower-level schedules against these 6 best practices by checking for specific problems that could hinder the schedule’s ability to respond to changes. For example, we:

o Examined whether there are any open-ended activities (i.e., activities with no predecessors and/or successors),

o Searched for activities with poor logic, such as a Start-to-Start successor only or a Finish-to-Finish predecessor only (which represent dangling logic), or logic on summary tasks rather than attached to detailed tasks (summary tasks are for organizing the schedule and should not drive the logic),

o Looked for activities with constraints that keep the schedule rigid (e.g., start no earlier than, finish no later than),

o Determined whether activities were resource loaded (which helps to cost out the schedule) and examined whether resources are over-allocated or unavailable when needed,

o Examined the schedule’s critical path to determine whether it was reliable and logical,

o Examined schedule float and determined whether it was reasonable, and

o Examined whether the schedule was baselined, its status cycle, and what deviations there were from the original plan.

We also determined whether there were any actual start or finish dates recorded in the future and whether there was any broken logic between planned tasks. We also interviewed VA officials responsible for managing the schedule. We scored each scheduling leading practice on a five-point scale: “not met,” “minimally met,” “partially met,” “substantially met,” and “fully met.” We determined the characteristic assessment rating by assigning each best practice rating a number and taking the average. Our resulting conclusions based on this assessment are as follows:

VA’s project schedule minimally meets the best practice of capturing all activities. The schedule does not have well-defined start and finish milestones, and there is no project work breakdown structure (WBS) or corresponding WBS dictionary to define the work for each WBS element. We were not able to independently verify contractor work or major handoffs and deliverables in the schedule. In addition, there were activities with duplicate names, which could make communication difficult between VA teams, particularly between team members who are responsible for updating and integrating multiple schedules.

VA’s project schedule minimally meets the best practice of sequencing activities. There are issues with missing dependencies, dangling activities, summary links, constraints, and lags that prevent the schedule from meeting this best practice.
Specifically, of the remaining activities, 55 percent have missing logic, over 12 percent are dangling, 42 percent have date constraints, and 4 percent have leads assigned. When activities are not correctly linked, the program cannot use the integrated master schedule (IMS) to identify disconnects or hidden opportunities and cannot otherwise promote efficiency and accuracy or control the program by comparing actual to planned progress. When this happens, the schedule will not allow a sufficient understanding of the program as a whole, and users of the schedule may lack confidence in the dates and the critical path. VA’s project schedule minimally meets the best practice of assigning resources. While the schedule contains ‘Task Owner’ assignments, the Task Owner information has no effect on the durations or forecasted start and finish dates of detailed activities. Information on resource needs and availability in each work period assists the program office in forecasting the likelihood that activities will be completed as scheduled. If the current schedule does not allow insight into the current or projected allocation of resources, then the risk of the program’s slipping is significantly increased. VA’s project schedule minimally meets the best practice of verifying that the schedule is traceable horizontally and vertically. There was no evidence of hand-offs within the schedule—that is, givers and receivers are not easily identifiable in the schedule. We were unable to determine the relationship between lower-level activities in the project schedule and higher-level activities and milestones in the management briefs provided to us. Specifically, we could not map the activities in the briefs to activities in the schedule. This inconsistency also prevented the verification of dates between the project schedule and higher-level management documents, even with documents that were provided from the same month as the October schedule.
Products and outcomes were not easily traced through the sequencing of effort in the project schedule. In both cases, the schedule did not respond appropriately to “shocks”; that is, greatly increasing the durations of some activities (which should increase the overall time required to complete the project) did not affect the dates of key milestones. The duration increase of each activity did not affect the overall timeline because the activity in question had a constraint that would not allow the project to extend appropriately. VA’s project schedule minimally meets the best practice of updating the schedule using progress and logic. Date anomalies, such as planned dates in the past or actual dates in the future, were found. The schedule was not current as of the date delivered to GAO. While officials report that they update the schedule regularly, the schedule update is not accompanied by a schedule narrative document that would detail changes to the current schedule and describe information such as the status of key milestone dates, changes in network logic, and a description of the current critical path(s). VA’s project schedule minimally meets the best practice of maintaining a baseline schedule. Officials said that the baseline schedule is the basis for performance measurement. But while baseline start and baseline finish dates were provided in the initial schedule, its activities were too high level, precluding the calculation of detailed variances in subsequent schedules. There is also no evidence of a schedule basis document, which would include a general overview of the purpose of the schedule, other key basis information such as an overview of assumptions, the rationale for durations specific to the CMR schedule, and required software settings. There is also no evidence of performance measurement.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study VA's disability compensation program pays cash benefits to veterans with disabilities connected to their military service. In recent years, veterans who appealed VA decisions on their claims have waited an average of 3 years. The subset of appeals resolved by the Board of Veterans' Appeals—a separate VA agency that provides a higher level of appeals review—took on average 7 years to resolve. The Veterans Appeals Improvement and Modernization Act of 2017 makes changes to VA's current (legacy) process, giving veterans options to have their claims reviewed by VA or to appeal directly to the Board. The Act requires VA to submit to Congress and GAO a plan for implementing a new appeals process (which VA submitted in November 2017) and periodic progress reports (which VA submitted in February, May, August, and November 2018). The Act also includes a provision for GAO to assess VA's original plan. In March 2018, GAO found that VA could help ensure successful implementation of appeals reform by addressing gaps in planning and made four recommendations, with which VA agreed. This testimony focuses on the steps VA has taken to address GAO's recommendations, what aspects remain unaddressed, and the risks these gaps pose for implementation. For this statement, GAO reviewed VA's updated plans, assessed VA's schedules against best practices, interviewed VA officials, and reviewed information they provided about steps taken to implement GAO's recommendations. What GAO Found In a March 2018 report, GAO made four recommendations to address planning gaps in the Department of Veterans Affairs' (VA) November 2017 plan for changing its appeals process for disability compensation claims. Since then, VA has updated its appeals reform plan and taken steps to address aspects of these recommendations, but further steps could enhance its readiness for implementation: Address all legally required elements.
VA's November 2017 plan did not address one and only partially addressed four of 22 elements required by the Veterans Appeals Improvement and Modernization Act of 2017 (Act); GAO recommended VA fully address all 22. As of November 2018, VA addressed one element related to projecting productivity and took steps to partially address the other four. VA is still missing information the agency needs to certify that it has the resources needed to successfully implement appeals reform. Articulate plans for performance monitoring and assessment. GAO recommended VA clearly articulate how it will monitor and assess the new appeals process relative to the legacy process, including, for example, specifying timeliness goals for the five new appeals options, and measures for decision accuracy in processing appeals. As of November 2018, VA officials stated their intention to use productivity, timeliness, accuracy, and veteran satisfaction metrics to assess the new versus the legacy appeals processes. However, VA has yet to specify a complete set of goals or measures for monitoring and assessing the relative efficacy of the new process or articulate detailed steps and timeframes for establishing them. Augment master schedule. GAO recommended VA augment its master schedule for appeals reform to reflect sound practices for guiding implementation of reform. Although VA's updated schedule reflected progress since VA's original 2017 plan, it still did not fully meet sound practices for project management. For example, the schedule does not appropriately define the work, activities, and resources necessary to accomplish appeals reform implementation. Without following sound practices, it is unclear whether the schedule poses risks to successful implementation of appeals reform. Address risk fully. GAO recommended that VA's plan more fully address risks in implementing a new appeals process by, for example, testing all appeals options prior to full implementation.
As of November 2018, VA took many steps to address risks, although opportunities exist to better assess them. For example, although VA has used lessons learned from tests to update the implementation process, it has not fully tested all aspects nor has it developed mitigation strategies for all identified risks, such as veterans appealing to the Board at higher rates than expected. Until VA takes these remaining steps, it may not have sufficiently accounted for key risks in implementing the new process.
Background Detecting IDT Refund Fraud Is Challenging Over the past decade, our prior work has highlighted the evolving nature of individual identity theft (IDT) refund fraud and the challenges IRS faces in keeping up with fraudsters’ tactics. Since 2015, our biennial High-Risk Report has highlighted the challenges associated with IDT refund fraud, the actions IRS needs to take to address them, and the cybersecurity issue of protecting personally identifiable information (PII) amid large-scale data breaches. These challenges are relevant to business IDT and are further compounded by the complexity of the business tax environment. According to IRS officials, this complexity stems, in part, from the number of business types or structures, the various taxes that businesses pay, and the different tax forms businesses must file. Further, many businesses file tax returns throughout the year, unlike individual taxpayers, who generally file income tax returns once a year. These factors make detecting, researching, and resolving potential business IDT cases more challenging than individual IDT cases. When establishing a business, a business owner must determine the structure of the business for tax purposes, among other things, and may link business entities together in networks with multiple tiers. In addition, unlike individuals, businesses are required to pay different types of taxes depending on the business structure. For example, C corporations and S corporations pay income tax, and may also pay employment taxes and excise taxes on certain products and services such as fuel. Businesses are required to file different forms for each type of tax and may also file forms to claim various tax credits. Table 1 provides examples of business types and associated tax forms, volume, and total refunds for fiscal year 2018.
IRS officials said that the complexity of the business tax environment makes it difficult for tax examiners to distinguish between true business IDT and frivolous tax arguments or noncompliance, such as incorrect or missing information on a form. Officials also noted that fraudsters may be attracted to the potentially large payout associated with business tax refunds. According to IRS data, the average 2018 tax refund for corporations was about $286,200 and about $24,700 for estates and trusts. In contrast, the IRS Data Book, 2018 reports that the average individual tax refund was about $2,900. Further, business IDT may also lead to other types of tax fraud. In addition to filing false business returns seeking a refund, fraudsters may use stolen Employer Identification Numbers (EIN) and business information to support an individual income tax refund scheme. For example, fraudsters may file fraudulent Forms W-2, Wage and Tax Statement, with information on fictitious employees. These forms could then be used to file fraudulent individual tax returns seeking refunds. Business IDT Can Occur in Two Ways According to IRS, there are two ways a fraudster can commit business IDT, both of which involve the fraudulent use of an EIN. 1. Obtain an existing EIN. In this scenario, a fraudster obtains federal tax information from an existing business (see fig. 1). The business may be active or dormant, meaning that the business owner has not filed a tax return for at least two tax periods. The fraudster then uses the EIN and other key business information to file a fraudulent business return, such as Form 1120. 2. Fabricate an EIN. In this scenario, a fraudster steals the identifying information of an individual, such as a Social Security number, and uses it to apply for an EIN. The fraudster would then use the fabricated EIN to complete and file false business returns.
Federal Agencies Are Required to Identify, Assess, and Manage Fraud Risks In June 2016, Congress passed and the President signed into law the Fraud Reduction and Data Analytics Act of 2015 (FRDAA), which created requirements for agencies to establish financial and administrative controls for managing fraud risks. These requirements are aligned with leading practices outlined in our Fraud Risk Framework. In addition, guidance from OMB affirms that managers should adhere to the leading practices identified in the framework. The Fraud Risk Framework provides key components and leading practices for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The framework consists of four primary components of fraud risk management: commit, assess, design and implement, and evaluate and adapt, as shown in figure 2. Specifically, the components call for agencies to (1) commit to combatting fraud by creating an organizational culture conducive to fraud risk management, (2) plan regular fraud risk assessments and assess risks to determine a fraud risk profile, (3) design and implement a strategy with specific control activities to mitigate assessed fraud risks, and (4) evaluate outcomes using a risk-based approach and adapt activities to improve fraud risk management. According to the Fraud Risk Framework, the four components are interdependent and mutually reinforcing. For example, fraud response efforts can inform preventive activities, such as using the results of investigations to enhance fraud detection efforts. We have previously reported that preventive activities generally offer the most cost-efficient use of resources, since they enable managers to avoid a costly and inefficient “pay-and-chase” model. The framework also reflects ongoing activities for monitoring and feedback that apply to all four components. 
IRS Uses Fraud Filters and Collaborates with External Partners to Detect Business IDT IRS uses computerized checks, or fraud filters, to screen incoming tax returns for known or suspected characteristics of fraud. As of September 2019, IRS had implemented 19 unique fraud filters that assess incoming returns on certain business and employment tax forms. These fraud filters help IRS determine if an incoming return exhibits suspicious characteristics. IRS also cross-references these returns against lists of taxpayer identification numbers previously involved in data breaches and at greater risk of tax-related identity theft. IRS officials stated that they plan to implement additional fraud filters for three employment tax forms for the 2020 filing season. Our analysis of IRS’s data shows that from January 2017 to August 2019, business IDT fraud filters stopped about 188,500 incoming business returns as potential IDT, claiming $47.6 billion in refunds. Of these, IRS performed in-depth research on about 182,700 returns claiming $47.3 billion in refunds. IRS determined that about 77 percent of these cases (140,100 cases) claiming $38.3 billion in refunds were not business IDT while about 4 percent (7,900 cases) were confirmed business IDT claiming $384 million in fraudulent refunds. The remaining cases were still under review as of August 2019. However, as we discuss later in this report, these estimates do not capture the full size and scope of business IDT. In addition to developing fraud filters, IRS has established more advanced fraud detection efforts through the Return Review Program (RRP). As of September 2019, IRS was developing and testing fraud detection models in RRP for certain business tax forms. IRS officials said they intend to develop additional models, such as those to address fuel tax credit fraud and entity fabrication. 
Officials also noted that they will continue to rely on fraud filters to detect potentially fraudulent business returns, even after expanding RRP’s functionality. Further, IRS’s broader fraud detection efforts include working with external partners. For example, IRS collaborates with states and industry partners through the Security Summit Business IDT sub-workgroup. This group has identified business-related data elements that are captured during the tax filing process and analyzed for potential suspicious patterns that could indicate business IDT. During the 2018 filing season, IRS analyzed 37 data elements from incoming business tax returns and 10 data elements on incoming employment tax returns, including, for example, characteristics of the computer used to submit the return. IRS officials also stated that they are working directly with tax practitioners to help improve the quality of the data they collect to better inform future business IDT fraud filters and models. In addition, in December 2017, IRS initiated a pilot project with the Alabama Department of Labor to help detect and prevent business IDT. IRS officials stated that they send the department a data extract on all newly issued EINs from the prior month. The state performs research on these businesses and, in turn, sends IRS a list of businesses that it has determined to be fraudulent. As a result, IRS is able to deactivate the fraudulent EINs before the fraudster files a false business, employment, or individual tax return claiming a refund. This allows IRS to reject returns associated with the fraudulent EINs. According to IRS data, in 2018 IRS identified about 3 percent (1,343 out of 53,826) of new EINs in Alabama as fraudulent. The early results of this collaborative effort indicate that this project shows promise, and IRS officials stated that they are working to determine if they can expand the initiative to other states. 
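As a quick arithmetic check of the filter results reported above, the case-disposition shares can be reproduced from the underlying counts (the still-under-review figure is derived from the other counts, not separately reported by IRS):

```python
# Reproduce the reported shares of researched business IDT cases
# (January 2017 to August 2019 figures, as reported above).
researched = 182_700   # returns IRS researched in depth
not_idt = 140_100      # determined not to be business IDT
confirmed = 7_900      # confirmed business IDT

print(f"not IDT: {not_idt / researched:.0%}")          # about 77 percent
print(f"confirmed IDT: {confirmed / researched:.0%}")  # about 4 percent
print(f"still under review: {researched - not_idt - confirmed:,}")
```

The remaining roughly 19 percent of researched cases were still open as of August 2019, consistent with the report's note that these figures do not capture the full size and scope of business IDT.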
IRS Has Taken Some Steps to Identify Business IDT Risks, but Efforts Are Not Fully Aligned with Selected Fraud Risk Management Leading Practices

IRS Has Developed an Organizational Culture to Help Combat Fraud, but Lacks a Designated Entity to Oversee Business IDT Efforts

One component of our Fraud Risk Framework calls for agencies to create an organizational culture conducive to combating fraud. Such a culture can be created through “tone at the top,” whereby senior-level staff demonstrate commitment to integrity and combating fraud, and actions that involve all levels of the agency in setting an antifraud tone that permeates the organization. In addition, the Fraud Risk Framework calls for agencies to designate an entity to lead fraud risk management activities. Among other things, the designated entity should have defined responsibilities and the necessary authority to perform its role, including managing a fraud risk assessment process and coordinating antifraud activities across the program. Our prior work has shown that when agencies formally designate an entity to design and oversee fraud risk management activities, their efforts can be more visible across the agency, particularly to executive leadership. Consistent with the Fraud Risk Framework, IRS leadership has demonstrated a commitment to identifying and combating overall IDT refund fraud. For example, the agency has recognized the broad and evolving challenge of IDT refund fraud in its fiscal year 2018–2022 strategic plan. Also, as previously discussed, IRS has expanded its fraud detection activities to prevent payment of fraudulent refunds, including refunds on business-related returns. In addition, our 2019 High-Risk Report noted that IRS took significant actions to facilitate information sharing with states and industry partners through the Identity Theft Tax Refund Fraud Information Sharing and Analysis Center.
Further, IRS has implemented agency-wide antifraud efforts, including bringing officials together from across the organization to discuss potential fraud risks. These efforts have helped to foster an antifraud tone across IRS, according to IRS officials. At the business unit level, four IRS entities have responsibility for detecting, preventing, and resolving business IDT, as described below. However, IRS has not designated a lead entity to design and oversee business IDT fraud risk management activities across the agency, including a fraud risk assessment, consistent with leading practices. During our interviews with IRS, we found that IRS officials were knowledgeable about the business IDT policies, processes, and outcomes in their individual unit. However, none of the entities has defined responsibilities and the necessary authority to manage fraud risk across the business units. Further, no one we spoke with could articulate an agency-wide view of the problem and its potential impact on IRS. Return Integrity and Compliance Services (RICS) is responsible for detecting potential fraud on incoming business tax returns during the “pre-refund” phase (i.e., the period from when IRS accepts the return but before it issues a refund). About 20 RICS and Integrity and Verification Operations tax examiners are responsible for researching taxpayer accounts to confirm whether or not business IDT occurred. Tax examiners are also responsible for resolving cases to both prevent IRS from paying out fraudulent refunds and ensure that legitimate taxpayers’ returns are released for processing. RICS refers cases to other IRS units if the case shows other signs of fraud, such as a frivolous return. Accounts Management (AM) is responsible for researching and resolving potential business IDT cases identified during the “post- refund” phase (i.e., after a refund has been paid). 
AM customer service representatives perform in-depth account research and work with taxpayers to determine if business IDT has occurred. In cases of confirmed business IDT, AM corrects related account errors and enters appropriate IDT markers on the taxpayer’s account. According to IRS officials, about five AM staff work on business IDT cases one day a week or as needed.

Criminal Investigation (CI) investigates large-scale tax schemes and other financial fraud, including fraud related to IDT.

Office of Research, Applied Analytics and Statistics (RAAS) is responsible for supporting RICS and other business units in identifying and developing various business IDT fraud detection capabilities. RAAS also performs analyses to help IRS determine how best to proceed with other fraud detection and prevention efforts.

IRS officials stated that representatives from the four business units meet regularly to share information on cases and discuss challenges. Further, IRS officials stated that the IDT Executive Steering Committee—which last met in October 2018—is responsible for providing general oversight and guidance to business units working on IDT-related efforts. However, our review of several sets of Committee meeting minutes indicates that while RICS has briefed committee members on the status of various business IDT efforts, they have not specifically discussed business IDT program priorities, potential fraud risks, or resources. When asked why IRS has not designated an entity to be responsible for overseeing business IDT fraud risk efforts, IRS officials said its business IDT efforts may not require additional oversight because they are significantly smaller than IRS’s individual IDT efforts in terms of both case volume and number of employees. They also said that the business IDT efforts are relatively new. However, with no more than 30 IRS employees working on business IDT issues, each business unit is mainly focused on day-to-day operations.
The absence of an entity to lead business IDT fraud risk efforts may contribute to the issues we identify later in this report related to identifying and assessing business IDT fraud risks consistent with leading practices and delays in resolving business IDT cases. The Fraud Risk Framework’s leading practices provide flexibility in structuring the designated entity to best support an agency’s fraud risk management efforts. For example, leading practices note that the designated entity could be an individual or a team, and can vary depending on factors like existing organizational structures and expertise within the agency. In addition, employees across an agency or program, as well as external entities, can be responsible for the actual implementation of fraud controls. For example, IRS could designate one business unit as a lead entity, or leverage existing cooperative relationships between RICS, AM, CI, and RAAS to establish a business IDT leadership team with defined responsibilities and authority for managing fraud risk. A lead entity could help provide a strategic direction, coordination across business units, and oversight for managing IRS’s business IDT fraud risks. Further, without a designated entity, it is not clear which entity would be responsible for assessing business IDT risks and documenting the results, consistent with leading practices. These activities are important to combat the evolving threat of business IDT.

IRS Has Not Developed a Business IDT Fraud Risk Profile

IRS Has Not Developed a Fraud Risk Profile Based on Assessed Business IDT Risks

The Fraud Risk Framework calls for agencies to regularly plan and perform fraud risk assessments to determine a risk profile.
Fraud risk assessments that align with the Fraud Risk Framework involve (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and impact of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the results (see fig. 3). Such a risk assessment provides the detailed information and insights needed to create a fraud risk profile, which, in turn, is the basis for creating an antifraud strategy for the program. IRS has taken preliminary steps to understand fraud risks associated with business IDT through data analysis efforts and internal discussions with subject matter experts. However, IRS has not fully identified and assessed fraud risks to business IDT consistent with leading practices. These practices include identifying and assessing the likelihood of inherent fraud risks, determining a fraud risk tolerance, and examining the suitability of existing fraud controls to determine if they appropriately address identified risks. IRS business units use current and prior year tax return data and information on known business IDT threats to improve existing fraud detection efforts and develop new efforts. For example, RICS and RAAS officials stated that they regularly collaborate to discuss the feasibility of new fraud filters and identify and prioritize analyses on business IDT data. This effort has resulted in IRS business units identifying 38 discrete projects to, for example, analyze existing fraud filter performance and understand business tax return filing behaviors. RICS officials stated they typically identify two to three projects to begin each year, resources permitting. 
In addition, IRS officials stated that at the end of each filing season, they review and analyze confirmed business IDT cases to identify any new patterns or trends that may be useful for enhancing existing fraud filters and developing fraud detection models in RRP. Further, RAAS has performed ad hoc data analyses, such as on the characteristics of fabricated entities, to help understand potential risks to the business tax environment. While these are positive steps, IRS has not assessed business IDT fraud risks consistent with leading practices in the Fraud Risk Framework. For example, IRS has not identified and documented inherent fraud risks in the business tax environment, or assessed the likelihood of their occurrence and impact on IRS—the first two steps of a fraud risk assessment process. Further, our review of past GAO, Treasury Inspector General for Tax Administration (TIGTA), and National Taxpayer Advocate reports identified issues that pose inherent risks to IRS’s business IDT efforts. These risks include weaknesses with correspondence-based authentication, EIN vulnerabilities, and the high false detection rates for IDT fraud filters. We consider these to be inherent risks due to the complex nature of the business tax environment and IRS management’s overall limited response to them.

Weaknesses with correspondence-based authentication. To help verify whether a suspicious business tax return is legitimate, IRS’s business IDT procedures rely on correspondence-based authentication. This involves the taxpayer answering several brief, written questions about the business and sending this information to IRS via mail. IRS officials stated that they believe correspondence-based authentication is no less secure than other forms of authentication, such as having business owners verify their identity in person at a Taxpayer Assistance Center or authenticating via telephone.
However, unlike other forms of authentication, correspondence-based authentication is inherently less secure because it may not require the taxpayer to verify their identity using a government-issued form of identification. Consequently, IRS has less assurance that the person is the actual business owner and the return in question is legitimate. In June 2018, we reported that IRS had not performed risk assessments to identify, assess, and mitigate risks associated with correspondence-based authentication because it did not have a policy that requires regular assessments and timely mitigation of identified issues. Therefore, without a policy for conducting risk assessments for correspondence-based authentication and a plan for performing an assessment, IRS may underestimate known risks and overlook emerging threats to the tax environment. We recommended that IRS establish a policy for conducting such risk assessments and develop a plan for performing them. IRS agreed with our recommendations and, as of November 2019, had developed a draft policy for conducting risk assessments. However, IRS had not yet developed a plan for performing these assessments. IRS officials stated that they intend to address these recommendations by May 2020.

EIN vulnerabilities. In February 2018, TIGTA identified concerns with IRS’s EIN application process and made 18 recommendations, including that IRS improve processes to ensure that the applicant meets the requirements for obtaining an EIN and implement policies to help detect potential abuse of the online EIN application system. IRS agreed with 15 of TIGTA’s recommendations and, as of September 2019, IRS reported that it had addressed 11 recommendations. The four unaddressed recommendations aim to improve data collection and validation in the EIN system, which could help IRS identify suspicious applications. IRS officials stated that these improvements are on hold due to limited resources and competing priorities.
In addition, characteristics of the EIN may make it inherently risky and susceptible to fraudsters. According to IRS, a business’s EIN is not considered PII and is not required to be protected like a Social Security number. This may make it easier for a fraudster to obtain an existing EIN and file a fraudulent business tax return. In addition, we have previously reported that fraudsters may target paid preparers, tax software providers, and other third parties to steal taxpayer data to commit IDT refund fraud or other types of financial crimes. These data may include existing EINs or the necessary information to obtain a new EIN, making it easier for fraudsters to file fake business returns. IRS officials stated that they recognize the potential risk of the EIN application process, but must balance the needs of legitimate businesses against IRS’s responsibility to detect and prevent fraud. Officials noted that they have security measures in place to detect potentially suspicious activity in the online EIN application and fraud filters to detect when taxpayers file a return with a dormant EIN. A fraud risk assessment consistent with leading practices would help IRS establish a risk tolerance for the EIN process and determine if its existing fraud controls are sufficient to address the vulnerabilities inherent to the EIN application process.

High false detection rates for IDT fraud filters. The National Taxpayer Advocate’s 2018 Annual Report to Congress noted that one of IRS’s most serious problems is a high false detection rate in its fraud detection systems. In general, the false detection rate is the number of legitimate returns selected by the IRS as potentially fraudulent, divided by the total number of returns selected as potentially fraudulent. The National Taxpayer Advocate noted that IRS’s false positive rate for individual IDT filters was 63 percent in 2018.
The high rate contributed to increased processing times and delays in issuing refunds for legitimate returns. It also created additional work for IRS. Similarly, our data analysis of BMFIC data shows that IRS’s business IDT fraud filters had about an 85 percent false detection rate for returns screened by fraud filters from mid-January 2017 to December 2018. In September 2019, IRS officials described several factors contributing to the high false detection rate for business IDT fraud filters. These factors include taxpayers and tax preparers failing to update key information with IRS, cross-referenced fraud filters triggering other filters, and changes in taxpayer filing behaviors due to new tax laws. The officials said they are working to reduce the false detection rate. While it is reasonable to expect fraud filters will catch some legitimate returns, IRS has not conducted a risk assessment—or developed a fraud risk tolerance—consistent with leading practices. Determining a fraud risk tolerance would help officials determine how best to balance the risks of missing fraudulent returns with the risks of flagging legitimate returns. Doing so may also help IRS prioritize any needed improvements to existing filters. According to the Fraud Risk Framework, a fraud risk assessment is the basis for developing an antifraud strategy. Among other things, an antifraud strategy considers the benefits and costs of control activities to address risks, such as the inherent business IDT risks described above, and other risks facing the program. As of July 2019, IRS’s Wage and Investment division had identified the overall threat of business IDT as one of 12 risks it is currently facing. However, IRS’s risk documentation does not include important components of a fraud risk assessment consistent with GAO’s Fraud Risk Framework.
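The false detection rate described above reduces to a simple ratio. The sketch below illustrates the calculation with hypothetical figures (not actual IRS or BMFIC data); the helper function name is our own:

```python
def false_detection_rate(legitimate_flagged: int, total_flagged: int) -> float:
    """False detection rate: legitimate returns selected as potentially
    fraudulent, divided by all returns selected as potentially fraudulent."""
    if total_flagged == 0:
        raise ValueError("no returns were flagged")
    return legitimate_flagged / total_flagged

# Hypothetical filter results: of 1,000 returns flagged by fraud filters,
# 850 later proved to be legitimate.
rate = false_detection_rate(legitimate_flagged=850, total_flagged=1_000)
print(f"false detection rate: {rate:.0%}")  # prints "false detection rate: 85%"
```

Lowering this rate means fewer legitimate returns held for review, but a rate of zero is not a realistic target; a risk tolerance sets where on that trade-off the program should sit.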
Specifically, the documentation does not include information on the likelihood or impact of each risk, IRS’s risk tolerance, or clear plans or responsibilities for mitigating risks. A business IDT fraud risk assessment with these key items would position IRS to develop a fraud risk profile and an antifraud strategy for business IDT going forward. In addition, officials from IRS’s Office of the Chief Risk Officer stated that consistent with the Fraud Reduction and Data Analytics Act of 2015 (FRDAA), the agency compiles an annual enterprise-wide fraud risk report based on program-level risks that IRS business units identify and monitor. The Office of the Chief Risk Officer’s October 2019 report acknowledges business IDT as one of 11 enterprise fraud risks for 2019–2020. A fraud risk assessment and a fraud risk profile on business IDT consistent with leading practices would also help support IRS’s broader efforts to report and monitor enterprise-wide fraud risks. IRS officials stated that they have not performed a formal fraud risk assessment or developed a fraud risk profile for business IDT because they have directed their resources toward identifying and addressing fraud that is occurring right now and improving fraud detection efforts. When asked whether they had plans to further identify and assess inherent fraud risks for business IDT—the first step of the fraud risk assessment process—IRS officials said they thought that the costs of identifying and assessing inherent risks of business IDT would likely outweigh the benefits given the relatively low volume of confirmed business IDT cases, compared with individual IDT refund fraud. Without assessing inherent risks, determining the likelihood, impact, and IRS’s tolerance for each risk, and examining the suitability of existing fraud controls, IRS lacks reasonable assurance that it is aware of the most significant fraud risks facing business IDT.
Such an analysis would also help IRS determine whether additional fraud controls are needed and whether to make adjustments to existing controls. Further, without this critical information, IRS will be unable to develop a fraud risk profile consistent with leading practices. A fraud risk profile for business IDT may help IRS make better informed decisions about allocating resources to combat business IDT and minimize financial losses. Consistent with our Fraud Risk Framework, a fraud risk profile that considers the likelihood and impact of fraud risks, IRS’s tolerance for risk, and the suitability of existing fraud detection activities is critical for developing an antifraud strategy and ensuring that IRS has an effective approach to addressing risks to business IDT.

Collecting Additional Data Could Help IRS Estimate the Size and Scope of Business IDT

The Fraud Risk Framework states that managers may conduct quantitative or qualitative assessments, or both, to help determine the likelihood and impact of inherent fraud risks on the program’s objectives and help estimate fraud losses and frequency. Further, federal internal control standards call for program managers to use quality information to achieve their objectives, address relevant risks, and communicate that information as necessary to internal and external stakeholders. As of September 2019, IRS was collecting fraud filter data for some, but not all, business-related forms that may be susceptible to business IDT. Our analysis of IRS’s data shows that for 2018, business IDT fraud filters covered about 88 percent of business tax forms claiming a refund (14.0 million out of 15.9 million returns) and nearly all employment tax forms claiming a refund (30.7 million out of 31.0 million returns). IRS officials stated that since 2016, they have incrementally implemented business IDT fraud filters for the most commonly filed forms.
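The coverage percentages above follow directly from dividing refund-claiming returns on filter-covered forms by the total. A minimal sketch using the 2018 counts reported here:

```python
# Refund-claiming return counts from the 2018 analysis above.
business_total = 15_900_000
business_covered = 14_000_000    # returns on forms with business IDT fraud filters
employment_total = 31_000_000
employment_covered = 30_700_000

print(f"business form coverage: {business_covered / business_total:.0%}")     # 88%
print(f"employment form coverage: {employment_covered / employment_total:.0%}")  # 99%
```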
We recognize that IRS has made progress in implementing filters for commonly filed forms and that the deceptive nature of fraud makes developing accurate fraud estimates challenging. However, our analysis shows that IRS has not developed business IDT fraud filters for at least 25 additional business-related tax forms. In 2018, these forms represented about $10.4 billion in refunds. As a result, IRS is not able to analyze data from these forms for emerging fraud patterns or schemes. Further, while current business IDT fraud filters cover the most commonly filed forms, IRS has not assessed which remaining forms or fraud scenarios pose the greatest risk to IRS and taxpayers. IRS also has not determined a risk tolerance for existing fraud filters, and whether the benefits of expanding existing filters outweigh the risks of flagging legitimate returns. Given the complexity of business tax forms and the evolving nature of fraud schemes, IRS’s existing fraud filters may not be sufficient to detect different business IDT scenarios. For example, IRS has implemented two fraud filters related to business tax credits, but they are each limited to a specific scenario. TIGTA has previously reported that tax credit forms have been found to be attractive to fraudsters. For example, in 2015, TIGTA reported that fraudsters have targeted individual tax credits when filing a fraudulent tax return to increase their refund. In September 2019, TIGTA reported that IRS lacked systematic controls to identify or prevent fraudulent use of an electric motor vehicle tax credit which is available to individuals and businesses. Without additional data on business IDT, IRS cannot estimate the full size and scope of this problem. As we have previously reported, IRS’s annual Identity Theft Taxonomy (Taxonomy) is a valuable tool to inventory, characterize, and analyze available individual IDT refund fraud data and to assess the performance of IRS’s individual IDT refund fraud defenses. 
Following each filing season, IRS estimates the volume of returns and associated dollar amounts on attempted and prevented individual IDT refund fraud, and on refunds it paid to fraudsters. While we recognize there may be differences in how IRS estimates the extent of individual versus business IDT, the Taxonomy is a useful framework to understand the data IRS needs to estimate the size and scope of business IDT. For example, the Taxonomy estimates the number of identified individual IDT refund fraud cases where IRS prevented or recovered the fraudulent refunds (e.g., returns caught by fraud filters or suspicious refunds returned by banks). In December 2018, IRS developed a draft plan for an initial business IDT taxonomy based on two business tax forms on which IRS has collected data since 2016. IRS officials stated that they intend to begin preliminary work on this effort in December 2019. However, these efforts will be limited until IRS collects additional data. IRS officials stated that they are committed to better understanding business IDT and expanding their fraud detection and data collection efforts. However, officials said that doing so depends on the availability of resources to develop and test new fraud filters prior to each filing season. IRS may address these constraints by, for example, determining which forms or fraud scenarios pose the greatest risk for business IDT based on a fraud risk assessment and profile. This would include determining a risk tolerance for business IDT on these forms and prioritizing new filters or filter enhancements based on its risk assessment. Having additional data to better estimate the size and scope of business IDT is critical in helping IRS understand how fraudsters are evading IRS defenses. Additionally, such data will help IRS identify unknown business IDT fraud risks, allocate limited resources, assess the suitability of its existing fraud control activities, and develop tools such as a business IDT taxonomy. 
Further information on the size and scope of business IDT could better position IRS to assess the risk of business IDT on tax administration and inform the Congress and the public about the risk.

IRS Has Procedures for Resolving Business IDT Cases, but Has Not Established Customer Service-Oriented Performance Goals

IRS has established procedures for resolving business IDT cases in its Internal Revenue Manual (IRM), and officials described general guidelines for resolving both pre-refund and post-refund business IDT cases. However, IRS does not resolve all cases within these guidelines due to various challenges, some of which IRS could potentially address, such as correspondence-based authentication, and others that are more difficult to address, such as the overall complexity of business IDT cases. In addition, we found that a lack of customer service-oriented performance goals for resolving cases may also contribute to delays. Key IRS documents highlight a commitment both to combating IDT refund fraud and to improving customer service for taxpayers by, for example, reducing case resolution time frames through new technologies. In addition, Office of Management and Budget guidance highlights that federal program and project managers have an obligation to ensure that their programs deliver efficient and effective services to the public. This includes assessing how well a program is working to achieve intended results, and delivering customer service to align with the program’s goals. Our review of IRS documentation found that business units have developed procedures to manage and resolve business IDT cases identified during different stages of the tax return process. For example, during the pre-refund stage, RICS notifies business taxpayers via mail if their return shows signs of potential IDT refund fraud and has been held for review.
Similarly, when a taxpayer notifies IRS about potential IDT refund fraud during the post-refund stage, Accounts Management (AM) may require the taxpayer to submit a form describing how and when the fraud occurred. IRS business units have also established procedures for conducting in-depth research on taxpayer accounts to determine if a case is business IDT or another type of fraud. However, RICS and AM have had some difficulty in resolving cases within their respective guidelines, as described below.

Pre-refund cases. According to IRS’s IRM and agency officials, pre-refund business IDT cases are generally to be resolved within 90 days. RICS officials stated that they aim to meet this guideline because it provides enough time to reach the correct taxpayer via mail and for the taxpayer to respond. However, RICS has been challenged in resolving cases within 90 days. Our analysis of pre-refund business IDT cases opened from mid-January 2017 through December 2018 shows that RICS did not meet this guideline for about 87 percent of cases, including open cases. RICS also took between 6 months and 2 years to resolve about 29 percent of cases (see fig. 4). Further, our analysis found that this delay was consistent across case outcomes. On average, RICS took 136 days to resolve cases of confirmed business IDT (7,248 cases) and 171 days to resolve cases determined not to be business IDT (58,279 cases). As of August 2019, IRS had not resolved 4,649 cases, which had been open for an average of 383 days. RICS officials identified several reasons for the delay in resolving pre-refund cases, including ones rooted in business IDT policies and procedures. Specifically, officials stated that communicating with the taxpayer via correspondence is the primary driver of delays in resolving cases. RICS officials stated that mail-based authentication generally takes more time because letters can get lost, thrown away, or not reach the right person.
RICS officials stated that in March 2018, they began making two attempts to correspond with a business with a potentially suspicious return before closing a case, rather than one attempt. RICS made this change because taxpayers were taking longer than 45 days to respond to the letter, often after RICS had closed the case as a nonresponse. Officials stated that while they are aware of IRS’s other methods of authenticating taxpayers for individual IDT refund fraud, such as by phone or in person, they have not explored similar options for the business IDT program. As we reported in June 2018, IRS uses a risk-based approach to determine the ways in which a taxpayer can authenticate his or her identity and what data are required during the authentication process. High-risk interactions include those in which a taxpayer accesses prior year tax information and other PII, while lower-risk interactions include a taxpayer paying a bill online. According to IRS officials, as the risk level of taxpayer interactions increases, the authentication process becomes more rigorous. This approach minimizes risk to both the taxpayer and IRS. In addition, officials identified other challenges that contribute to delays, including incorrect information on the business taxpayer’s account, nonresponses to authentication requests, and the complexity of business IDT cases, which may be more difficult to address. RICS officials noted that taxpayers do not always update the business’s responsible party with IRS when they sell or transfer a business to someone else. This can make it more difficult for IRS to contact the taxpayer when their return has been selected for review. RICS officials stated that IRS reminds business taxpayers to check and update their information each year to avoid unnecessary delays in processing tax returns; however, IRS does not require taxpayers to make updates.
IRS officials also stated that a business’s failure to respond to mail-based authentication requests contributes to case resolution delays. Finally, RICS officials noted that the inherent complexity of the business IDT environment may require RICS staff to research cases across multiple IRS business units or refer cases outside of RICS, which can contribute to delays.

Post-refund cases. Our review of AM procedures and discussions with officials indicate that post-refund business IDT cases are generally to be resolved within 6 months. AM officials stated they established this guideline for individual IDT refund fraud cases and extended it to business IDT cases when the program started in 2016. We analyzed post-refund cases that AM opened from July 2016 (when IRS began collecting data) through December 2018. We found that AM resolved about 84 percent of post-refund cases within 6 months. However, about 17 percent of these cases—including open cases—took more than 6 months to resolve (see fig. 5). Similar to RICS officials, AM officials cited several reasons for case resolution delays, including the complexity of the business tax environment and the need to research associated businesses, employment, and individual tax returns. AM officials also noted challenges inherent to the case research process, including that staff often pursue multiple lines of inquiry to determine a case outcome. This may involve referring cases to other business units if, for example, AM staff do not have access to a specific IRS system to complete their research. Finally, AM officials stated that AM staff do not always recognize business IDT cases and may initially classify them as an individual IDT case, which results in delays. To help address this issue, AM officials stated that management periodically reviews business IDT operations, and provides refresher training in areas where staff did not follow procedures consistently.
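Timeliness analyses like those above amount to measuring each case's resolution time against the applicable guideline (90 days for pre-refund, 6 months for post-refund). A minimal sketch with hypothetical case dates (not actual IRS case data):

```python
from datetime import date

# Hypothetical pre-refund case (opened, closed) dates, measured against
# the 90-day resolution guideline described above.
cases = [
    (date(2017, 2, 1), date(2017, 4, 15)),
    (date(2017, 3, 10), date(2017, 9, 1)),
    (date(2018, 1, 5), date(2018, 7, 20)),
]

resolution_days = [(closed - opened).days for opened, closed in cases]
over_guideline = sum(1 for days in resolution_days if days > 90)
avg_days = sum(resolution_days) / len(resolution_days)
print(f"average days to resolve: {avg_days:.0f}")
print(f"cases over 90-day guideline: {over_guideline} of {len(cases)}")
```

The same calculation, with a 6-month (roughly 183-day) threshold, yields the post-refund shares discussed above.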
While RICS and AM officials have stated that they have general guidelines for resolving business IDT cases, they have not established customer service-oriented performance goals. We have previously found that a fundamental element in an organization’s efforts to manage for results is its ability to set meaningful goals for performance, including customer service standards, and to measure progress toward those goals. Standards that include customer service-oriented performance targets or goals allow agencies to define, among other things, the level, quality, and timeliness of the service they provide to their customers. In the context of IRS’s business IDT efforts, a customer service-oriented goal could be, for example, to resolve a certain percentage of cases within a specific time frame. This is particularly important for IRS because one of its strategic goals is to empower customers to meet their tax obligations by providing exceptional customer service. Identifying and implementing methods to address challenges that IRS can control—such as reliance on correspondence-based authentication—could help IRS improve its timeliness in resolving business IDT cases and address its overall strategic objective to reduce case resolution time frames. It is also consistent with OMB guidance to deliver efficient and effective services to the public. Further, establishing customer service-oriented performance goals could help IRS measure progress, identify opportunities for improvement, and communicate reasonable time frames for resolving cases to taxpayers. Case resolution performance goals may also help reduce costs to the Treasury. Specifically, IRS has a legal obligation to pay interest on refunds issued after 45 days from the due date of the tax return. This requirement includes incoming tax returns that IRS holds for review for potential business IDT but then later releases for processing.
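To illustrate why the 45-day interest obligation makes long case holds costly, a simplified sketch follows. The 5 percent rate and the daily simple-interest formula are assumptions for illustration only; actual IRS refund interest rates are set quarterly and the statutory computation compounds daily.

```python
# Simplified refund-interest sketch, NOT the statutory IRS computation:
# assumes a flat 5% annual rate and daily simple interest for illustration.
refund_amount = 10_000.00
assumed_annual_rate = 0.05
days_past_window = 120  # days beyond the 45-day interest-free period

interest_owed = refund_amount * assumed_annual_rate * days_past_window / 365
print(f"illustrative interest owed: ${interest_owed:.2f}")  # $164.38
```

Under these assumptions, every additional month a legitimate refund is held adds roughly $40 of interest on a $10,000 refund, which is why case resolution goals can reduce costs to the Treasury.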
Specific and relevant performance goals for both pre-refund and post-refund cases may help IRS balance its efforts to protect revenue against the burden on legitimate taxpayers and additional costs to the Treasury.

Conclusions

IRS has recognized business IDT as a growing threat to both taxpayers and tax administration. The complexity of the business tax environment—including different business types and taxes that businesses must pay—makes detecting, researching, and resolving potential business IDT cases more challenging for IRS compared with individual IDT cases. IRS has taken important steps to prevent business IDT, including using fraud filters to screen incoming business returns on selected forms and collaborating with state and industry partners to identify and respond to potentially suspicious activity. IRS leadership has demonstrated an overall commitment to identifying and combating IDT refund fraud. However, IRS has not designated a lead entity to design and oversee business IDT fraud risk management activities consistent with leading practices. A lead entity could also help IRS ensure its business IDT activities are better coordinated to combat the evolving threat of business IDT. Further, while IRS has taken some steps to understand business IDT fraud risks, it has not developed a fraud risk profile based on an assessment of inherent risks, the likelihood and impact of risks, IRS’s risk tolerance, and an evaluation of existing fraud controls. Assessing inherent fraud risks, such as those that we highlighted—correspondence-based authentication, vulnerability of EINs, and a high false detection rate for IDT fraud filters—would help IRS to establish a fraud risk tolerance and form the basis for an antifraud strategy. IRS has made progress in detecting and preventing business IDT by implementing fraud filters and collecting data on six business-related tax forms.
However, without a risk profile, IRS does not have assurance that its existing filters mitigate inherent risks. For example, risks may also be associated with at least 25 other tax forms, and IRS has not determined which forms or fraud scenarios pose the greatest risk to IRS and taxpayers based on an analysis of risk. Collecting additional data by implementing new fraud filters would better position IRS to estimate the full size and scope of business IDT. IRS’s planning documents articulate a commitment to reducing case resolution time frames and improving customer service, but RICS and AM have been delayed in resolving business IDT cases due to various challenges. Identifying and implementing ways to address the challenges IRS can control, such as its methods for taxpayer authentication, and establishing customer service-oriented case resolution performance goals could help IRS better serve taxpayers and minimize additional costs to the Treasury. Recommendations for Executive Action We are making the following six recommendations to IRS: The Commissioner of Internal Revenue should designate a dedicated entity to provide oversight of agency-wide efforts to detect, prevent, and resolve business IDT, consistent with leading practices. This may involve designating one business unit as a lead entity or leveraging cooperative relationships between business units to establish a business IDT leadership team. This entity should have defined responsibilities and authority for managing fraud risk. (Recommendation 1) The Commissioner of Internal Revenue should develop a fraud risk profile for business IDT that aligns with leading practices. This should include (1) identifying inherent fraud risks of business IDT, (2) assessing the likelihood and impact of inherent fraud risks, (3) determining fraud risk tolerance, and (4) examining the suitability of existing fraud controls. 
(Recommendation 2) The Commissioner of Internal Revenue should develop, document, and implement a strategy for addressing fraud risks that will be identified in its fraud risk profile. (Recommendation 3) The Commissioner of Internal Revenue should ensure that IRS collects additional data on business IDT by identifying and implementing new fraud filters consistent with its fraud risk profile. This should include prioritizing IDT filters for tax forms determined to be most at risk based on an analysis of risk tolerances. (Recommendation 4) The Commissioner of Internal Revenue should identify and implement methods to address delays in resolving business IDT cases due to correspondence-based authentication. This could involve using different methods for taxpayer authentication based on the risk level of the return. (Recommendation 5) The Commissioner of Internal Revenue should establish customer service-oriented performance goals for resolving business IDT cases. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of this report to IRS for review and comment. In written comments, which are summarized below and reproduced in appendix II, IRS’s Deputy Commissioner for Services and Enforcement agreed with five of our six recommendations and neither agreed nor disagreed with one of our recommendations. IRS agreed with our four recommendations to better identify, assess, and manage business IDT fraud risks consistent with leading practices in our Fraud Risk Framework. IRS agreed to designate a dedicated entity to provide oversight of agency-wide business IDT efforts and stated that it will determine the appropriate oversight structure and scope of authority. IRS also agreed with our recommendations to, consistent with leading practices, develop a business IDT fraud risk profile; develop, document, and implement a strategy for addressing fraud risks; and implement and prioritize new fraud filters consistent with its fraud risk profile. 
IRS did not provide details on the actions it plans to take to address these recommendations. In its written comments, IRS stated that formally implementing leading practices in the Fraud Risk Framework may be helpful, but noted that it has consistently completed business IDT fraud risk assessments and developed risk profiles. However, during our review, IRS did not provide evidence that it had taken such actions. Figure 3 in our report outlines leading practices for performing a fraud risk assessment and developing a risk profile. For example, regarding the leading practice to identify and assess inherent fraud risks, IRS stated that it has found that the risks associated with in-person or telephone authentication are higher for business IDT than correspondence-based authentication. However, we could not verify this assertion, as IRS did not provide evidence during our audit that it had assessed the risks of different authentication options for business taxpayers. Further, IRS stated that our report does not acknowledge that multiple individuals may be authorized to act on behalf of a business, including authenticating a potentially suspicious tax return. We have added this information to our report. IRS also stated that our report implies that it would be acceptable for a percentage of potentially fraudulent returns to be filed, unchecked, solely to reduce false detections or business costs. However, as we indicate in our report, fraud risk tolerance does not mean IRS management tolerates fraud, or that it needs to eliminate controls to detect and prevent fraud. Rather, it means that IRS management accepts a certain amount of risk, based on its assessment of the likelihood and impact of the fraud. Determining a fraud risk tolerance would help IRS management establish appropriate and cost-effective controls that are commensurate with the fraud risk. 
Relatedly, we agree with IRS’s statement that IDT victims suffer significant financial, social, and emotional hardships. We have updated the report’s introduction to acknowledge these hardships. In addition, IRS stated that its work on business IDT filters is more robust than stated in our report. Our report recognizes various IRS efforts to improve business IDT fraud detection and prevention, including efforts to refine its fraud filters. However, having fraud filters does not preclude IRS from identifying and assessing other potential fraud risks. Further, IRS cannot accurately determine the suitability of its business IDT filters—or other controls—without first identifying inherent fraud risks, assessing the likelihood and impact of those risks, and determining a fraud risk tolerance. Additionally, IRS did not provide evidence that it has examined the suitability of other antifraud controls, including controls to prevent fraudsters from obtaining new EINs using stolen information. IRS neither agreed nor disagreed with our recommendation to establish customer service-oriented performance goals for resolving business IDT cases. However, IRS stated that it will review its customer service-oriented performance goals and modify them, as warranted, to address the resolution of business IDT cases. Doing so would meet the intent of our recommendation. In its written comments, IRS stated that our report does not fully address obstacles that prevent timely case resolution. We have revised our discussion of pre-refund cases to more clearly identify nonresponses from taxpayers as a cause for delays. IRS also said our methodology for determining the time to close business IDT cases does not adequately consider the impact of nonresponses on the agency’s ability to close cases in a timely manner. We have added a note to figure 4 to acknowledge the challenge of nonresponses. 
However, IRS did not provide evidence during the audit that it collects data on how long a case is suspended while it waits for the taxpayer to respond—information that would provide insight into the challenges associated with resolving business IDT cases in a timely manner. As agreed with your offices, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Commissioner of Internal Revenue and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) describe the Internal Revenue Service’s (IRS) efforts to detect business identity theft refund fraud (business IDT), (2) evaluate the extent to which IRS’s efforts to prevent business IDT are consistent with selected fraud risk management leading practices, and (3) assess IRS’s efforts to resolve business IDT cases. In this report, business IDT refers to the fraudulent use of both business and employment tax forms. Both of these types of forms require an Employer Identification Number (EIN) when filing with IRS, and a fraudster can file these forms to obtain a refund. To address all of our objectives, we reviewed our prior reports on individual identity theft refund fraud and the Treasury Inspector General for Tax Administration’s (TIGTA) prior reports on business IDT. 
We also interviewed IRS officials from business units responsible for detecting, preventing, and resolving business IDT cases, specifically from Return Integrity and Compliance Services (RICS), Accounts Management (AM), and Criminal Investigation (CI). In December 2018, we visited IRS’s campus in Ogden, Utah, to interview officials responsible for IRS’s business IDT efforts and to observe how RICS and AM staff process and research business IDT cases using IRS information technology systems and tools. To describe IRS’s current processes to detect business IDT refund fraud, we reviewed documentation describing the business IDT fraud filters IRS implemented from 2017 through 2019, including the logic for each filter and the forms to which they apply. In addition, we analyzed data from IRS’s Dependent Database (DDb) on business IDT fraud filter results for applicable incoming business and employment tax returns IRS received from mid-January 2017 through mid-August 2019. This was the most recent, complete, and available set of data at the time of our review. This analysis showed the volume of returns selected by IRS’s business IDT fraud filters by form, tax processing year, and associated refund amount. We also analyzed data from IRS’s Business Master File Identity Check (BMFIC) system—RICS’s case management system for business IDT returns flagged by DDb—for cases opened from mid-January 2017 through mid-August 2019. This was the most complete set of data available at the time of our review. Our analysis of BMFIC data showed the number of returns that RICS researched as potential business IDT, the outcome of the case, and associated refund amounts. For the purpose of analysis and reporting, we grouped business IDT case outcomes into three categories: confirmed business IDT, not business IDT, and open/unresolved. 
We assessed the reliability of data from these systems by: (1) testing key data elements, including checks for missing, out-of-range, or logically inaccurate data; (2) reviewing documents for information about the data and IRS’s systems; and (3) interviewing officials knowledgeable about the data to discuss any limitations. We determined that these data were sufficiently reliable to describe the volume of incoming returns stopped by business IDT fraud filters, associated refunds, and the outcome of business IDT cases. To understand IRS’s efforts to collaborate with external partners to detect and prevent business IDT, we interviewed IRS and state officials from the Security Summit’s Business IDT sub-workgroup and reviewed IRS’s 2018 report which analyzed business-related data elements from incoming tax returns. We also interviewed IRS officials about a pilot program with the Alabama Department of Labor to help detect and deactivate potentially suspicious EINs established in that state. For context, we obtained information from January to December 2018 from IRS on the performance of this pilot, including the number of EINs identified as fraudulent. To evaluate the extent to which IRS’s efforts to prevent business IDT are consistent with selected fraud risk management leading practices, we reviewed the Fraud Reduction and Data Analytics Act (FRDAA) of 2015 and leading practices outlined in A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). We generally focused our review on the first two components of the Fraud Risk Framework: (1) commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management, and (2) plan regular fraud risk assessments and assess risks to determine a fraud risk profile. 
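For illustration only, the kinds of reliability checks described above (testing for missing, out-of-range, or logically inaccurate values) might look like the following sketch; the field names and date bounds are hypothetical, not drawn from actual IRS systems:

```python
from datetime import date

# Illustrative sketch of data reliability checks: flag missing,
# out-of-range, or logically inaccurate values in case records.
# The study-period bounds below are assumptions for the example.
EARLIEST = date(2017, 1, 15)   # assumed start of the study period
LATEST = date(2019, 8, 15)     # assumed end of the study period

def check_record(record):
    """Return a list of reliability problems found in one case record."""
    problems = []
    opened, closed = record.get("opened"), record.get("closed")
    if opened is None:
        problems.append("missing open date")
    elif not (EARLIEST <= opened <= LATEST):
        problems.append("open date out of range")
    # A close date earlier than the open date is logically inaccurate.
    if opened is not None and closed is not None and closed < opened:
        problems.append("closed before opened")
    return problems

good = {"opened": date(2018, 2, 1), "closed": date(2018, 5, 1)}
bad = {"opened": date(2018, 6, 1), "closed": date(2018, 3, 1)}
print(check_record(good), check_record(bad))
```

Records that fail such checks would be investigated or excluded before any duration or outcome analysis.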
We reviewed agency documents and information obtained from interviews, as described below, and compared them against leading practices identified in the Fraud Risk Framework related to these two components. We reviewed IRS’s most recent strategic planning documents related to reducing fraud, IRS organizational charts, and relevant Internal Revenue Manual (IRM) sections on business IDT operations and procedures. We interviewed officials from RICS, AM, CI, and the Office of Research, Applied Analytics, and Statistics (RAAS) to understand each business unit’s respective role in detecting, preventing, and resolving business IDT cases and the extent to which business units work together on day-to-day and longer-term efforts. In addition, we reviewed IRS reports on business IDT case workload. We also reviewed meeting notes from IRS’s IDT Executive Steering Committee (July and October 2017, and January and October 2018) to understand the extent to which IRS’s executive-level groups are, for example, involved in helping guide business IDT efforts or made aware of business IDT challenges. We interviewed officials from RICS, AM, CI, and RAAS and reviewed documentation on IRS’s efforts to identify and assess business IDT fraud risks. These included reviewing RAAS’s analyses on business IDT fraud filter performance, descriptions of potential new fraud filters that IRS may implement in the future, and the Wage and Investment Division’s risk register. We also interviewed officials from IRS’s Office of the Chief Risk Officer to understand IRS’s efforts to compile and report on enterprise-wide fraud risks and agency efforts to develop an antifraud culture. Further, we reviewed documentation related to three inherent fraud risks to business IDT that we identified in the course of our work: correspondence-based authentication, EIN vulnerabilities, and high false-detection rates for IDT fraud filters. 
This included reviewing prior GAO, TIGTA, and National Taxpayer Advocate reports and the status of open recommendations, and relevant IRM sections. We reviewed the methodologies of these reports and found them reasonable for the purpose of describing the inherent risks related to business IDT. In addition, we identified a false detection rate for business IDT fraud filters based on BMFIC cases opened from mid-January 2017 through December 2018. To do so, we compared the number of cases IRS determined were not business IDT, relative to the total number of cases. We did not include BMFIC cases from 2019 because at the time of our analysis, about 27 percent of those cases were unresolved. We also assessed the extent to which IRS is positioned to estimate the size and scope of business IDT. To do so, we reviewed documents and information on IRS’s efforts to collect quality data on incoming business and employment returns. We compared these efforts to leading practices associated with the first two components of the Fraud Risk Framework and Standards for Internal Control in the Federal Government related to using quality information. Specifically, we determined what proportion of incoming business and employment tax forms filed in 2018 would have been screened by business IDT fraud filters, by tax form type. We also reviewed a preliminary plan and interviewed RAAS and RICS officials on their efforts to develop a business IDT taxonomy. To assess IRS’s efforts to resolve business IDT cases, we reviewed IRS procedures for managing, researching, and resolving pre-refund and post-refund business IDT cases. We interviewed officials from RICS and AM to understand the rationale behind their respective current case resolution time frames, and potential reasons for case resolution delays. We compared RICS and AM’s efforts to resolve business IDT cases against Office of Management and Budget guidance on program management and providing customer service. 
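The false detection rate described above is a simple ratio of cases determined not to be business IDT to total cases researched. A minimal sketch, using hypothetical counts rather than actual BMFIC figures:

```python
def false_detection_rate(not_idt_cases, total_cases):
    """Share of filter-flagged cases later determined not to be IDT."""
    if total_cases <= 0:
        raise ValueError("total_cases must be positive")
    return not_idt_cases / total_cases

# Hypothetical example: 77 of 100 flagged returns turn out legitimate.
print(false_detection_rate(77, 100))  # 0.77
```

A high rate by this measure means the filters are stopping many legitimate returns, which burdens taxpayers and delays refunds.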
To determine RICS’s performance in resolving business IDT cases identified during the pre-refund phase, we analyzed 181,032 cases from BMFIC, described above. Specifically, we calculated the duration from when RICS opened the case in BMFIC to when the case was closed. In addition, we determined how many cases in RICS’s inventory were open at the time of our analysis in August 2019. For these open cases, we manually added the date we received the data as the date the case was closed. This was an indicator of the minimum amount of time RICS could have taken to close these cases. For this analysis, we did not include cases opened and closed in 2019 because we wanted to ensure there was sufficient time for RICS to research and close a case. We determined that cases opened by the end of December 2018 gave both RICS and AM (discussed below) enough time to resolve a case. In addition, we identified an anomaly in RICS’s 2019 cases. IRS officials stated that a new fraud filter inaccurately flagged incoming returns on one form, and IRS released these returns. Our analysis of RICS’s data showed that these returns accounted for about 65 percent of closed cases in 2019, and that they were resolved in an unusually short time frame (fewer than 45 days), thus skewing the overall data. We also did not include 1,679 cases that were opened and closed in zero or fewer days. To determine AM’s performance in resolving business IDT cases identified during the post-refund phase, we analyzed 1,997 relevant business IDT cases from IRS’s Correspondence Imaging System (CIS) that AM opened from July 2016 through December 2018. As discussed earlier, we did not include cases opened and closed in 2019 to allow AM enough time to research and resolve a case. We calculated the duration from when AM opened the case in CIS to when the case was closed. We also determined how many cases in AM’s inventory were open at the time of our analysis. 
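The duration calculation just described, in which the date we received the data stands in for a close date on still-open cases (yielding a minimum possible resolution time) and cases closed in zero or fewer days are dropped, could be sketched as follows; all dates and the data-receipt value are hypothetical:

```python
from datetime import date

# Assumed date the data extract was received (stands in for a close
# date on cases that were still open at that point).
DATA_RECEIVED = date(2019, 8, 15)

def resolution_days(opened, closed):
    """Days from case open to case close (or to data receipt if open)."""
    effective_close = closed if closed is not None else DATA_RECEIVED
    return (effective_close - opened).days

cases = [
    (date(2018, 3, 1), date(2018, 7, 1)),   # closed case
    (date(2018, 11, 5), None),              # still open at extract time
    (date(2018, 6, 2), date(2018, 6, 2)),   # zero-day case, excluded
]

durations = [resolution_days(o, c) for o, c in cases]
kept = [d for d in durations if d > 0]  # drop zero-or-fewer-day cases
print(kept)  # [122, 283]
```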
For these open cases, we manually added the date we received the data as the date the case was closed. This was an indicator of the minimum amount of time AM could have taken to close these cases. We assessed the reliability of the CIS data by reviewing relevant documents, testing key data elements, and interviewing knowledgeable IRS officials. We determined that the data from CIS were sufficiently reliable to determine how long it took AM to resolve post-refund business IDT cases during this time period. We conducted this performance audit from July 2018 to January 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Internal Revenue Service Appendix III: GAO Contact and Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Shannon Finnegan (Assistant Director), Heather A. Collins (Analyst-in-Charge), Ann Czapiewski, Michele Fejfar, Robert Gebhart, Tonita Gillich, Bethany Graham, James Andrew Howard, Krista Loose, Jungjin Park, Bryan Sakakeeny, and Rebecca Shea made significant contributions to this report.
Why GAO Did This Study Business IDT is an evolving threat to both taxpayers and IRS and, if not addressed, can result in large financial losses to the government. The risk of business IDT has increased due to the availability of personally identifiable information and general ease of obtaining business-related information online. This makes it more difficult for IRS to distinguish legitimate taxpayers from fraudsters. GAO was asked to review IRS's efforts to combat business IDT. This report (1) describes IRS's current efforts to detect business IDT, (2) evaluates IRS's efforts to prevent business IDT against selected fraud risk management leading practices, and (3) assesses IRS's efforts to resolve business IDT cases. GAO reviewed IRS documents and business IDT fraud detection data, evaluated IRS's efforts to combat business IDT against two components of GAO's Fraud Risk Framework, analyzed case resolution data, and interviewed IRS officials. What GAO Found The Internal Revenue Service (IRS) has efforts in place to detect business identity theft refund fraud (business IDT), which occurs when thieves create, use, or try to use a business's identifying information to claim a refund. IRS uses computerized checks, or fraud filters, to screen incoming returns. From January 2017 to August 2019, IRS researched about 182,700 returns stopped by business IDT fraud filters. IRS determined that about 77 percent of returns (claiming $38.3 billion) were not business IDT and about 4 percent of returns (claiming $384 million) were confirmed business IDT. As of August 2019, IRS was reviewing the remaining returns. The Fraud Reduction and Data Analytics Act of 2015 created requirements for agencies to establish financial and administrative controls for managing fraud risks. These requirements are aligned with leading practices outlined in GAO's A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). 
IRS has taken steps to understand fraud risks associated with business IDT but has not aligned its efforts with selected components within the Fraud Risk Framework . First, IRS leadership has demonstrated a commitment to identifying and combating overall identity theft refund fraud, but has not designated a dedicated entity to design and oversee business IDT fraud risk management efforts agency-wide. This is because the program is relatively new. Without designating an entity to help guide agency-wide business IDT fraud risk efforts, it is not clear which entity would be responsible for assessing business IDT risks and documenting the results. Second, IRS has not conducted a fraud risk assessment or developed a fraud risk profile for business IDT consistent with the Fraud Risk Framework's leading practices. Doing so would help IRS determine the likelihood and impact of risks, the level of risk IRS is willing to tolerate, and the suitability, costs, and benefits of existing fraud risk controls. IRS officials stated that they have not formally performed a fraud risk assessment or developed a risk profile because they have directed their resources toward identifying and addressing business IDT that is occurring right now and improving fraud detection efforts. Documenting a risk profile would also help IRS determine whether additional fraud controls are needed and whether to make adjustments to existing controls. Third, IRS has not assessed which business-related tax forms or fraud scenarios pose the greatest risk to IRS and taxpayers. Current business IDT fraud filters cover the most commonly filed tax forms; however, IRS has not developed fraud filters for at least 25 additional business-related forms that may be susceptible to business IDT. Without additional data on business IDT, IRS cannot estimate the full size and scope of this problem. 
IRS has procedures and general guidelines for resolving business IDT cases, but it does not resolve all cases within these guidelines. Further, IRS has not established customer service-oriented performance goals for resolving business IDT cases, which is inconsistent with federal guidance. Establishing performance goals may help IRS better serve taxpayers and minimize additional costs to the Treasury. What GAO Recommends GAO is making six recommendations, including that IRS designate a dedicated entity to manage its business IDT efforts, develop a fraud risk profile consistent with leading practices, implement additional fraud filters consistent with the profile, and establish customer service-oriented performance goals for resolving business IDT cases. IRS agreed with five recommendations. IRS neither agreed nor disagreed with our recommendation to establish customer service-oriented performance goals, but stated it would take actions consistent with the recommendation.
gao_GAO-19-526
Background Disaster Response Roles and Responsibilities Disaster response can involve many federal, state, territorial, tribal, private sector, and voluntary organizations. The National Response Framework describes how the federal government, states and localities, and other public and private sector institutions should respond to disasters and emergencies. For example, state, local, tribal, and territorial governments are to play the lead roles in disaster response and recovery. Local emergency agencies—police, firefighters, and medical teams—are to be the first responders. In serving individuals who have disabilities and others who have access or functional needs, disaster responders at all levels are responsible for ensuring compliance with any applicable requirements for equal opportunity and non-discrimination. Federal agencies become involved in responding to a disaster when effective response and recovery are beyond the capabilities of the state and local governments. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) authorizes federal funding and support to assist states and localities in responding to a disaster. This federal support is available under the Stafford Act when the President declares a major disaster or emergency in response to a request by the governor or by the chief executive of a tribal government. Under the National Response Framework, DHS is the federal agency with primary responsibility for coordinating disaster response, and within DHS, FEMA has lead responsibility. In addition to DHS, at least 29 other federal agencies carry out disaster assistance programs and activities. The National Response Framework identifies 15 emergency support functions (ESFs)—such as communication, transportation, and energy—and designates a federal department or agency as the coordinating agency for each function. 
Under the National Response Framework, FEMA is designated as the coordinating agency for ESF-6, which includes mass care, emergency assistance, temporary housing, and human services. The National Response Framework also designates primary and support agencies for each ESF. Both FEMA and the Red Cross are the primary agencies for ESF-6. As co-primary agencies, FEMA and the Red Cross are responsible for working closely to coordinate mass care and related services across sectors, including identifying resource needs and organizations with mass care capacity to address those needs, and establishing strategies to address resource gaps (see fig. 1). According to ESF-6, Red Cross also provides technical assistance to FEMA and serves as its principal mass care subject matter expert. The Red Cross works with FEMA to provide such assistance to state and local partners, according to FEMA. In addition, the Red Cross and FEMA facilitate the mobilization of resources and coordination within the whole community for the provision of mass care services. The Red Cross role in ESF-6 has shifted over time. At the time of Hurricane Katrina, Red Cross was a primary agency, but in the 2008 update to ESF-6 it became a support agency. However, in a 2013 update, Red Cross was shifted back to the primary agency role and given new responsibilities such as working with FEMA to identify available mass care capacity, anticipate mass care requirements, and establish strategies to address gaps in coordination. These responsibilities, among others, remain in effect under the current ESF structure. FEMA and Red Cross coordinate mass care with the support of other federal agencies such as USDA, the Department of Health and Human Services, and the Department of Defense (DOD), as well as voluntary organizations and partners at the state and local levels. There are also over a dozen federal agencies named as having supporting roles in ESF-6 (see app. I for a list of ESF-6 support agencies). 
For example, DOD and its Army Corps of Engineers provide construction and engineering support for temporary housing and sheltering, including inspecting shelter facilities to ensure accessibility and suitability. In addition, ESF-6 names over 50 members of the National Voluntary Organizations Active in Disaster (NVOAD) that provide a wide range of services in support of mass care and other ESF-6 activities, including the Salvation Army, Southern Baptist Convention Disaster Relief, and Feeding America. State and local governments are vital to mass care provision and assessing their own communities’ response capabilities. According to ESF-6, local government agencies work with voluntary organizations and the private sector to coordinate activities that meet immediate needs of disaster survivors. When those needs exceed local resources, the state may provide additional support. When these resources are insufficient, federal assistance may be requested through the FEMA regional office. Complex and Concurrent 2017 Hurricanes We found in 2018 that FEMA faced a number of challenges that slowed and complicated its response efforts to the 2017 hurricanes, especially Hurricane Maria in Puerto Rico. The sequential and overlapping timing of the three hurricanes strained staffing resources and created logistical challenges in deploying additional assistance (see fig. 2). In particular, FEMA had already deployed staff and resources to support the response efforts for Hurricane Harvey in Texas when the other major hurricanes made landfall shortly thereafter. Moreover, FEMA’s response efforts in Puerto Rico and the U.S. Virgin Islands were complicated by a number of factors, including their distance from the continental United States and limited local preparedness for a major hurricane. 
We have previously reported that there is increasing reliance on the federal government for disaster assistance as the number of natural disasters increases and that costs will likely continue to rise as the climate changes. FEMA identified key findings related to mass care in its After-Action Report for the 2017 hurricanes, noting differences in shelter populations across the states, as well as the duration of shelter stays (see fig. 3). FEMA also reported facing challenges transitioning survivors out of group shelters in a timely fashion. Capabilities Assessment In order to qualify for federal emergency preparedness funding, states and eligible urban areas (grantees) are required to regularly submit information to FEMA on their ability to respond to a disaster. Specifically, grantees first identify their own capability targets—such as for sheltering disaster victims—through the Threat and Hazard Identification and Risk Assessment, and then assess their progress toward these targets annually in the Stakeholder Preparedness Review (capabilities assessments). In fiscal year 2018, FEMA awarded $402 million to states and territories through the State Homeland Security Program, and $580 million to urban areas through the Urban Area Security Initiative, both of which require grantee capability assessments. FEMA provides guidance and technical assistance to state and local partners in their self-assessment efforts. According to officials, FEMA does not conduct its own evaluations of state, local, and voluntary organizations’ capabilities. FEMA and Red Cross Coordinated Mass Care for 2017 Hurricanes but Some Needs Went Unfulfilled Co-location Helped FEMA and Red Cross Facilitate Mass Care Coordination after Disasters FEMA and Red Cross established joint operation centers where they co-located with key partners such as the Salvation Army and NVOAD for each of the 2017 hurricanes, which facilitated coordination of shelter, feeding, and supply distribution. 
In addition to co-locating at FEMA’s National Response Coordination Center in Washington, D.C., FEMA, the Red Cross, and key mass care partners also co-located in state and local emergency operations centers (see fig. 4). Our prior work has found co-location of staff enhances interagency collaboration. Co-location contributed to relationship-building that facilitated communication and coordination of mass care services, according to FEMA, Red Cross, and emergency management officials in all four states we visited. See figure 5 for examples of how various agencies and sectors prepared food and supplies for mass care operations. Co-location meant workers could communicate face-to-face, as key partners needed to collaborate and communicate resource requests to FEMA and other agencies. In the U.S. Virgin Islands, DOD provided airplanes that enabled workers to fly between the islands to attend face-to-face meetings, according to FEMA regional officials. According to officials in two states we visited, this type of face-to-face communication facilitated building relationships. Moreover, officials in one state told us that co-location enabled them to communicate survivor needs directly to FEMA, which could then provide assistance. This was especially critical when power and cell phone service were out, particularly in Puerto Rico and the U.S. Virgin Islands, which experienced prolonged power outages and disabled electronic communications. Officials from federal agencies and the Red Cross described some additional benefits of co-location:

USDA Food and Nutrition Service (FNS) officials said co-located ESF-11 (Agriculture and Natural Resources) staff in the National Response Coordination Center provided food inventories to staff at the ESF-6 desk.

Red Cross officials said they were able to quickly obtain supply trucks after Hurricane Harvey in Texas because the Red Cross had representatives at FEMA’s National Response Coordination Center.
As we previously reported, DOD provided high-water vehicles, amphibious vehicles, and boats to transport supplies for the Red Cross and support FEMA logistics efforts. Officials in one state noted that in-person communication was especially useful for coordinating mass care when FEMA’s online system for submitting resource requests could not be used (see text box).

Web Emergency Operations Center

Resource requests can be communicated through the Federal Emergency Management Agency’s (FEMA) Web Emergency Operations Center (WebEOC), an electronic system that processes and tracks resource requests from state or local governments. WebEOC supports emergency management processes and functions by providing a real-time operating picture for FEMA headquarters, regions, and federal, state, local, and tribal strategic partners. In 2015, the Department of Homeland Security’s Office of Inspector General (OIG) found that WebEOC was not sufficiently integrated with key agency systems and could cause delays in providing disaster assistance. In 2017, WebEOC was used in two of our four selected states, and a predecessor system to WebEOC was used by one of FEMA’s regional offices, according to officials in these areas. WebEOC was useful in tracking resource requests, but in-person communication was more helpful for coordinating mass care, according to FEMA regional officials. In cases where staff could not access WebEOC, requests to FEMA were presented on paper, according to state officials. In 2018, FEMA reported that it had provided every state with FEMA WebEOC accounts so state users could submit resource requests directly to FEMA. WebEOC also allows FEMA to share aggregated data, such as shelter counts and feeding information, according to FEMA officials.

FEMA, Red Cross, and Other Agencies Faced Mass Care Challenges, and Some Needs Were Unmet

Federal officials and partners in Texas, Florida, Puerto Rico, and the U.S.
Virgin Islands described many challenges they encountered in coordinating mass care. While the concurrent timing and intensity of the 2017 hurricanes presented many unforeseen challenges, several state and local governments and voluntary organizations told us about issues related to mass care coordination and planning. As a result, some supply distribution, sheltering, and feeding needs went unmet.

Miscommunication: Miscommunication among disaster workers affected supply distribution. For example, FNS officials reported challenges with delivering baby formula for about 28,000 infants in Puerto Rico through FEMA. One shipment of baby formula was lost and discovered frozen and unusable in Puerto Rico because FEMA officials were not aware that the products had been delivered, according to FNS’ 2018 After-Action Report. The report also stated that some perishable infant formula and food remained at a port in Florida several weeks after delivery. FEMA officials told us they shipped nearly 400 containers of infant formula and food in the first 3 months after Hurricane Maria, but that competition for port clearances made it challenging to coordinate, prioritize, and track supplies. As a result, some who needed these supplies may not have received them. FEMA officials also noted that they believe survivor needs were met by a combination of disaster relief supplies and the restoration of capacity at grocery stores. According to FNS’ 2018 After-Action Report, FNS officials met with FEMA and completed training on FEMA’s logistics system in 2018 to be able to better track future shipments of these products.

Insufficient shelter staff: In Texas and Florida, emergency managers we spoke with described having unprecedented numbers of residents needing shelters but not enough staff initially to operate them.
To address this gap, they said they relied on members of the state National Guard or local government and community organizations to staff shelters, but in some instances, shelters continued to have insufficient numbers of workers. To improve shelter staffing for future disasters, emergency managers in Florida told us they are working on training additional county employees to serve as shelter staff.

Serving individuals who have disabilities: Public shelters faced challenges in some cases serving individuals who have disabilities, as we previously reported. For example, we reported in 2019 that some individuals who have disabilities faced challenges accessing services from local shelters, including restrooms. In another example, the lack of a quiet space in public shelters for individuals with autism negatively impacted their mental health, according to officials from an advocacy group.

Extensive damage to hurricane shelters: In Texas, Puerto Rico, and the U.S. Virgin Islands, Hurricanes Harvey, Maria, and Irma damaged many buildings planned for use as hurricane shelters, according to emergency management and local government officials in these areas. As a result, some remaining shelters were at maximum capacity. In some cases, survivors and staff had to relocate to alternate sites during the hurricanes. For example, an arena in Humacao, Puerto Rico, and a Department of Human Services building in the U.S. Virgin Islands served as shelters when intended shelter buildings were destroyed by Hurricanes Maria and Irma, respectively (see fig. 6).

Damaged roads and communications infrastructure: Damaged and flooded roads and the affected terrain in all four states contributed to challenges in distributing supplies, especially in Puerto Rico and the U.S. Virgin Islands. In Puerto Rico, FEMA received complaints from municipalities that food was not reaching neighborhoods in need.
Impassable roads and the loss of communications disrupted FEMA’s plans, which had designated certain partners to distribute meals. Several weeks after Hurricane Maria hit, FEMA redesigned its distribution strategy, which included identifying the most vulnerable municipalities and having liaisons from the Puerto Rico Emergency Management Agency and the municipalities help coordinate the distribution. This enabled food to reach neighborhoods in need.

Insufficient supplies: According to Puerto Rico Department of Education officials, FEMA was initially reluctant to provide water to schools serving as shelters because the schools were supposed to have their own water supply from the Puerto Rico Department of Education’s warehouses. However, Puerto Rico Department of Education officials said they only had enough water for shelter residents for 30 days. The agency requested help to meet additional needs, but FEMA did not have enough water or food boxes to help supplement the schools’ supply. There were several thousand people sheltered in the schools, but according to these officials, the Puerto Rico Department of Education was responsible for providing food and water to survivors whether or not they were shelter residents. Once the Puerto Rico Department of Education officials met with FEMA and demonstrated their need for water, they were able to secure supplies from FEMA.

Early relocation of survivors to hotels: In Texas, the early relocation of survivors from shelters to eligible hotels under FEMA’s Transitional Sheltering Assistance program challenged mass feeding operations, according to two Texas emergency management officials and representatives of two voluntary organizations. As a result, some survivors did not receive food assistance, as described below.
FEMA’s Transitional Sheltering Assistance program and the Department of Agriculture’s Disaster Supplemental Nutrition Assistance Program (D-SNAP), while not considered to be a central part of mass care under the National Response Framework, provide assistance to survivors after disasters and offer services that may intersect with mass care activities. According to officials in Texas and Florida, some aspects of how these programs were implemented contributed to unmet needs.

Transitional Sheltering Assistance program: After the initial response effort ends and mass shelters close, FEMA’s Transitional Sheltering Assistance program is intended to provide short-term sheltering assistance to survivors who are still unable to return home. States request FEMA approval for Transitional Sheltering Assistance when they determine there is a need for short-term assistance. According to officials in Texas, the Transitional Sheltering Assistance program was activated earlier than they expected, before mass shelters closed, resulting in survivors leaving early to stay in program-eligible hotels. According to these officials, the early activation made it difficult to track where survivors were located and what assistance they needed. According to an official at a voluntary organization, survivors in program-eligible hotels were going without food and some were eating coffee grounds in their hotel rooms because they had no food and no money to purchase food. Officials from state agencies and voluntary organizations that could provide assistance told us they could not get information from the hotels about how many survivors were guests at specific hotels, due to the hotels’ reluctance to provide guests’ information. When voluntary organizations tried to set up feeding operations at hotels, some hotels did not want the organizations operating on hotel premises, according to organizational representatives.
One state official also said some hotels did not allow food distribution because of concerns about food sitting in rooms or the hotels’ preference that their guests use their restaurant facilities.

D-SNAP: D-SNAP provides temporary food assistance for households affected by a natural disaster. D-SNAP usually begins after grocery stores have re-opened and families are able to purchase and prepare food at home. USDA’s FNS offers guidance to states that choose to operate a D-SNAP program on where and how to operate D-SNAP registration sites, including guidance on serving individuals who have disabilities and the elderly. For example, FNS guidance states that D-SNAP registration sites should offer extra cooling measures in a special waiting area for individuals who have disabilities and the elderly, and move these individuals to the front of regular registration lines. FNS’ After-Action Report identified, and state and county officials in Texas and Florida said they observed, D-SNAP registration sites that did not appropriately serve elderly individuals or those who have disabilities, such that some elderly survivors fainted while waiting in the heat. In one state we visited, officials from a local voluntary organization said the state government did not work with community-based groups to identify local D-SNAP registration sites. As a result, D-SNAP registration sites did not align with where survivors needed assistance, and, according to these officials, approximately 50,000 applicants came to one site and were turned away after waiting for hours in the heat. To help address these challenges, some elderly individuals and individuals with disabilities in Florida were allowed to register for D-SNAP over the phone in December 2017 and in May 2018, according to a state official.
Coordination Efforts Do Not Include Specific Agreements and Regular Evaluation

While the National Response Framework indicates that many agencies participating in disaster response formalize their responsibilities in written agreements, we found that key mass care partners either did not have such agreements at the time of the 2017 hurricanes or had agreements that did not clearly outline responsibilities. Although Red Cross has written agreements with some state and local partners, counties we visited in Florida, Texas, and the U.S. Virgin Islands—states where Red Cross shelters disaster victims—did not have written agreements that clearly specified what mass care services would be provided by the Red Cross. In Florida, several counties we visited did not have formal agreements with the Red Cross during the 2017 hurricane season. In lieu of a formal agreement, one of the counties had an email from the Red Cross stating that the Red Cross could support one of 15 shelters, according to officials and documents we reviewed. In some cases, even when written agreements were established, there were still unclear roles and expectations. For example, another Florida county did have an agreement in place, but county officials said they found out after the 2017 hurricane season started and shortly before Hurricane Irma that the Red Cross could support only eight shelters—a substantial decrease from previous years. Further, when counties did have written agreements with the Red Cross, the agreements did not always clearly define responsibilities. The agreements also did not specify how and at what point sheltering and feeding needs and capabilities should be communicated by the Red Cross to counties, which exacerbated challenges in providing these services after the hurricanes. After the 2017 hurricane season, officials in three states we visited said they have been working toward clarifying responsibilities in written agreements.
Red Cross officials also said they have been developing letters of intent with local government partners since 2017, which describe what services can be provided by the Red Cross in these localities. However, our review of some of these new finalized agreements found that they lack consistency and detail about what each party can deliver regarding sheltering, feeding, and supply distribution. For example, Red Cross’ agreement with one Florida county specifies it can operate two shelters for about 1,000 residents, while its agreement with another county states it will “support shelters as resources allow.” Red Cross officials said written agreements may be difficult to change as needs and capabilities change over the course of the response to a disaster. Outside of written agreements, Red Cross officials said they collaborate with government agencies in other ways, such as participating in mass care exercises to create a shared understanding of mass care roles and work on jointly developed response plans. Red Cross officials also told us that they need to be clearer with local jurisdictions about what they can and cannot provide, and that they need to reach mutual understanding with local governments about shared planning assumptions, such as the peak shelter population and what the Red Cross could provide within specified timeframes. According to Red Cross officials, neither they nor local governments established clear expectations in the past. In August 2017, the Red Cross launched a nationwide readiness initiative focusing on mass care planning discussions with local governments. This initiative also includes clarifying planning assumptions with local governments on a recurring basis. FEMA provides some guidance to states and localities about how to effectively coordinate with mass care partners, as well as a training course that encourages establishing written agreements.
FEMA’s training materials for the mass care planning and operations course describe the differences in types of agreements that states and localities might establish with mass care partners, and specifically suggest defining the roles and responsibilities of each party. In addition, FEMA has helped develop tools for stakeholders to use when coordinating mass care operations, such as the Multi-Agency Feeding Support Plan Template. This tool guides states, voluntary organizations, and other partners to clearly establish roles and responsibilities related to specific aspects of feeding, including the delivery of supplies and networking with other organizations to identify unmet needs. FEMA officials noted that all of their mass care templates encourage this type of planning for roles and responsibilities. However, FEMA guidance and training materials do not suggest detailing the specific responsibilities of each entity for mass care services in the written agreements. For example, the guidance does not explicitly prompt states and localities to use their written agreements to specifically establish how much shelter and feeding assistance an agency, government, or organization can provide. Our prior work has found that clarifying responsibilities through written agreements is critical to effective interagency collaboration. When an agency, government, or organization does not specifically indicate how much shelter and feeding assistance it can provide in a disaster, its partners may have unfounded expectations. For example, in Texas, officials in one city said when one large mass shelter first opened, there were only a small number of Red Cross volunteers, which was insufficient to operate and manage a shelter with tens of thousands of survivors; this fell short of city officials’ understanding that Red Cross would fully staff the location from the beginning.
Without further guidance from FEMA on how to establish effective written agreements, unmet expectations between state and local partners and voluntary organizations may persist and place disaster survivors at risk. Our prior work has also found that federal agencies engaged in collaborative efforts need to create the means to evaluate their activities in order to identify areas for improvement. In addition, federal internal control standards state that management should establish an organizational structure, assign responsibility, and delegate authority to key roles in order to achieve objectives. Moreover, the organizational structure should be evaluated periodically in order to meet the objectives and adapt to new situations. FEMA is responsible for coordinating and supporting the federal response to major disasters and relies significantly on the Red Cross as its co-primary agency under ESF-6. While FEMA and the Red Cross conduct after-action reviews following certain major disasters, including for the 2017 hurricane season, these reviews are focused on response and recovery efforts and do not include a broader review of roles and responsibilities of the co-primary agencies. Based on its findings on the 2017 hurricane season, FEMA called for some revisions to the National Response Framework and ESF annexes related to coordination across sectors. Accordingly, FEMA is currently revising the framework, which is considered a living document to be regularly reviewed to reflect experience gained from its use. However, FEMA has not proposed revisions to ESF-6 as part of its current review of the National Response Framework and ESF annexes. Specifically, FEMA has not reviewed whether the current structure of ESF-6 leadership roles and responsibilities is best suited for coordinating mass care, or whether there are responsibilities that should be shifted. ESF-6 is unique among ESFs in that it has a voluntary organization serving as a co-primary agency. 
Further, the Red Cross’ role under ESF-6 has changed multiple times since Hurricane Katrina. According to FEMA officials, FEMA is not required to review ESF-6 leadership roles and responsibilities, and instead focuses on the overall improvement of mass care delivery, including mass care activities and services. However, FEMA’s ESF Leadership Group noted that it was not always clear which agency within an ESF is best suited to carry out a task. Evaluating collaborative efforts can help key decision makers within the agencies obtain feedback for improving both policy and operational effectiveness. Moreover, the National Response Framework is considered a living document, and DHS plans regular reviews to evaluate consistency with existing and new policies, evolving conditions, and the experience gained from its use. As we have previously reported, in disasters in which the federal government is involved, the extent and effectiveness of the Red Cross’s activities could have a direct impact on the nature and scope of the federal government’s activities. Given the challenges experienced with mass care during the response to the 2017 hurricanes, FEMA is missing an opportunity to identify areas for improvement and strengthen interagency coordination by not reviewing ESF-6 leadership roles and responsibilities.

Pre-existing Relationships Facilitated Mass Care Coordination, but Some Community Groups Were Not Integrated with Response Efforts

Many of the FEMA, Red Cross, and local government officials and representatives from local voluntary organizations we interviewed emphasized the importance of pre-existing relationships among established partners in coordinating mass care during the 2017 hurricanes. Relationships between these established mass care partners were often formed during non-disaster periods through regular conference calls and mass care training exercises.
For example, officials in all four state emergency management departments we visited described positive relationships developed with FEMA staff through regular joint training exercises. FEMA’s Voluntary Agency Liaisons (VALs) help facilitate relationships between FEMA and established mass care partners. For example, VALs serve as contacts for non-governmental organizations active in disasters on a routine basis and during disaster response. In one FEMA regional office, officials said VALs serve as mass care specialists and regularly participate in calls with mass care partners. While such pre-existing relationships among established mass care partners facilitated mass care coordination, officials from voluntary organizations that did not have pre-existing relationships—unaffiliated organizations—reported challenges connecting with established mass care organizations, such as FEMA and the Red Cross, to share knowledge that could have informed response efforts. During the 2017 hurricane response, officials from unaffiliated organizations such as local advocacy groups and faith-based organizations told us they experienced challenges sharing critical information regarding needs, resources, and capabilities with established mass care organizations. These coordination challenges affected their ability to provide mass care services to certain populations. For example:

A group of community organizations in Florida representing low-income and migrant populations had information on the location of people needing assistance, but reported difficulties in locating FEMA and Red Cross officials with whom to share that information.

Representatives of a community group that assists victims of domestic violence in the U.S. Virgin Islands said there was no centralized way to share critical information and no plan for how to best address the issues facing these survivors.
For example, they said the Red Cross had mapped damaged areas but was not sharing that information with community groups that could have provided assistance. This group said these maps could have been used to help locate people who were at particular risk. Red Cross officials stated that they experienced challenges in sharing damage assessment information in the U.S. Virgin Islands due to technology issues, which prevented them from being able to share these data securely with other organizations.

Representatives from several faith-based organizations in multiple states told us they had food, water, and supplies, as well as local knowledge of need. Two of these representatives said FEMA and the Red Cross did not share information with them as to where they had already distributed supplies. This information was important so as to not duplicate efforts and to ensure those who still needed supplies were not overlooked, according to these representatives.

Some migrant populations in all four areas we visited were hesitant to seek or receive assistance from federal, state, and local government agencies due to their undocumented immigration status, according to emergency management officials and community group representatives. Officials from multiple local voluntary organizations said they knew where migrant populations were located and what types of assistance they needed; they were trusted by these populations, but had difficulty finding FEMA or Red Cross representatives with whom to share this information.

Established mass care partners, including FEMA and the Red Cross, may not share information with unaffiliated organizations due to concerns about privacy, according to officials. Local governments also may not receive such information, because FEMA shares it with the states and the states are responsible for determining when to share it with local governments, according to FEMA officials.
Local governments and unaffiliated organizations told us, however, that they do not need personally identifiable information, and that aggregated information about overall resource needs in certain locations would be sufficient for their purposes. For example, county officials in two states told us it was difficult to get FEMA data that would have helped them target areas for assistance, including those that other agencies might not have been able to reach. Similarly, the leader of a group that coordinates local voluntary organizations said they only needed aggregate-level data to identify needs in different counties. In addition, the Red Cross told us that mass care partners could access certain information from their RC View portal, which provides situational awareness information that supports resource requests and needs assessments. However, the Red Cross did not share such information with all its partners during the 2017 hurricanes because the technology was not yet ready. As of May 2019, Red Cross officials told us they are working on providing access to their RC View portal for several key partners, and that they intend to expand access to RC View to additional organizations in the future. ESF-6 states that Red Cross, in conjunction with FEMA, will facilitate the mobilization of private sector partners for the provision of mass care services. FEMA’s most recent strategic plan emphasizes the importance of a whole community approach to disaster response because individuals and local communities are the true first responders in a disaster. FEMA guidance states that the integration of non-traditional responders (which may include unaffiliated organizations) providing mass care services may be necessary during severe disasters. Federal internal control standards also emphasize the importance of communicating externally to key stakeholders. 
By not engaging in information sharing with unaffiliated organizations, FEMA and the Red Cross may miss opportunities to more accurately and efficiently coordinate mass care. As a result, those in need may not receive critical assistance in a timely way.

Red Cross’ Training for Staff Deployed to Disaster Areas

Red Cross provides training for its staff and volunteers deployed to disaster areas. This training includes information on the area of deployment, the nature of the disaster, and any cultural sensitivities they need to be aware of, according to Red Cross officials. However, unfamiliarity with local traditions and norms challenged Red Cross personnel when they arrived at disaster sites in 2017, and some local governments and community groups said this affected mass care coordination. Red Cross officials said they initially did not have enough Spanish speakers in Puerto Rico during the response to Hurricane Maria, for example. To address this need, they used Spanish-speaking workers from the International Red Cross community in Mexico and South America to assist with mass care coordination, according to Red Cross officials. As a result of challenges encountered during the 2017 hurricane season, Red Cross officials said that they have made changes to their approach intended to increase their engagement with the Latino community. This effort includes having materials translated into Spanish. To counter concerns among some disaster survivors about providing immigration status information, Red Cross officials said they have taken steps to clarify that the Red Cross does not collect this information.
FEMA Did Not Collect Key Information on Capabilities of Mass Care Partners Prior to the 2017 Hurricanes and its Updated Approach Has Limitations

Mass Care Capabilities Data Collected by FEMA Were Not Useful for the 2017 Hurricane Response, but FEMA is Making Changes

Information on the mass care capabilities of state and local jurisdictions that FEMA collected in 2016 and 2017 was not specific enough to aid the agency in its response to the 2017 hurricanes, according to FEMA’s After-Action Report and agency officials. The reporting process at the time of the 2017 hurricanes did not require grantees to report specific estimates of their current capabilities for providing mass care, which resulted in an incomplete picture of capabilities. With regard to mass care capabilities, FEMA did not ask grantees to report the number of people they could shelter, or how long they could maintain sheltering operations. For example, one state affected by the 2017 hurricanes identified gaps in the state’s capability to provide cots, blankets, laundry facilities, kitchens, and shelter facilities, but did not quantify the shortfall in its assessment submitted in December 2016. In addition, it was optional for grantees to describe deficiencies in their mass care capabilities at the time of the 2017 hurricanes, according to FEMA officials. One grantee affected by the 2017 hurricanes had indicated in its assessment from December 2016 that there were gaps in several mass care capabilities, such as shelter equipment and training for family reunification. However, this grantee chose not to include an additional description of what those gaps were. As a result of these limitations, FEMA and its grantees did not have specific information on state, territorial, and urban mass care capabilities or gaps at the time of the 2017 hurricanes.
Officials from several states told us they were not aware of capabilities assessments being used during the response to the 2017 hurricanes, but some said this information could have been useful. For example, an official in one state said the information could be used for resource targeting. In submissions from the year following the 2017 hurricanes, 35 state and territorial grantees did not provide gap descriptions for mass care, which were optional at the time. According to FEMA, the agency recognized the limitations of the capabilities assessment data it had been collecting and began revising its methodology prior to 2017. FEMA’s After-Action Report for the 2017 hurricanes stated that one reason the agency began revising its capabilities assessment methodology was to provide more actionable information to use during response. Revisions were implemented for the 2018 reporting period that could result in FEMA collecting more specific and descriptive data on mass care capabilities, such as the number of people for whom the grantee can provide shelter, food, water, and relocation assistance as part of mass care (see table 1). FEMA’s 2018 guidance encouraged grantees to use a standardized format developed by FEMA, which allows grantees to insert community-specific numbers into a template when they report capability targets and estimates. The new standardized format also generates a quantitative statement of a grantee’s capability gaps (see table 2). Other new changes in FEMA’s revised approach will also allow the agency to collect more specific information on mass care capabilities. For example, starting in 2018, grantees were required to:

Report the extent to which capabilities have been lost, built, or sustained over the previous year.

Describe intended approaches for addressing capability gaps and sustaining capabilities built, including investments in resources.
Describe the extent to which funding sources contributed to building or sustaining capabilities and improving disaster outcomes. Rate their level of confidence (1-5 scale) in the accuracy of their capability assessment for each target. These data elements have the potential to inform both disaster planning and response operations. FEMA’s Updated Approach to Collecting Mass Care Capabilities Data Does Not Require Input from Key Mass Care Providers FEMA revised its methodology for collecting capabilities assessment data in 2018, but it does not collect key information that could better inform its mass care planning. FEMA does not specifically require grantees to solicit the input of key partners in assessing mass care capabilities, according to officials, even though mass care generally depends on the work of such organizations. For example, the Salvation Army and the Southern Baptist Convention Disaster Relief often play key roles in mass care feeding, and the Red Cross manages sheltering in many locations, but they are not always included in mass care capabilities assessments submitted by grantees. FEMA officials told us that the new methodology should naturally foster engagement between grantees and their stakeholders, which should provide a better understanding of local capabilities for sheltering and feeding. According to these officials, under the new framework, FEMA requires grantees to report the number and type of government agencies and nongovernment organizations that participated in estimating capabilities (see fig. 7). However, by not requiring that grantees solicit input from organizations that provide mass care, or that grantees name specific organizations in their submissions, FEMA may rely on capabilities assessments developed without consultation with voluntary organizations providing key mass care services. 
We found that two of the six grantees included in our review did not report participating with the Red Cross, faith-based organizations, or other VOAD groups in their 2018 assessments. An official from one of these jurisdictions confirmed that they had never reached out to voluntary organizations to take part in the assessment process, due to staff turnover and lack of time, despite relying on these organizations for providing mass care. An official from another jurisdiction said it is detrimental not to have voluntary partners’ input when preparing capabilities assessments because these partners are critical to providing mass care and play vital roles in disaster response. According to FEMA’s guidance, all organizations—not just government agencies—should be involved in preparedness efforts, and grantees should involve stakeholders throughout the process. FEMA’s guidance encourages a whole-community approach in which grantees include community stakeholders and subject-matter experts in estimating capabilities. Further, federal internal control standards emphasize the importance of designing systems for obtaining information that help an agency achieve its objectives. Without including key mass care providers when estimating capabilities and naming them in their capabilities assessments, grantees and FEMA may not collect reliable mass care capability estimates or know who to contact in response to a disaster. States and localities may not be able to efficiently allocate their own resources to areas of unmet need and may be more reliant on outside resources during disaster response, which could have implications for the allocation of federal resources. 
FEMA Does Not Have a Systematic Process to Provide Feedback to Grantees on their Mass Care Capabilities Assessments FEMA reviews grantees’ capabilities assessments using standard checklists, but does not have a systematic process for providing feedback to grantees on their submissions in order to improve the usefulness of the information in them. FEMA officials use the checklists to assess the completeness and reasonableness of the submissions. Specifically, FEMA regional officials use the checklists to look for outliers, inconsistencies, invalid information, and inputs that do not align with FEMA guidance or information that does not pass a “common sense” check. For example, one 2018 checklist we reviewed included comments from FEMA that the grantee’s capabilities assessment was only partially “complete and reasonable” because it showed no gaps for most capabilities, which might suggest that the targets it set are too low. FEMA officials told us that if the checklist identifies shortcomings in a grantee’s assessment, the regional office will send the assessment back to the grantee and communicate what needs to be changed. However, regional offices vary in their approaches to following up with grantees to obtain more information when potential issues are identified, and FEMA has not provided them with written guidance to standardize this feedback process. FEMA officials from two regional offices told us that the headquarters and regional preparedness divisions discussed follow-up protocols by phone, but they did not provide documentation that identified conditions or considerations for when to follow up with grantees or provide feedback. As a result, grantees may not receive consistent feedback from FEMA on their assessment of mass care capabilities and the information provided may remain incomplete. 
Rather than systematically providing feedback on the content of capabilities assessments, FEMA officials told us that they focus on identifying areas in which they can provide support to grantees. Their view is that communities know more about their own capabilities than the federal government does, and that it would not be appropriate to suggest major changes to the submitted assessments. Officials from one FEMA region said they view these submissions as self-assessments that are used for maintaining relationships with states and to help states better understand their capabilities and gaps. Officials also said that the FEMA regional office or the national preparedness office, or both, examine grantees’ disaster scenarios described in the assessments, the grantees’ self-assessed scores, and areas of grantee strengths and weaknesses to determine how FEMA can better support them. FEMA officials said they also phone grantees after each submission cycle to discuss challenges, including how to improve FEMA’s technical assistance and support, and how to make the process more useful for grantees. State officials we spoke to said that especially since the 2017 hurricanes, they have received more upfront guidance from FEMA than previously. Generally, FEMA’s support to grantees includes published guidance, annotated examples, technical assistance webinars, and a help desk for phone and email assistance. In 2018, FEMA also began piloting readiness visits where FEMA regional officials met with state and local grantees to discuss capability gaps identified in their assessments, according to officials. However, officials from three of the six grantees included in our review said that they did not receive key feedback from FEMA about their mass care capabilities assessments that would have been useful. 
An official in one state said it did not receive helpful feedback from FEMA prior to the 2017 hurricane season and, in particular, the official would have liked FEMA to confirm whether the state had completed its assessment correctly and completely, or if other information was needed. Officials from another state said that they did not receive any substantive feedback on their 2017 assessment. Officials from one urban area grantee said they did not receive technical feedback on areas of least readiness, and noted it would be helpful if FEMA could provide insight on the information provided in cases where the grantee had assigned a low confidence level in its capability assessment. Officials from four of the six grantees we spoke with said they would like additional clarity about the process from FEMA. For example, one state official said that understanding how FEMA uses capabilities information would have helped the grantee know how to improve its responses; get other agencies to participate more in the process; and solicit better, more tailored information from partners. This official noted that FEMA addressed this issue in 2019 by sharing more information about how it uses capabilities information. An official from another state said the state preparedness office would like input about how to obtain information from other agencies and how to assess capabilities at the local level. FEMA has an opportunity to use its review of capability assessments to improve its ability to assist with future disasters. After reviewing the 2018 submissions that used the new methodology, FEMA officials told us they are planning to develop criteria for evaluating future submissions and establish a regular process for providing feedback. By not systematically following up with grantees thus far, FEMA limits the extent to which it can build and supplement the emergency preparedness capabilities of these grantees. 
According to FEMA, it routinely analyzes capabilities assessment information for this purpose. FEMA has a strategic goal that involves supporting emergency managers in building the capacity to self-evaluate, monitoring the completion of improvement actions, and sharing insights. Providing feedback to grantees, including on the effective use of capability assessments as well as potential pitfalls, may help grantees develop their capability assessments and inform plans for how FEMA and the grantee will respond to disasters. Without clear protocols for providing feedback, grantees and FEMA may not possess complete, accurate, and reliable information on communities’ mass care capabilities, which will limit the effectiveness of the capability assessment process in contributing to the goal of national preparedness. Conclusions The 2017 hurricane season presented unprecedented challenges for mass care service providers, and for survivors in Florida, Puerto Rico, Texas, and the U.S. Virgin Islands. While many partners coordinated extensively on the mass care response to the 2017 hurricanes, unmet needs in sheltering, feeding, and supply distribution should spur FEMA and the Red Cross to consider the sufficiency of current agreements, especially with state and local governments. In particular, the 2017 hurricanes highlighted the importance of state and local governments understanding the services that mass care providers can deliver, particularly when disasters are severe or overlapping. Without FEMA providing more targeted guidance to help states and localities develop specific written agreements with voluntary organizations providing mass care services, expectations for what these organizations can provide may be unclear, putting disaster victims at risk. 
Moreover, without proactively considering the roles and responsibilities that the federal disaster framework establishes for agencies and organizations coordinating mass care, DHS lacks assurance that responsibilities are assigned to the entities best suited to carry them out. In addition, mass care coordination efforts during the 2017 hurricane season illustrated the importance of appropriately sharing information about capabilities and resources as part of advance preparation. During a disaster, local community groups are often the most informed about where needs exist, but also may not be connected with established mass care partners. Further leveraging community groups could prove vital for meeting mass care needs in a large-scale disaster, especially for the most vulnerable populations. FEMA does not explicitly require grantees to involve key mass care providers in their capabilities assessments. This may make it difficult for grantees to be well informed as to what they are actually capable of delivering locally. Further, FEMA has not documented a consistent, systematic approach to following up with partner governments on their reporting of mass care capabilities, while some grantees have said that additional feedback would be useful for preparedness and response efforts. As a result, some grantees may be ill-prepared to meet the mass care needs of the public during future disasters. Recommendations for Executive Action We are making a total of six recommendations, including the following recommendation to the Secretary of Homeland Security: To strengthen the mass care response to future disasters, the Secretary of Homeland Security should direct FEMA to periodically review the current structure of ESF-6 leadership roles and responsibilities for coordinating mass care. 
(Recommendation 1) In addition, we are making the following four recommendations to the FEMA Administrator: To better clarify what mass care services voluntary organizations can provide, especially for severe or overlapping hurricanes, FEMA should strengthen its guidance to state and local governments to emphasize the importance of clearly defining roles and responsibilities related to mass care when state and local governments develop written agreements with partner organizations. This could include creating a guidance document or memo that calls attention to the issue and brings together existing resources, such as the Multi-Agency Feeding Plan Template and training materials, in a comprehensive and accessible manner. (Recommendation 2) To ensure assistance reaches all survivors, FEMA should develop mechanisms for the agency and its partners to leverage local community groups, such as conducting regular outreach to communicate and share aggregate information with these groups. (Recommendation 3) To ensure more accurate mass care capability assessments, FEMA should require grantees to solicit capabilities information from key mass care service-delivery providers in making capability estimates and identify these providers in their submissions. (Recommendation 4) To build the emergency preparedness capabilities of grantees, FEMA should develop systematic, documented protocols to determine the conditions under which it will follow up and provide feedback to grantees about mass care capability assessments. (Recommendation 5) We are also making the following recommendation to the American Red Cross: To ensure assistance reaches all survivors, Red Cross should develop mechanisms for it and its partners to leverage local community groups, such as conducting regular outreach to communicate and regularly share aggregate information with these groups. 
(Recommendation 6) Agency Comments, Third-Party Views, and Our Evaluation We provided a draft of this report to DHS and the American Red Cross (Red Cross) for review and comment. DHS and American Red Cross provided written comments, which are reproduced in appendices II and III, and described below. In addition to its formal letter, DHS provided technical comments, which we incorporated as appropriate. We also provided relevant excerpts of the draft report to third parties, such as state and local government agencies and voluntary organizations we interviewed. These third parties provided technical comments, which we incorporated as appropriate. In its formal letter, DHS concurred with four of our recommendations and did not concur with one recommendation. Specifically, DHS and FEMA did not concur with our recommendation that FEMA should require grantees to include key mass care service-delivery providers in making capability estimates and identify these providers in their submissions. The letter noted the importance of involving stakeholders and subject matter experts at multiple levels of government and across sectors in order to develop complete and accurate assessments. However, DHS and FEMA said that requiring communities to include the key mass care providers in capabilities assessments is not the most effective approach for achieving this outcome. Because grantees cannot control which partners participate, DHS and FEMA said implementing this recommendation would increase the burden on grantees and could put certain communities at a disadvantage. In addition, DHS and FEMA said that because capabilities assessments are not limited to mass care, such a requirement may have unintended consequences for other partners. Instead, the letter stated that FEMA plans to continue working with the mass care community to identify the best solution, including encouraging collaboration at all levels of government. We modified our recommendation to address their concern. 
Specifically, we clarified that FEMA should require grantees to solicit information from key mass care partners and to identify these partners in their submission. This change acknowledges that grantees cannot compel partners to participate, but they can, at a minimum, invite such partners to participate in the process. We continue to believe that grantees should be required to make an effort to include mass care providers in developing their mass care capability assessments, as this is vital for developing high quality assessments. FEMA has emphasized the importance of having an active relationship and ongoing communication with key partners before disasters strike. In its Strategic Plan, FEMA states that pre-disaster coordination and communication among partners is critical to improve response and recovery outcomes. Thus, we do not believe it would be an undue burden to reach out to such partners as part of the capability assessment process. With regard to the remaining recommendations, DHS and FEMA described steps they have taken or plan to take to address the issues raised. While DHS concurred with recommendation 1 to direct FEMA to periodically review the ESF-6 leadership roles and responsibilities, the department considers this issue to be resolved because FEMA routinely conducts after-action reports and recently established a working group focused on performance metrics and corrective actions. We agree that these actions are important parts of effectively overseeing and evaluating ESF activities and results. While these efforts may address the responsibilities of ESF agencies, they may overlook the overall leadership roles of ESF agencies. In order to fully implement the recommendation, DHS and FEMA would also need to establish a process for reviewing the structure of ESF leadership roles on a regular basis. 
In concurring with recommendation 3, DHS and FEMA detailed several approaches they use to connect with local resources, including collaborating with VOAD groups at national, state, and local levels, and indicated that they consider this recommendation already implemented. Given the information gathered from several unaffiliated organizations in areas affected by the 2017 disasters, it is clear there is more work to be done in terms of sharing critical information about mass care needs and resources. Therefore, we continue to encourage FEMA to develop additional mechanisms to enhance outreach to organizations that may not be aware of existing approaches such as collaboration with the VOAD groups. Red Cross agreed with our recommendation to leverage local community groups through outreach and information-sharing. Red Cross noted several ongoing activities to engage such community groups and said the organization intends to continue expanding outreach, data-sharing, and engagement initiatives. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, American Red Cross, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Appendix I: National Response Framework Emergency Support Function #6 (June 2016 version) Agencies and Responsibilities Department of Homeland Security - Federal Emergency Management Agency Department of Homeland Security - Federal Emergency Management Agency Support Agencies with Roles Directly Related to Mass Care (Feeding, Sheltering, Supply Distribution, and Family Reunification): Corporation for National and Community Service Department of Defense/U.S. Army Corps of Engineers Department of Health and Human Services Department of Homeland Security Department of Veterans Affairs National Center for Missing & Exploited Children National Voluntary Organizations Active in Disaster (National VOAD) Appendix II: Comments from the Department of Homeland Security Appendix III: Comments from the American Red Cross Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Scott Spicer (Assistant Director), Amy Moran Lowe (Analyst-in-Charge), Grace Cho, and Michael Walton made key contributions to this report. Also contributing to this report were Joel Aldape, Aditi Archer, Susan Aschoff, James E. Bennett, Deirdre Gleeson Brown, Alicia Cackley, Sarah Cornetto, Elizabeth Curda, Chris Currie, Kelly DeMots, Erin Guinn-Villareal, Camille Henley, Denton Herring, Sara Schibanoff Kelly, James Lawson, Matthew T. Lowney, Sheila R. McCoy, Jean McSween, Amanda R. Parker, Sara Pelton, Brenda Rabinowitz, Michelle Sager, Brian Schwartz, Almeta Spencer, Manuel Valverde, Jr., and Su Jin Yon.
Why GAO Did This Study Three catastrophic hurricanes affected more than 28 million people living in Texas, Florida, Puerto Rico, and the U.S. Virgin Islands in 2017. Hurricanes Harvey, Irma, and Maria—which all made landfall within four weeks—caused a combined $265 billion in damage, and led to unprecedented demands for food and shelter, according to FEMA. FEMA and the Red Cross are the primary agencies responsible for coordinating mass care under the federal disaster response framework. GAO was asked to review their efforts. This report examines (1) FEMA's and the Red Cross' coordination of mass care in response to the 2017 hurricanes, and (2) FEMA's support and use of assessments of mass care capabilities for the 2017 hurricanes. GAO reviewed relevant federal laws, federal frameworks, and written agreements between federal, state, or local governments and various voluntary organizations providing mass care services. GAO also interviewed state, territorial, local, and voluntary organization officials in Florida, Puerto Rico, Texas, and the U.S. Virgin Islands; as well as officials from Red Cross, FEMA, other relevant federal agencies, and voluntary organizations. What GAO Found Following the three major U.S. hurricanes in 2017, disaster relief efforts of the Federal Emergency Management Agency (FEMA) and the American Red Cross (Red Cross) benefitted from locating key partners in the same place. In-person coordination was critical to maintaining communication in Puerto Rico and the U.S. Virgin Islands given the prolonged power outages and damage to public structures (see photo). However, some needs related to mass care—such as shelter, food, and supply distribution—were unmet. For example, local officials in Texas said flooded roads prevented trucks from delivering supplies. Providers encountered challenges in part because state and local agreements with voluntary organizations did not always clearly detail what mass care services could be provided. 
Additionally, FEMA guidance and training materials do not explicitly encourage states and localities to include in their written agreements the specific assistance each agency or organization can provide. This limits the benefits of mass care coordination and may put disaster victims at risk. State, territorial, and local grantees of federal disaster preparedness grants are required to regularly submit information on their capabilities to FEMA, and FEMA has provided related guidance and technical assistance. However, the information some grantees provided to FEMA was not specific enough to aid its response in 2017. Moreover, FEMA does not require grantees to specify the organizations providing mass care services in their capabilities assessments. Also, FEMA does not have systematic protocols for providing feedback to grantees to improve their assessments. These limitations hinder FEMA's efforts to strengthen emergency preparedness. What GAO Recommends GAO is making six recommendations, including that FEMA emphasize the importance of defining roles and responsibilities in its guidance to states and localities, require them to solicit information from key mass care providers in assessing capabilities, and develop protocols for providing feedback to grantees on capability assessments. FEMA agreed with all but one of GAO's recommendations; GAO maintains its recommendations are valid.
gao_GAO-19-398T
Background Most private employers subject to Title VII of the Civil Rights Act of 1964 with 100 or more employees, and all federal contractors who have 50 or more employees and meet certain other requirements, must submit data to the EEOC on the racial/ethnic and gender characteristics of employees by occupations for a range of industries, including financial services. Employers are required to submit these data to EEOC every year using the EEO-1 report. EEOC requires employers to use the North American Industry Classification System to classify their industry. Under this system, the financial services industry includes the following five sectors: Credit intermediation and related activities (banks and other credit institutions), which include commercial banks, thrifts, and credit unions; Securities and other activities, which include firms that bring together buyers and sellers of securities and commodities and offer financial advice; Insurance firms and agents that provide protection against financial risks to policyholders; Funds and trusts, which include investment trusts and holding companies; and Monetary authorities, including central banks. Beginning in 2007, EEOC changed its requirements for reporting data on managers. Specifically, employers were required to report separately on senior-level management positions rather than combining data on senior-level managers with data for first- and mid-level managers, as had been the practice until 2007. Employers are required to review EEOC guidance describing the two management positions and determine how their firm’s job positions fit into these classifications. In a January 2005 report, we identified a set of nine leading practices that should be considered when an organization is developing and implementing diversity management. 
They are (1) commitment to diversity as demonstrated and communicated by an organization’s top leadership; (2) the inclusion of diversity management in an organization’s strategic plan; (3) diversity linked to performance, making the case that a more diverse and inclusive work environment could help improve productivity and individual and organizational performance; (4) measurement of the impact of various aspects of a diversity program; (5) management accountability for the progress of diversity initiatives; (6) succession planning; (7) recruitment; (8) employee involvement in an organization’s diversity management; and (9) training for management and staff about diversity management. In 2017, we reported that industry representatives confirmed that these nine practices are still relevant. Management-Level Diversity Trends in the Financial Services Industry Showed Little or No Increase from 2007 through 2015 Since 2007, Management-Level Representation Increased Marginally for Minorities and Remained Unchanged for Women As we reported in November 2017, at the overall management level, representation of minorities in the financial services industry increased from 2007 through 2015, though representation varied by individual minority groups (see fig. 1). Specifically, minorities’ representation in overall management positions increased by 3.7 percentage points. Asians had the largest gains since 2007, increasing their representation among managers from 5.4 percent to 7.7 percent. Hispanics made smaller gains; their representation among managers increased from 4.8 percent to 5.5 percent. In contrast, the proportion of African-Americans in management positions decreased from 6.5 percent to 6.3 percent. Representation of minorities also increased between different levels of management from 2007 through 2015 (see fig. 2). Minority representation among first- and mid-level managers increased by 3.7 percentage points. 
In contrast, representation of minorities among senior-level management increased at a slower pace during this period (1.7 percentage points). Minority representation among senior-level managers remained considerably lower than among first- and mid-level managers. Among first- and mid-level managers, representation of Asians experienced the largest increase from 2007 through 2015 (2.6 percentage points). Hispanic representation increased by less than 1 percentage point, while African-American representation decreased slightly by 0.3 percentage point. In addition, among senior-level managers, representation of each racial and ethnic group changed by less than 1 percentage point. We also reported in November 2017 that representation of women at the overall management level had generally remained unchanged. From 2007 through 2015, women represented about 45 percent of overall management. Representation of each racial and ethnic group varied by gender during this time period. For example, among minority women, African-American women consistently had the highest representation in overall management (about 4 percent of managers per year). Among minority men, Asian men consistently had the highest representation in overall management (3.1 percent to 4.6 percent of all managers). The proportion of men and women within various levels of management remained unchanged from 2007 through 2015, though there were some increases in the representation of both minority women and minority men. During this timeframe, women represented around 48 percent of first- and mid-level managers and about 29 percent of senior-level managers. Among first- and mid-level management positions, the representation of minority women increased by 1.6 percentage points and the representation of minority men increased by 2.2 percentage points (see fig. 3). 
Among senior-level management positions, representation of minority women and minority men increased by smaller amounts (0.3 percentage points and 1.5 percentage points, respectively). Certain Financial Sectors Are More Diverse Than Others In November 2017, we reported that management-level diversity varied across sectors within the financial services industry. Minorities’ representation in overall management increased in all four sectors of the financial services industry (see fig. 4). For example, representation of minorities increased by 3.1 percentage points in the banks and other credit institutions sector and by 4.3 percentage points in the funds and trusts sector. Also, the representation of minorities in overall management was consistently the greatest in the banks and other credit institutions sector and lowest in the insurance sector. The representation of women in overall management also varied by financial services sector (see fig. 5). The insurance sector consistently had the highest proportion of women in management positions, followed by banks and other credit institutions. The proportion of women in management decreased in each sector except for the insurance sector, where it increased by 1.9 percentage points from 47.7 percent to 49.6 percent. Management-Level Representation of Minorities Increased with Firm Size Our November 2017 report found that the representation of minorities in overall management positions increased as firm size (number of employees) increased, whereas the representation of women in management generally remained the same across firm size. More specifically, in 2007, the representation of minorities in overall management was nearly 5 percentage points greater in firms with 5,000 or more employees compared to firms with 100–249 employees. By comparison, in 2015, the representation of minorities in overall management was about 6 percentage points greater in firms with 5,000 or more employees compared to firms with 100–249 employees. 
Across firms of different sizes, the representation of women in management positions in 2015 was generally the same as it was in 2007.

Financial Services Sector Trends Have Similarities and Differences Compared to Professional Services and Overall Private Sectors

Our November 2017 report found that from 2007 through 2015, representation of minorities in all levels of management increased in the financial services sector, the professional services sector, and the overall private sector. However, among first- and mid-level managers, representation of minorities increased at a lower rate in the financial services sector during this time period (3.7 percentage points) than in the professional services sector (7.5 percentage points) and slightly lower than in the overall private sector (3.8 percentage points). In addition, the financial services sector generally had a greater proportion of women in management compared to the overall private sector and professional services sector. For example, women represented 36.7 percent and 38.2 percent of first- and mid-level managers in the professional services sector and overall private sector, respectively, in 2015. As previously mentioned, women represented about 48 percent of first- and mid-level managers in the financial services sector from 2007 through 2015.

External and Internal Potential Talent Pools for Financial Services Positions Are Diverse

Potential employees for the financial services industry can come from a range of academic and professional backgrounds. Financial firm representatives we spoke to for our November 2017 report told us that undergraduate or graduate degrees are an important consideration for employment. Some firm representatives also told us that while graduates with Master of Business Administration (MBA) degrees are an important pool of talent, firms seek students with a variety of degrees.
We also found that from 2011 through 2015, about one-third of the external pool of potential talent for the financial services industry—that is, those obtaining undergraduate or graduate degrees—were racial/ethnic minorities (see fig. 6). Additionally, rates of attainment of bachelor’s, master’s, and MBA degrees by racial/ethnic minorities all increased during this time period. For example, minorities’ representation among those who attained an MBA increased from 35.6 to 39.2 percent. Furthermore, from 2011 through 2015, minority women consistently earned a greater proportion of master’s and MBA degrees compared to minority men. Additionally, we found that from 2011 through 2015, a majority of those obtaining undergraduate or graduate degrees have been women (see fig. 7). For example, women consistently earned about 58 percent of bachelor’s degrees, just over 60 percent of master’s degrees, and about 45 percent of MBA degrees during this time period. As we reported in November 2017, the internal pool of potential talent for the financial services industry is known as the “internal pipeline” of staff that could potentially move into management positions. There are two nonmanagement job categories in the financial services sector that are considered to be part of the internal pipeline: professional and sales positions. From 2007 through 2015, EEOC data show that minorities’ representation in professional and sales positions had changed over time, but had generally been greater than minorities’ representation in overall management positions. Similarly, EEOC data over the same timeframe show that representation of women in professional positions in the financial services industry had generally been greater than women’s representation in overall management. For example, from 2007 through 2015, women consistently represented about 50 percent of all employees in professional positions and about 45 percent of overall management. 
The percentage of women in sales positions in the financial industry had generally been lower, at about 40 percent.

Financial Services Firms and Others Described Workforce Diversity Challenges and Practices to Address Them

Representatives from financial services firms and organizations that advocate for women or racial/ethnic minorities who we spoke to for our November 2017 report described a variety of challenges to recruiting a diverse workforce for the financial services sector. These challenges included negative perceptions of the financial services industry that might discourage potential candidates and a lack of awareness of career paths in the industry. Research we reviewed and representatives we spoke with identified several practices believed or found to be effective for recruiting women and racial/ethnic minorities, which included:

Recruiting students from a broad group of schools and academic disciplines. Representatives from three firms stated that they were increasingly hiring and interested in recruiting students from a variety of academic disciplines, such as liberal arts or science and technology. For example, representatives from one firm said that they were interested in candidates with critical thinking skills, and that technical skills could be taught to new employees. Additionally, representatives from several firms noted the importance of recruiting at a broad group of schools, not just a small number of elite universities.

Offering programs to increase awareness of careers in financial services. Several representatives of financial firms told us that they had established relationships with high school students to expose diverse students to the financial services field. For example, representatives from one firm described a program that pairs high school students with a mentor from the firm.
Additionally, a 2016 consulting firm report on women in financial services organizations in 32 countries found that a majority of asset managers who were interviewed thought it was important for financial services firms to educate students about careers in financial services. Financial services firms and other sources also noted challenges to retaining women and racial/ethnic minorities. For example, some representatives of financial firms noted that employee resistance, particularly from middle managers, poses a challenge to diversity efforts. In addition, officials from some organizations we interviewed noted that unconscious bias can negatively affect women and minorities. As we noted in our November 2017 report, reports on diversity and representatives from financial services firms and other stakeholders identified certain practices that may help improve the retention of women and racial/ethnic minorities, including:

Establishing management-level accountability. Representatives from three financial services firms told us that management should be held accountable for workforce diversity goals. For example, two representatives discussed the use of a “diversity scorecard,” which is a set of objectives and measures derived from a firm’s overall business strategy and linked to its diversity strategy. Additionally, one firm representative noted that tying senior managers’ compensation to diversity goals had been an effective practice for retaining women and minorities. Researchers have noted that efforts to establish organizational responsibility for diversity have led to the broadest increases in managerial diversity.

Assessing data on workforce diversity. Financial services firms and organizations we talked to generally agreed that assessing demographic data to understand a firm’s diversity is a useful practice. All of the financial services firms we interviewed agreed on the importance of analyzing employee data.
Several firms stated that it is important for organizations to understand their progress on workforce diversity and, if data trends indicate problems, such as retention issues, they can then take steps to address them. Representatives of firms and organizations that advocate for diversity differed on the benefits of making demographic data public. Representatives of one organization said requiring businesses to be transparent about their workforce data creates incentives to improve the diversity of their workforce. However, representatives of two financial firms expressed concerns that publicly disclosing firm-level employee characteristics would not be beneficial to businesses. For example, one representative noted that publicly disclosing that firms are not diverse could damage their reputation and make improvement of workforce diversity more difficult. In closing, I would like to thank you for the opportunity to discuss trends in management-level diversity in the financial services industry. I look forward to working with this subcommittee on these important issues. Chairwoman Beatty, Ranking Member Wagner, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

For further information on this testimony, please contact Daniel Garcia-Diaz at (202) 512-8678 or GarciaDiazD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Lisa Moore (Assistant Director), Christine Ramos (Analyst in Charge), Kay Kuhlman, Jill Lacey, Tovah Rom, Jena Sinkfield, and Tyler Spunaugle.

Related GAO Products

Financial Services Industry: Trends in Management Representation of Minorities and Women and Diversity Practices, 2007—2015. GAO-18-64. Washington, D.C.: November 8, 2017.
Investment Management: Key Practices Could Provide More Options for Federal Entities and Opportunities for Minority- and Women-Owned Asset Managers. GAO-17-726. Washington, D.C.: September 13, 2017. Corporate Boards: Strategies to Address Representation of Women Include Federal Disclosure Requirements. GAO-16-30. Washington, D.C.: December 3, 2015. Federal Home Loan Banks: Information on Governance Changes, Board Diversity, and Community Lending. GAO-15-435. Washington, D.C.: May 12, 2015. Diversity Management: Trends and Practices in the Financial Services Industry and Agencies after the Recent Financial Crisis. GAO-13-238. Washington, D.C.: April 16, 2013. Federal Reserve Bank Governance: Opportunities Exist to Broaden Director Recruitment Efforts and Increase Transparency. GAO-12-18. Washington, D.C.: October 19, 2011. Financial Services Industry: Overall Trends in Management-Level Diversity and Diversity Initiatives, 1994—2008. GAO-10-736T. Washington, D.C.: May 12, 2010. Financial Services Industry: Overall Trends in Management-Level Diversity and Diversity Initiatives, 1993—2004. GAO-06-617. Washington, D.C.: June 1, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The financial services industry is a major source of employment that affects the economic well-being of its customers and the country as a whole. As the makeup of the U.S. workforce continues to diversify, many private sector organizations, including those in the financial services industry, have recognized the importance of recruiting and retaining minorities and women in key positions to improve business or organizational performance and better meet the needs of a diverse customer base. However, questions remain about the diversity of the workforce in the financial services industry. This statement is based on GAO's November 2017 report on changes in management-level diversity and diversity practices in the financial services industry. This statement summarizes (1) trends in management-level diversity in the financial services industry, (2) trends in diversity among potential talent pools, and (3) challenges financial services firms identified in trying to increase workforce diversity and practices they have used to address those challenges.

What GAO Found

In November 2017, GAO reported that overall management representation in the financial services industry increased marginally for minorities and remained unchanged for women from 2007 to 2015. Similar trends also occurred at the senior management level of these firms. For example, women represented about 29 percent of senior-level managers throughout this time period. As shown below, representation of minorities in senior management increased slightly, but each racial/ethnic group changed by less than 1 percentage point. The diversity of overall management also varied across the different sectors of the financial services industry. For example, the banking sector consistently had the greatest representation of minorities in overall management, whereas the insurance sector consistently had the highest proportion of women in overall management.
As GAO reported in November 2017, potential employees for the financial services industry, including those that could become managers, come from external and internal pools that are diverse. For example, the external pool included those with undergraduate or graduate degrees, such as a Master of Business Administration. In 2015, one-third of the external pool were minorities and around 60 percent were women. The internal talent pool for potential managers included those already in professional positions. In 2015, about 28 percent of professional positions in financial services were held by minorities and just over half were held by women. Representatives of financial services firms and other stakeholders GAO spoke to for its November 2017 report described challenges to recruiting and retaining members of racial/ethnic minority groups and women. They also identified practices that could help address those challenges. For example, representatives from several firms noted that an effective practice is to recruit and hire students from a broad group of schools and academic disciplines. Some firms also described establishing management-level accountability to achieve workforce diversity goals. Firm representatives and other stakeholders agreed that it is important for firms to assess data on the diversity of their employees but varied in their views on whether such information should be shared publicly.
Background

History and Purpose of Great Lakes Pilotage Act

The Great Lakes Pilotage Act of 1960 established the system of compulsory pilotage on the Great Lakes. Senate committee reports accompanying the legislation indicate pilotage requirements in the Great Lakes were established because they were viewed as essential to helping ensure maritime safety. The committees also recognized that international coordination between the United States and Canada would be required at a federal level and the act specifically precludes any state, municipality, or local authority from regulating any aspect of pilotage in the waters of the Great Lakes-Seaway.

Overview of the Great Lakes Pilotage System

All oceangoing commercial vessels are required to use U.S. or Canadian registered pilots during their transit through regulated waters of the Great Lakes-Seaway. Generally, these vessels are assigned a U.S. or Canadian pilot depending on (1) the order in which they transit a particular area of the Great Lakes-Seaway and (2) their destination port(s). Vessels do not choose which pilot they receive. The U.S. waters of the Great Lakes-Seaway are divided into three pilotage districts, each operated by an association of independent pilots certified by the Coast Guard (see figure 1). The registered pilots only operate within their designated district and do not cross district boundaries. If a vessel needs to cross a district boundary to reach the next port, there will be a change of registered pilots at predetermined locations. Each pilotage district is further divided into “designated” and “undesignated” areas. Designated areas of the Great Lakes-Seaway include areas that are generally more challenging to navigate and require pilots to be fully engaged in the navigation of vessels in their charge at all times.
In undesignated areas, which are generally open bodies of water, pilots are required to be “on board and available to direct the navigation of the vessel at the discretion of and subject to the customary authority of the master.” Given the size of the Great Lakes-Seaway, and depending on the port calls planned, registered pilots can be onboard vessels for multiple days. This contrasts with marine pilot transits in most U.S. coastal waters that may be just a few miles each way. Commercial vessels transiting the Great Lakes-Seaway are also generally smaller than many of the vessels that operate at coastal ports. As a result, pilotage fees typically represent a greater proportion of the vessel costs than many larger commercial vessels operating in coastal waters. Federal Roles and Responsibilities Pursuant to the Great Lakes Pilotage Act of 1960, the Coast Guard regulates the operation of U.S. pilotage services and establishes the rates they may charge. These rates are to be established through a full rulemaking process at least every 5 years, but must be reviewed and adjusted on an annual basis. The rate-setting process currently includes a 10-step methodology generally designed to account for the estimated annual revenues needed by registered U.S. Great Lakes pilots to provide pilotage services and total vessel traffic expected in each of the three U.S. pilotage districts. (See appendix I for further details on the pilotage rate-setting methodology.) Among other regulatory roles, the U.S. Coast Guard is also responsible for developing competency standards for pilot training and issuing pilot registrations, providing oversight of the pilot associations, and determining the total number of authorized pilots operating in the U.S. waters of the Great Lakes-Seaway. For the 2019 shipping season, 54 U.S. pilots were authorized to serve the Great Lakes- Seaway. 
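The core idea of the rate-setting methodology described above (full details are in appendix I) is that hourly rates should generate the revenues pilots are estimated to need, given expected traffic. As a rough illustration only, not the Coast Guard's actual 10-step methodology, the relationship can be sketched as a simple revenue-recovery formula; all figures below are hypothetical.

```python
# Illustrative sketch: an hourly pilotage rate chosen so that projected
# traffic hours recover a district's estimated revenue needs.
# This is a simplification for exposition, not the actual Coast Guard
# rate-setting methodology; the dollar and hour figures are invented.

def hourly_rate(needed_revenue: float, projected_traffic_hours: float) -> int:
    """Return the hourly rate (nearest dollar) that recovers needed_revenue
    over the projected pilotage hours."""
    if projected_traffic_hours <= 0:
        raise ValueError("projected traffic hours must be positive")
    return round(needed_revenue / projected_traffic_hours)

# Hypothetical district: $5.1 million in estimated revenue needs
# (operating expenses, pilot compensation, infrastructure, training)
# spread over 8,500 projected pilotage hours.
rate = hourly_rate(5_100_000, 8_500)
print(rate)  # 600
```

Under this simplification, overestimating traffic hours depresses the rate and leaves pilots under-funded, while underestimating traffic inflates both the rate and realized revenues, which is why the traffic projection step is contested by stakeholders.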
The Great Lakes Pilotage Advisory Committee (GLPAC) was established in November 1998 to provide advice and make recommendations to the Coast Guard on matters relating to Great Lakes pilotage. The GLPAC, which meets at least once annually, comprises seven members: the presidents of the three U.S. Great Lakes-Seaway pilotage districts; three members representing the ports, shipping industry, and vessel operators, respectively; and one member with a finance and accounting background who is selected by unanimous vote of the other six members.

2016 Pilotage Rate Increase and Subsequent Litigation

The number of U.S. pilots in the Great Lakes-Seaway decreased from 44 in 2007 to 36 in 2014, which, according to the Coast Guard, resulted in pilot shortages and contributed to shipping delays. In 2016, the Coast Guard initiated a number of changes to its pilotage rate-setting methodology that were intended, in part, to provide sufficient pilot compensation to attract, hire, and retain appropriate numbers of qualified Great Lakes pilots. As shown in Figure 2, after continuing to increase between 2014 and 2016, hourly rates for U.S. pilotage services in 4 of the 6 pilotage areas of the Great Lakes-Seaway were reduced for the 2017 shipping season. Since 2017, they have increased by about 10 percent annually. According to the Coast Guard, these hourly rates are intended to generate the revenues needed to cover the annual operating expenses of the pilot associations; compensate working pilots; maintain infrastructure, such as pilot boats and dispatch equipment; and train new pilots. In May 2016, shipping industry stakeholders filed a complaint in the U.S. District Court for the District of Columbia contesting specific elements of the Coast Guard’s 2016 rate-setting methodology. In November 2017, the court dismissed 3 of the 5 original claims and found for the industry plaintiffs on the 2 remaining claims.
In March 2018, the court remanded the matter to the Coast Guard to address those two claims while leaving the 2016 rule in place. In November 2018, a coalition of shipping industry stakeholders filed an additional complaint challenging the underlying data and decision-making process used by the Coast Guard for determining the 2018 Great Lakes pilotage rates. This case is still pending before the court.

The Coast Guard Uses Several Mechanisms to Obtain Stakeholder Input on the Great Lakes Pilotage Program, and Stakeholders Have Raised a Variety of Issues for Consideration

The Coast Guard uses several mechanisms to obtain stakeholder input on the Great Lakes Pilotage Program, which stakeholders have used to raise a number of issues to the Coast Guard’s attention. Some of the mechanisms are more formal and include obtaining stakeholder input on proposed rule changes and at annual meetings, while other mechanisms are informal and are employed on an as-needed basis. Since 2016, shipping industry stakeholders and pilots have identified a number of issues, or suggestions, they would like to see integrated within the Great Lakes Pilotage Program. Issues identified by shipping industry stakeholders relate, in large part, to the financial impacts associated with the Coast Guard’s methodology for calculating pilotage rates, as well as other areas where enhanced transparency or oversight is suggested. Issues identified by pilots and their representatives include updating the list of “designated waters” to include areas like Great Lakes ports and addressing changes that may be needed to respond to the increasing volume and variety of vessels needing Great Lakes pilotage services, such as cruise ships.

The Coast Guard Uses Several Mechanisms for Obtaining Stakeholder Input

The Coast Guard uses several mechanisms to obtain stakeholder input on the Great Lakes Pilotage Program.
Formal mechanisms include obtaining stakeholder comments during the rulemaking process and soliciting input during annual meetings of the Great Lakes Pilotage Advisory Committee. According to the Coast Guard, additional inputs are also provided more informally during ad-hoc communications and operational coordination efforts.

Rulemaking Process

The federal rulemaking process represents a key mechanism by which the Coast Guard obtains stakeholder input regarding proposed changes to annual rates pilots may charge for services. Pursuant to the Administrative Procedure Act, the Coast Guard publishes a notice of proposed rulemaking in the Federal Register and allows a minimum of 30 days for public comment on any applicable changes to the rate-setting methodology and proposed pilotage rates. According to Coast Guard Great Lakes Pilotage Program officials, public participation is essential to the rulemaking process and they consider all comments and information received. In the final rule published to the Federal Register, the Coast Guard summarizes the nature of the public comments received on the notice of proposed rulemaking and characterizes how the comments were incorporated into the final rule, as applicable. For example, the 2018 Final Rule summarizes the comments received in eight different categories, including pilot compensation benchmarks and staffing model calculations. According to Coast Guard officials, they have historically received about five to seven comments each year. However, they received nearly 60 comments regarding the proposed rulemaking in 2016 given the broader scope of revisions and the higher rate of pilot compensation proposed in that year.

Great Lakes Pilotage Advisory Committee

As previously stated, the GLPAC is to meet at least once annually to provide advice and make recommendations to the Coast Guard on matters relating to Great Lakes pilotage.
This committee is governed by the Federal Advisory Committee Act, which calls for a published agenda, public participation, and a written transcript of the proceedings. Our review of 2017 and 2018 GLPAC meeting transcripts indicates the meetings were well-attended and provided a venue for sharing a variety of ideas and perspectives, as well as for providing specific input to the Coast Guard. In addition to the annual GLPAC meetings, Coast Guard officials also noted that GLPAC members participate in scheduled phone calls to discuss pertinent matters, such as executive orders or revised regulations, on an as-needed basis. According to the Coast Guard, since 2013 there have been up to three GLPAC meetings per year, ranging in length from 5 hours to 2 days. Coast Guard Great Lakes Pilotage Program officials also stated that GLPAC recommendations from the 2014 meeting were a key input for many of the rate-setting methodology changes implemented in 2016. Although the Coast Guard is not required to implement them, program officials commented that considerable weight is given to GLPAC-issued recommendations. At the September 2018 meeting, the Committee developed three recommendations addressing issues related to the billing dispute process and issuance of temporary registrations to applicant pilots. According to the Coast Guard, these recommendations are still being considered for future action.

Ad-Hoc Communications

Coast Guard program officials reported that they have extensive ad-hoc communications with shippers, pilot associations, and their Canadian counterparts to coordinate pilot assignments and help reduce vessel traffic delays on the Great Lakes-Seaway. These stakeholders corroborated their communications with the Coast Guard during our meetings with them.
Other venues for information sharing and stakeholder interaction identified by Coast Guard officials include visits to the pilots’ offices to perform oversight functions; meetings with shipping industry representatives and Canadian counterparts (Great Lakes Pilotage Authority) at maritime meetings and conventions; as well as interactions with Coast Guard officials from District 9 (Cleveland, OH), which is responsible for broader Coast Guard activities in the Great Lakes-Seaway. According to these Coast Guard program officials, operational coordination and routine meetings with stakeholders provide ongoing opportunities to obtain input on the Great Lakes Pilotage Program and help inform potential changes that may be needed.

Shipping Industry Stakeholders and Pilots Have Identified a Number of Issues in Recent Years

Since 2016, when the Coast Guard implemented several significant programmatic changes, shipping industry stakeholders and pilots have identified a number of issues. Collectively, these issues have been the subject of discussion during annual GLPAC meetings, documented in written comments submitted as part of the annual rulemaking process, and included in supplemental correspondence to the Coast Guard and Members of Congress.

Issues Identified by Shipping Industry Stakeholders

Issues identified by shipping industry stakeholders relate, in large part, to the financial impacts associated with the Coast Guard’s methodology for calculating pilotage rates, as well as other areas where enhanced oversight is suggested. The key issues cited by shipping industry stakeholders in recent years generally fall into four categories: (1) financial oversight and cost accounting, (2) vessel traffic estimates, (3) pilot compensation and staffing, and (4) billing and dispute resolution. Some of these issues remain the subject of ongoing litigation initiated by a coalition of shipping industry stakeholders against the U.S. Coast Guard.
(See appendix II for additional details on selected issues identified by shipping industry stakeholders, including a summary of the specific claims that are in litigation). Financial oversight and cost accounting. Since 2016, shipping industry stakeholders have cited several issues regarding the timeliness and transparency of financial information provided by the U.S. pilot associations that is used during the rulemaking process. These issues include a request for disclosure of individual pilot compensation levels, and additional clarification and transparency regarding the use of the pilot districts’ working capital funds. For example, shipping industry representatives claim that disclosure of individual pilot compensation levels would help ensure that compensation practices remain fair and are not a disincentive to attracting and retaining Great Lakes pilots. At the September 2018 GLPAC meeting, a pilots’ representative noted that this information was previously provided for District 1, but was eliminated due to concerns that the data could be used out of context. For example, this individual stated that although all pilots in his association generally receive the same rate of pay, some may obtain higher annual compensation because of additional days worked. According to Coast Guard officials, they do not collect or retain individual compensation data on pilots; however, they do review such data during visits to the pilot associations’ offices to help ensure fair compensation practices. Vessel traffic estimates. In 2016, the Coast Guard began using a 10- year rolling average of Great Lakes-Seaway vessel traffic volumes to estimate projected vessel traffic for each district in the coming year as part of its annual pilotage rate-setting calculations. According to the Coast Guard, this change was implemented to help reduce rate volatility and remedy traffic overestimates that occurred in the past, largely based on shipping industry projections. 
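The 10-year rolling average the Coast Guard adopted can be sketched in a few lines. The traffic figures below are invented for illustration, not actual Great Lakes-Seaway data; the sketch simply shows why a long rolling window lags behind a sustained upward trend in traffic.

```python
# Illustrative sketch of a 10-year rolling-average traffic estimate.
# All traffic figures are hypothetical; this is not Coast Guard data.

def rolling_average_estimate(annual_traffic: list[float], window: int = 10) -> float:
    """Estimate next season's traffic as the mean of the last `window` years."""
    recent = annual_traffic[-window:]
    return sum(recent) / len(recent)

# Ten years of hypothetical traffic hours trending upward after a
# recession-era trough.
history = [6000, 6200, 6500, 7000, 7400, 7900, 8300, 8800, 9300, 9800]
estimate = rolling_average_estimate(history)
actual = 10000  # hypothetical realized traffic the following season

print(estimate)  # 7720.0
# With steadily rising traffic, the realized season exceeds the
# rolling-average estimate by roughly 30 percent in this example.
print(round((actual - estimate) / estimate * 100, 1))  # 29.5
```

The same mechanics work in reverse: if traffic declines for several years, the rolling average overestimates the coming season, which is the smoothing effect the Coast Guard cites.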
However, given the increasing volume of vessel traffic on the Great Lakes-Seaway since the 2008-2009 recession, shipping industry stakeholders contend that the 10-year rolling average represents a significant underestimate of vessel traffic volume. For example, in the 2017 shipping season, vessel traffic in 5 of the 6 pilotage areas of the Great Lakes-Seaway exceeded the estimates (calculated using a 10-year rolling average) by over 25 percent. According to its 2018 Notice of Proposed Rulemaking, the Coast Guard noted that use of the rolling average will result in pilots taking in more revenue than projected in some years, and in other years will result in less revenue. Coast Guard officials believe that, over the long term, this methodology will help ensure infrastructure is maintained and that pilots receive adequate compensation and rest between assignments to enhance pilot retention. Shipping industry organizations challenged the Coast Guard’s use of 10 years of traffic data in the complaint filed with the U.S. District Court for the District of Columbia in November 2018, and that case is ongoing. Pilot compensation and staffing needs. The data sources and methodology used by the Coast Guard to develop a target compensation benchmark for U.S. Great Lakes pilots have been subject to ongoing disagreement among pilots and shipping industry stakeholders for several years. Since 2016, the Coast Guard has used two primary data sources as a basis for comparison—the average compensation of Canadian Great Lakes-Seaway pilots, and compensation data for first mates on domestic Great Lakes vessels (lakers). Shipping industry stakeholders identified concerns with some of the specific adjustments made by the Coast Guard related to both of these data sources and filed complaints in 2016 and 2018 in federal court contesting the Coast Guard’s methodology. 
A related issue identified by shipping industry stakeholders concerns the number of average pilot working days the Coast Guard uses to determine the number of pilots needed each season. For example, the Coast Guard uses 270 working days as a baseline to calculate pilot compensation figures, but uses 200 working days to calculate staffing requirements so as to account for a 10-day per month rest standard for pilots. The Coast Guard states that this 10-day rest standard is not a requirement and generally does not apply during the busiest times of the season. During the busiest time, pilots generally remain available to work additional days to service the increased vessel traffic on the Great Lakes-Seaway. The 2018 complaint filed by shipping industry stakeholders includes a claim challenging the Coast Guard’s use of a 270-working day assumption, and that case is ongoing. Billing and dispute resolution. Other issues cited by shipping industry stakeholders pertain to billings from pilot associations and the Coast Guard’s dispute resolution process. The primary billing issues cited by shipping industry stakeholders since 2016 include an increase in the number of tug boats requested, as well as cases where double pilotage was employed that shipping industry officials did not believe were necessary. In the case of tug boat usage, pilot representatives acknowledged that there may have been an increase in tug boat usage, but they noted that they do not have any financial incentive to call for the use of tug boats and they only request them, in coordination with the shippers’ agents, when they deem them necessary. According to Great Lakes Pilotage Program officials, the Coast Guard routinely reviews inquiries from shippers on this issue, but noted that decisions to use tug boats remain safety decisions that are made between the vessel operators and the Great Lakes pilots. 
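Returning to the working-day assumptions discussed above, the staffing calculation turns on which divisor is used: 270 working days per pilot for compensation figures versus 200 working days (reflecting the 10-day-per-month rest standard) for headcount. A minimal hypothetical sketch, with invented demand figures:

```python
# Illustrative sketch of how the working-day divisor drives the number
# of pilots needed. The 200- and 270-day figures come from the
# assumptions described in the text; the demand figure is invented.

import math

def pilots_needed(projected_pilot_days: int, days_per_pilot: int) -> int:
    """Round required headcount up, since pilots come in whole numbers."""
    return math.ceil(projected_pilot_days / days_per_pilot)

demand = 2_150  # hypothetical pilot-days of demand for a district

# Staffing divisor (200 days, accounting for rest) vs. the
# compensation-baseline divisor (270 days):
print(pilots_needed(demand, 200))  # 11
print(pilots_needed(demand, 270))  # 8
```

The gap between the two results (11 versus 8 pilots in this invented example) illustrates why the choice of divisor is contested: a smaller working-day assumption yields a larger required headcount, and therefore a larger total compensation pool to recover through rates.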
In contrast, authorizations for double pilotage are provided on a case-by-case basis by the Director of the Great Lakes Pilotage Program. According to the Coast Guard, there were instances in which pilot associations charged for double pilotage without obtaining authorization from the Director, and in such instances the Coast Guard has ruled in favor of vessel operators in the resulting billing disputes. Both of these issues, along with reasonable time frames for filing billing disputes, were discussed at the September 2018 GLPAC meeting. According to Great Lakes Pilotage Program officials, some disputes were filed after an extended period of time had elapsed, making the issues more difficult to adjudicate. For this reason, the Coast Guard reported that it is considering introducing a maximum amount of time allowable for vessel operators to initiate a billing dispute, and corresponding time frames for pilot associations and the Coast Guard to respond and adjudicate, respectively.

Issues Identified by Great Lakes Pilots and Their Representatives

Issues raised by Great Lakes pilots and their representatives generally fall into the following categories: (1) recognition of the pilots’ unique qualifications and role, (2) review of “designated waters,” and (3) review of protocols for vessel priorities.

Recognition of pilots’ unique qualifications and role: Representatives of the U.S. Great Lakes pilots state that the shipping industry remains overly focused on pilotage costs and may fail to recognize the unique qualifications that registered Great Lakes pilots possess and the fundamental public interest the pilots serve by ensuring the safety of vessel navigation and environmental protection on the Great Lakes-Seaway. The pilots noted that, in addition to the often challenging weather conditions they face, they also serve a security role in that they may be the only U.S. citizen on board to provide situational awareness to U.S.
authorities in the event of any suspicious activities, given that foreign vessels in the Great Lakes-Seaway can travel close to major infrastructure and U.S. cities. The pilots also stated that it can be easy for the shipping industry to select individual routes and billings to make a case that U.S. pilots charge significantly more than their Canadian counterparts, but they contend that this is not an accurate picture of actual system-wide costs.

Review of designated waters: Great Lakes pilots commented that “designated water” determinations have not been reviewed for over 50 years and should be reassessed. In particular, pilots note that increases in the volume and variety of vessels, as well as expanded port infrastructure on the Great Lakes-Seaway since establishment of the Great Lakes Pilotage Program in 1960, warrant the consideration of additional areas as “designated waters,” which are generally more challenging to navigate and require registered pilots to be in full navigational control of the vessels at all times as they transit these areas. For example, pilots contend that the Straits of Mackinac and all ports on the Great Lakes-Seaway should be considered designated waters. Coast Guard officials reported that it is their understanding that masters are already relying on pilots to direct navigation in waters such as the Straits of Mackinac. Additionally, the officials stated that the Coast Guard does not have the authority to make these designation changes through regulation; rather, such revisions require a presidential declaration.

Review of protocols for vessel priorities: Great Lakes pilots also commented that increases in the volume and variety of vessel traffic on the Great Lakes-Seaway in recent years may necessitate a review of the first-come, first-served standard for assigning pilots to vessels.
For example, the pilots note that plans for increasing the volume of cruise ships on the Great Lakes-Seaway may require adjustments to the priority process for assigning pilots, given that cruise ships are generally on fixed itineraries and tight timelines. This issue was discussed at the 2018 GLPAC meeting and is the subject of ongoing discussions among the Coast Guard and Great Lakes-Seaway stakeholders.

Stakeholder-Identified Alternatives to the Current Structure and Governance of the Great Lakes Pilotage System Entail Potential Tradeoffs

Some shipping industry stakeholders, and a recent report commissioned by the Conference of Great Lakes and St. Lawrence Governors and Premiers, have suggested that it is time to evaluate potential governance alternatives to help ensure the Great Lakes pilotage system is efficient, cost-effective, and better serves the needs of the maritime shipping industry and the public. Some of the proposed alternatives include changes that could be implemented within the existing governance system, such as the consolidation of the three U.S. pilotage districts and a review of some pilotage requirements. Other changes, such as transferring the pilotage rate-setting function from the Coast Guard to another entity, would entail more sweeping reforms and require statutory changes. Finally, some proposals, such as the introduction of competitive pilotage services, would represent an even more significant departure from the existing model of Great Lakes pilotage, which consists of federal oversight and economic regulation of independent pilot associations and is known as a regulated monopoly.

District Consolidation and Review of Some Pilotage Requirements

District Consolidation

Some shipping industry stakeholders and the report commissioned by the Conference of Great Lakes and St. Lawrence Governors and Premiers suggest that consolidation of the three existing U.S. Great Lakes-Seaway pilotage districts might help reduce administrative costs.
According to these sources, such a consolidation could also limit the complexity associated with vessel agents and shippers interacting with multiple pilot associations over the course of a single journey on the Great Lakes-Seaway. Apart from consolidating all three of the existing districts into one, industry stakeholders did not identify any other proposed alternatives for changing the existing district boundaries. According to representatives of the Great Lakes pilots, the expansive area of the Great Lakes-Seaway and natural geographic boundaries lend themselves to maintaining the three pilot associations. The pilot representatives also noted that if the districts were consolidated, shippers and agents would lose some degree of the localized service currently provided by each district, such as knowledge of local conditions and transit times.

It remains unclear to what extent cost savings could be realized through consolidation of the three existing U.S. pilotage districts. According to the pilot association presidents, relatively few administrative and support staff are employed for such a large geographic area, and some perform multiple functions. Specifically, the pilots reported that, collectively, there were 23.5 administrative positions (non-pilots), consisting mostly of 8.5 seasonal dispatchers and 10 pilot boat operators. Assuming that existing pilot boat operations would generally remain consistent following district consolidation, administrative and dispatch services represent the principal source of potential cost savings. Based on our review of the Canadian Great Lakes Pilotage Association (GLPA) model, which operates a single, consolidated administrative office, it is not clear that the number of administrative staff, including dispatchers, would be reduced after consolidation of the three U.S. pilotage districts and associations.
For example, during the 2018 shipping season, the Canadian Great Lakes Pilotage Association included 21 administrative positions, of which 10 were designated as dispatchers—similar in proportion to the existing U.S. Great Lakes pilotage dispatcher distribution. It is also important to note that, even with a potential consolidation of administrative functions within one U.S. pilotage district, pilots would still be limited to operating within the geographic area where they are licensed. According to pilots and Coast Guard program officials, cross-licensing is generally not feasible for multiple waterways between districts given the extent of local specialized training and knowledge required, and it is not practiced anywhere else in the United States or the Great Lakes-Seaway.

Review of Some Pilotage Requirements

Some shipping industry stakeholders state that a broader review of Great Lakes pilotage requirements may be necessary, particularly the compulsory use of pilots in “undesignated” or open areas of the Great Lakes. According to these stakeholders, such a review is warranted given the significant technology improvements that have occurred since initial passage of the Great Lakes Pilotage Act in 1960. Any proposed changes to the existing pilotage requirements could not be implemented through Coast Guard regulatory changes and would require legislative changes or a presidential declaration. Although a significant portion of a Great Lakes-Seaway vessel transit may occur in “undesignated” open waters, the Coast Guard and pilots’ representatives cited several logistical challenges that would likely occur if pilotage requirements in these areas were revised or eliminated. For example, if a pilot was not on board a vessel in open waters, there likely would be no way to get one on board in the event of severe weather, equipment failure, or other emergency.
In addition, the officials noted that if a pilot did not remain on board the vessel for the entire transit, one would still be required to navigate the vessel in and out of each port destination. This would entail additional costs for transporting the disembarking pilot to a designated shore location and then later transporting another pilot to the vessel to navigate into port. These additional pilot transfers may require the acquisition of additional pilot boats, which are generally customized and can cost in excess of $1 million. Alternatively, each individual port could employ its own registered pilot and make the necessary infrastructure investments, including pilot boats and related dispatch equipment, but the result could be an overall increase in the number of pilots operating in the system, which could also increase pilotage costs. Finally, an increasing number of vessels that otherwise are not compelled to use pilots (e.g., domestic oil tankers) are requesting pilotage services due, in part, to requirements by insurance providers. Because of this increase in requests for pilotage services, a change in open water pilotage requirements may not result in a reduction in the number of pilots required in some areas of the Great Lakes-Seaway.

Transfer of the Pilotage Rate-Setting Function from the Coast Guard to a Different Entity

Establish a Great Lakes Pilotage Advisory Board to Assist with Rate-Setting

The report commissioned by the Conference of Great Lakes and St. Lawrence Governors and Premiers cites an opportunity for enhanced input into the governance process through the establishment of an advisory board or other oversight mechanism, such as those used commonly in state pilotage commissions nationwide.
According to the report, such a mechanism would provide for increased industry participation in the governance process beyond the consultative inputs currently available through the GLPAC and rulemaking processes, and could include responsibility for the pilotage rate-setting function. The principal advantage cited for this increased level of participation would be to better align pilotage services with user needs. Under this proposal, an advisory board would be formed and the board members would be involved in the full range of pilotage governance functions as generally provided by state pilotage commissions. These responsibilities commonly include safety oversight and related functions, such as selecting individuals for admission into the training program, overseeing the training process, issuing licenses, investigating accidents or pilot complaints, taking disciplinary actions, and establishing pilotage rates. All of these activities are current regulatory functions performed by the Coast Guard, and statutory changes would be required to designate a new pilotage regulatory body and delineate these responsibilities. Given that the stakeholders we met with generally do not advocate transferring any of the safety oversight and related regulatory functions from the Coast Guard, for the purposes of this report we focus on the potential tradeoffs of an advisory board that would assume responsibility only for the Great Lakes pilotage rate-setting function.

With regard to the rate-setting function, the introduction of an advisory board to determine pilotage rates may not resolve one of the core issues cited by both shipping industry and pilot stakeholders at the most recent GLPAC meeting, held in September 2018.
That is, no matter what entity has responsibility for pilotage rate-setting—a new advisory board or the Coast Guard—such an entity would face similar rate-setting challenges posed by the competing interests of pilots and shipping industry representatives. Further, according to a recent report reviewing the pilotage system in the state of Washington, proposed changes to pilotage rates are often evenly split between shipping industry representatives and pilot representatives, and final determinations routinely come down to committee chairpersons or independent board members, sometimes without full transparency regarding how decisions were reached. In contrast, the current GLPAC process provides for considerable input by committee members as well as stakeholder and public participation, and is documented through publicly available transcripts. We found that the existing mechanisms, coupled with the rulemaking requirements that incorporate public review and comments, represent a fairly transparent system of pilotage rate-setting as compared to the process used by some coastal states.

Establish an Independent Rate-Setting Entity

One variation used in some U.S. coastal states to help overcome the challenge of competing stakeholder interests during the pilot rate-setting process is the establishment of an independent rate-setting entity, similar to a public utility commission. In fact, one of the principal recommendations in the Washington report was to transfer the rate-setting function from the state pilotage commission to an independent utility and transportation commission in an effort to establish a more clearly defined, rigorous, and transparent process with enforceable timelines.
In many respects, we found that the Coast Guard is currently performing this independent function, as its rate-setting process includes many of the characteristics identified as best practices, such as a defined methodology, a clear data submission and review process, and the absence of any direct material interest in the outcome of the rate determinations. While individual stakeholders may not agree with the specific inputs and assumptions used by the Coast Guard, the current process is generally transparent and provides an opportunity for informed stakeholder feedback and identification of any grounds on which they can choose to take legal action.

Transfer Pilotage Rate-Setting Authority to Another Federal Entity

Another option presented by various stakeholders is to transfer pilotage rate-setting authority to another federal entity. Under this scenario, the Coast Guard would retain its jurisdiction over safety and related regulatory functions, but responsibility for pilotage rate-setting would be transferred to another federal entity. One specific entity that has been identified as a potential replacement for the Coast Guard is the Saint Lawrence Seaway Development Corporation (SLSDC). According to some stakeholders we spoke with, the SLSDC would have more of a vested interest in ensuring that pilotage rate changes consider their potential impact on the viability of commercial shipping in the Great Lakes-Seaway. SLSDC representatives declined to comment specifically on this proposal, but they cited historical precedent to indicate that if SLSDC were statutorily required to assume pilotage rate-setting responsibilities, additional staffing resources would likely be needed.
It should be recognized that shipping industry and pilotage stakeholders will continue to have vested interests in each of the rate-setting inputs and assumptions used to determine pilotage rates, and some degree of contention is likely to remain no matter which entity is responsible. In addition, pilots’ representatives previously filed a complaint regarding the transfer of pilotage rate-setting authority from the Coast Guard to the SLSDC in the 1990s, and they told us that they continue to oppose such a move. According to pilot representatives, they are concerned about a potential transfer of the pilotage rate-setting function to SLSDC given its role in trade promotion, which could potentially affect SLSDC’s ability to remain fully independent in this role. Whether the Coast Guard maintains responsibility for pilotage rate-setting or that function is transferred to another federal entity like SLSDC, the continued role of a federal entity in performing the pilotage rate-setting process would ensure that Administrative Procedure Act requirements still apply, thereby retaining transparency and providing stakeholders and the public an opportunity for review and comment. While there may be some potential for redundancy or increased administrative burden on the pilot associations if the safety oversight and pilotage rate-setting functions were split between the Coast Guard and another federal entity, a similar division of responsibilities currently exists in the handful of states that use an independent rate-setting entity, such as a public utility commission. It is the Coast Guard’s position that authorizing two federal agencies to oversee different aspects of the Great Lakes Pilotage Program could be challenging.
For example, Coast Guard officials noted that a transfer of the rate-setting function may not consider potential impacts to other authorities associated with rate setting, such as limiting the number of pilot pools; prescribing a uniform system of accounts; performing audits; determining the number of pilots to be registered; and establishing conditions for services.

Alternatives to a Regulated Monopoly of Great Lakes Pilotage

Government Employee Model

The existing model of Great Lakes pilotage, consisting of federal oversight and economic regulation of independent pilot associations, is referred to as a regulated monopoly. This model of regulating pilotage is employed almost exclusively within U.S. coastal states and is also a common method for delivering marine pilotage services worldwide. However, there is also some precedent for pilots serving as government employees. One reason this government employee model has been identified as a potential alternative for U.S.-registered pilots in the Great Lakes-Seaway is that a majority of the Canadian pilots that operate in the Great Lakes-Seaway are federal employees. Although making U.S. Great Lakes pilots federal employees could eliminate the need for the Coast Guard to provide administrative and financial oversight of independent pilots, we found that U.S. Great Lakes pilot associations provide many administrative and logistical functions, such as dispatching and pilot transfers, which would need to be assumed by the federal government under this type of alternative model. According to pilots’ representatives, one of the principal impacts of the government employee model would likely be the provision of some financial benefit to the shipping industry, given that taxpayers would potentially be assuming the cost of pilot salaries, benefits, and retirement-related costs. Additional costs to the U.S.
government would also likely be required to fund the initial procurement of existing pilot association infrastructure and assets, such as offices and pilot boats. Another factor to consider in evaluating the federal employee model is how the Coast Guard budget process may affect the future funding and operation of pilotage operations. A significant expansion of pilotage program staffing and associated resource requirements would likely pose an additional challenge to ensuring sufficient annual appropriations are obtained, given the ongoing need to balance funding and resources across the Coast Guard’s 11 statutory missions. According to representatives of the Canadian Great Lakes Pilotage Association, pilotage operations in their jurisdiction are to be financially self-supporting through pilotage tariffs, and the Canadian government does not provide an annual appropriation for this purpose. They noted that government pension benefits are also incorporated into the pilotage rates to help achieve these offsets. Similar mechanisms could potentially be used to fund the additional costs borne by the U.S. government under a federal employee pilot model.

Additional considerations associated with a government employee model include the different compensation and overtime structures, and the potential for reduced flexibility afforded to the government if fewer pilots are needed due to reduced pilotage demand. For example, according to representatives of the U.S. pilot associations, each pilot presently receives the same compensation for each working day they are available, regardless of seniority. However, the U.S. federal government routinely employs a system of graduated compensation based on years of service and may face difficulties in hiring or terminating pilot employees as pilotage demand shifts. Another approach identified within the government employee model is the use of harbor pilots.
This option would generally entail pilots working directly for an individual port or group of ports as municipal or port employees. According to one pilot representative, the key challenge with such an approach is that each individual port would require its own infrastructure and pilot boats to service incoming vessels, which could represent a substantial investment. In addition, the geography of the Great Lakes and the long transits often involved present additional hurdles associated with pilot transfers and related logistical support services, making the harbor pilot approach less feasible.

Competition for Pilotage Service Delivery

Shipping industry stakeholders have also proposed that the Coast Guard consider the introduction of some level of competition for pilotage service delivery, which would represent the most significant change to the existing model of pilotage regulation. According to shipping industry stakeholders, the introduction of competition would be intended to provide an additional incentive for pilot associations to contain costs. Some specific mechanisms identified include introducing a competitive bidding process to provide pilotage services under multi-year contracts, or allowing individual pilots or groups of pilots to compete for business from vessel operators. The concept of using some form of competitive bidding to grant multi-year contracts for pilotage service delivery is generally consistent with government cost-containment efforts. However, stakeholders we spoke with were unable to identify any pertinent examples where market competition for pilotage services is currently used within U.S. coastal states that could provide a basis for further evaluation of this model. According to the Coast Guard and pilot representatives, several features of the Great Lakes-Seaway pilotage system present challenges for potentially implementing competitive pilotage services in the Great Lakes-Seaway.
Most notably, the nature of marine pilotage requires several years of specialized training and local experience that entail significant time and investment to acquire. These requirements generally result in a limited supply of available pilots that could compete for a competitive contract in the same geographical area. This represents a potential barrier to market entry and could lead to a single, entrenched service provider, which may reduce the competitive pressure toward cost containment. Further, if registered pilots did not have the assurance of steady employment in the Great Lakes, there may be increased incentives for them to seek opportunities outside of the region, thereby reducing the overall pool of available pilots. Other mechanisms of pilotage competition, such as allowing individual pilots or pilot associations to compete for business, would represent a fundamental shift from the norms of compulsory pilotage services worldwide. As a representative of the American Pilots Association stated at the September 2018 GLPAC meeting, one of the foundations of the existing regulated monopoly system is that pilots provide services using their independent judgment to ensure marine safety and the public interest, and should not be subject to any potential financial incentive or business pressure from a vessel operator. Similar statements can be seen in Florida state statutes, which specify the need for economic regulation of marine pilotage at the state level, rather than competition in the marketplace, to better serve and protect the public health, safety, and welfare. In contrast, shipping industry stakeholders suggest that there are likely comparisons to the deregulation implemented in other industries where public safety is also of paramount concern, such as commercial aviation. However, an evaluation of models of competition used in other industries was outside the scope of our review.
An additional challenge noted by pilot representatives is that, in a competitive model, pilots may prefer to pursue customers offering more regular or profitable work rather than operate in a non-discriminate manner, as is currently the case under the existing numbered rotation system of pilotage assignment. Along these lines, research conducted by KPMG on international models of marine pilotage found that although a model “comprised of independent contractor pilots could result in theoretically more competitive rates, the combination of what appears to be relatively the same demand for pilotage services in the market, and the uniqueness of pilot skillsets have resulted in a scenario where competition is limited in reality.” The authors’ findings also suggest that, in the few cases where competitive pilotage was introduced, it was generally unsuccessful, and that absent sufficient oversight, direct competition among pilots could potentially lead to incentives to cut costs through reduced focus on safety and quality of service.

Agency Comments

In May 2019, we provided a draft of this report to the Department of Homeland Security and the Coast Guard for review and comment. The Coast Guard provided technical comments, which we incorporated into the report. We are sending copies of this report to the appropriate congressional committee, the Secretary of Homeland Security, the U.S. Coast Guard, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (206) 287-4804 or AndersonN@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Appendix I: Summary of Great Lakes Pilotage Rate-Setting Methodology

Pursuant to the Great Lakes Pilotage Act of 1960, the Coast Guard regulates pilotage for oceangoing vessels on the Great Lakes—including setting the rates for pilotage services and adjusting them on an annual basis. For the 2018 shipping season, these base pilotage rates ranged from $271 to $653 per pilot hour depending on the specific areas where pilotage service is provided. According to the Coast Guard, the three U.S. pilot associations use this revenue to cover operating expenses; compensate working pilots; maintain infrastructure, such as pilot boats, dispatch equipment, and personal pilotage units; and train new pilots. The Coast Guard uses the following 10-step methodology to calculate revenues needed for each Great Lakes pilotage association based on the estimated volume of foreign vessel traffic for the upcoming shipping season.

Step 1 – Recognize previous operating expenses. The Director of the Great Lakes Pilotage Program reviews audited operating expenses from each of the three U.S. Great Lakes pilot associations. This number forms the baseline budget amount for each association. There is a 3-year delay between the year the expenses were incurred and when they are included in the rate-setting calculation. For example, the 2019 pilotage rates are calculated using 2016 operating expenses.

Step 2 – Project operating expenses, adjusting for inflation or deflation. The Coast Guard applies 3 years of inflation adjustors to the baseline of operating expenses identified in Step 1. The inflation adjustors routinely used are from the Bureau of Labor Statistics’ Consumer Price Index.

Step 3 – Estimate the number of working pilots. The Coast Guard determines the number of working pilots that need to be compensated via collection of pilotage fees.
As part of this step, the Coast Guard also uses a “staffing model” to determine how many pilots may be needed for each district to handle expected shipping traffic at the beginning and close of the season. According to the Coast Guard, this number helps inform the Director of the Great Lakes Pilotage Program regarding how many total pilot credentials may be authorized for each district to help meet future demand.

Step 4 – Determine target pilot compensation. This step contains two phases to determine the revenue needed for pilot compensation. In the first phase, the Coast Guard determines a target “compensation benchmark” for each of the working pilots. For the 2018 shipping season, this number was derived from 2015 data provided by the American Maritime Officers Union regarding labor contracts, along with annual inflation adjustments deemed applicable by the Director. The second phase entails multiplying this compensation figure by the number of working pilots in each pilotage district and area.

Step 5 – Project working capital fund. This value is obtained by adding total operating expenses (step 2) and the total pilot compensation figure (step 4) and multiplying that sum by the preceding year’s annual rate of return for new issues of high-grade corporate securities.

Step 6 – Project needed revenue. The Director of the Great Lakes Pilotage Program adds the total values produced for operating expenses, total pilot compensation, and the working capital fund. This number, which is calculated separately for each district and area, represents the total projected revenue needed for the upcoming season.

Step 7 – Calculate initial base rates. This step consists of first calculating the 10-year vessel traffic average for each district and area. Then, the figure for needed revenue is divided by the 10-year traffic average.

Step 8 – Calculate average weighting factors by area. Since each vessel that requires a U.S. Great Lakes pilot pays a multiple of the “base rate” based on its size (ranging from 1.0 for the smallest vessels to 1.45 for the largest vessels), the Coast Guard calculates the extra revenue that has historically been produced by the weighting factor in each area.

Step 9 – Calculate revised base rates. The Coast Guard modifies the base rate to account for the extra revenue generated by the weighting factors. This is done by dividing the initial base rate by the average weighting factor to produce a revised rate.

Step 10 – Review and finalize rates. According to the Coast Guard, this step can be referred to informally as “director’s discretion” and is principally intended to help ensure that the rates meet the goals set forth in applicable law and regulation. The Coast Guard reported that no additional adjustments were included as part of this step for the 2018 Final Rule.

After the base pilotage rates are set, the Coast Guard also considers whether surcharges are necessary, such as those used to help fund the training of new pilots. This amount is calculated as a percentage of total revenue for each district, and that percentage is applied to each bill until the total amount of the surcharge is collected.

Appendix II: Further Information on Issues Identified by the Shipping Industry and Recent Litigation on Great Lakes Pilotage

Financial Oversight and Cost Accounting

Shipping industry stakeholders identified a number of issues related to improving the timeliness and transparency of the pilotage association financial information used in the pilotage rate-setting process. These include (1) addressing the 3-year time lag in incorporating pilotage expenses into the rate calculations; (2) presentation of financial information in a uniform format; (3) disclosure of individual pilot compensation data; and (4) clarifying the purpose and authorized uses of the working capital fund.

3-year time lag to incorporate pilotage expenses.
Shipping industry stakeholders suggest that the Coast Guard make an effort to reduce the 3-year time lag to incorporate pilotage expenses into the rate-setting calculations. For example, audited financial information for the 2016 shipping season is used in the development of the 2019 rulemaking. At the most recent GLPAC meeting in September 2018, Coast Guard representatives identified several reasons for this time lag, including the roughly 6 months required for an auditor to conduct an independent review of pilotage expenses and the multiple stages of federal review, which can take an additional 6 months, required for the Coast Guard to develop and publish the proposed rate in the Notice of Proposed Rulemaking each year. Pilot representatives and Coast Guard officials generally agree that shortening this lag would be preferable, but they have been unable to identify a method by which this could be achieved given the existing time frames required for the financial auditing and rulemaking processes.

Uniform format for financial reporting. Shipping industry stakeholders have requested that audited financial statements for the pilot associations be presented in a uniform format. According to an industry representative, the audited financial statements (prepared individually by each pilotage association each year after the shipping season) differ primarily due to the standard accounting practices of the associations' different organizational structures. Specifically, two pilot associations are partnerships and one is a corporation. Our review indicates that a consistent format is used by the Coast Guard and its designated independent reviewer to present summary information of applicable expenses for all three pilot associations as part of the rulemaking process.

Public reporting of individual pilot compensation. Shipping industry stakeholders contend that individual pilotage compensation levels should be disclosed to help ensure revenues are being shared equally among the associations' workforce.
According to one pilot representative, individual compensation data were previously provided for District 1 as part of audited financial statements but were eliminated because the information was being used out of context. The pilot representative noted that although all pilots in his association generally receive the same rate of pay, some may obtain higher annual compensation due to additional days worked. According to Coast Guard officials, they do not collect or retain individual compensation data on the pilots, but they do review such data during visits to the pilot association offices to help ensure fair compensation practices.

Enhanced transparency of the working capital fund. Members of the shipping industry also identified an issue related to the "working capital" component of the rate-setting process. According to these stakeholders, this fund could potentially be used to augment general revenue and compensation levels, and there is a lack of transparency regarding how these funds are being applied to fund capital improvements. This position was the basis of one of the claims included in the complaint filed by a coalition of industry stakeholders in November 2018. In that complaint, the plaintiffs claim that the Coast Guard's failure to eliminate the working capital element as a basis for additional revenue requirements, or to bound revenue raised as working capital to particular uses, is arbitrary and capricious, among other things. That case is ongoing. According to pilots' representatives, this fund is important to help fund capital improvements, particularly through the winter months, but they also recognize that additional clarity could be provided about its intended uses and potential limitations. In November 2018, the Coast Guard issued guidance to each of the pilotage associations' presidents regarding the reporting and uses of the working capital fund.
Specifically, the Coast Guard directed the associations to segregate revenues generated by this fund and place them into a separate account at least once per quarter, and it further clarified that funds from this account could be applied only toward capital projects, infrastructure improvements and maintenance, and non-recurring technology purchases necessary for providing pilotage services.

In 2016, the Coast Guard initiated changes to its rate-setting methodology regarding how it estimates projected vessel traffic for each district and the corresponding hours worked for related pilotage services. Citing a recommendation issued by the Great Lakes Pilotage Advisory Committee in 2014, the Coast Guard initially proposed using a rolling average of 5 years of historical shipping data to estimate traffic volume as part of its ratemaking calculations for the 2016 shipping season. However, based on public comments received on the 2016 Notice of Proposed Rulemaking, the Coast Guard increased this number to 10 years of historical data. According to the Coast Guard, this change was implemented to further reduce rate volatility and help remedy traffic overestimates that occurred in the past, which were largely based on industry projections. Given the increasing volume of actual Great Lakes-Seaway vessel traffic in recent years, shipping industry stakeholders contend that the 10-year rolling average used for rate-setting calculations underestimates traffic volume. Responding to the 2018 Notice of Proposed Rulemaking, industry commenters asserted that the 10-year average included a period of substantially depressed traffic volume caused by the recession in 2008-2009, which, if used to estimate future traffic volume, could result in increased pilotage rates. See Table 1 for a summary of the variance between actual traffic volumes during the 2017 Great Lakes-Seaway shipping season and the estimates calculated using a 10-year rolling average.
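To make the mechanics concrete, the 10-year rolling traffic average and its role in the base-rate division (steps 7 through 9 of the rate-setting methodology described in appendix I) can be sketched as follows. This is a minimal illustration: all dollar figures, traffic hours, and the weighting factor below are hypothetical, not actual Coast Guard data.

```python
# Hypothetical sketch of steps 7-9: divide needed revenue by the 10-year
# traffic average, then divide by the average weighting factor.

def rolling_average(traffic_history, years=10):
    """Average of the most recent `years` seasons of traffic (pilotage hours)."""
    recent = traffic_history[-years:]
    return sum(recent) / len(recent)

def base_rates(needed_revenue, traffic_history, avg_weighting_factor):
    """Return (initial_rate, revised_rate) for one district/area."""
    avg_traffic = rolling_average(traffic_history)
    initial_rate = needed_revenue / avg_traffic          # step 7
    revised_rate = initial_rate / avg_weighting_factor   # step 9
    return initial_rate, revised_rate

# Invented example: $5.2 million in needed revenue, ten seasons of traffic
# hours (including a depressed mid-series stretch), weighting factor 1.15.
traffic = [9500, 8800, 7200, 6900, 8100, 8600, 9200, 9800, 10400, 11000]
initial, revised = base_rates(5_200_000, traffic, 1.15)
```

Note how a depressed period inside the 10-year window pulls the average down and pushes the resulting hourly rate up, which is the substance of the industry's objection to including the 2008-2009 recession years.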
In the November 2018 complaint, shipping industry organizations argued that the Coast Guard's use of 10 years of traffic data, in contrast with the shorter periods used to determine expenses and manning levels, was arbitrary and capricious, among other things; that case is ongoing.

The process and sources used by the Coast Guard to develop a target compensation benchmark for Great Lakes pilots have been subject to ongoing disagreement among stakeholders. Prior to 2016, the Coast Guard used compensation data for first mates on domestic Great Lakes vessels as the basis for comparison. These data were based on labor contracts of the American Maritime Officers Union (AMOU). However, in 2016, when the AMOU determined it would no longer provide these data to the Coast Guard, program officials revised the rate-setting methodology to begin using the average compensation of Canadian vessel pilots as the primary source, along with a 10 percent adjustment that program officials believed was appropriate to reflect the different level of benefits provided to Canadian pilots as government employees. After the court found that the 10 percent adjustment to the Canadian compensation benchmark was not supported by reasoned decision-making and remanded the matter to the Coast Guard, the Coast Guard reverted to using the pre-2016 compensation data of domestic "laker" first mates for the 2018 rulemaking. However, the November 2018 complaint included a claim that the Coast Guard improperly applied an adjustment for "guaranteed overtime" to the compensation benchmarks based on additional input provided by the AMOU during the notice and comment period. This case is ongoing. Regardless of the basis used, the benchmark pilot compensation levels have not varied greatly in recent years after accounting for annual inflation adjustments: target compensation in 2016 was $326,114 and increased to $359,887 in 2019, an average annual increase of approximately 3.3 percent.
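The approximately 3.3 percent figure cited above is the compound annual growth rate implied by the two reported benchmark values, which can be checked directly:

```python
# Verify the average annual increase in the target compensation benchmark:
# $326,114 in 2016 rising to $359,887 in 2019 spans three annual steps.
comp_2016 = 326_114
comp_2019 = 359_887
years = 2019 - 2016
annual_growth = (comp_2019 / comp_2016) ** (1 / years) - 1
print(f"{annual_growth:.1%}")  # prints 3.3%
```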
One related change the Coast Guard implemented in 2016 that can also affect pilot compensation figures is the decision to calculate pilotage rates based on the actual number of working Great Lakes pilots rather than the total number authorized. For example, in 2019 there were 54 total authorized U.S.-registered Great Lakes pilots, but only 51 were actually employed and available to provide pilotage services. According to the Coast Guard, this change serves, in part, to remove any financial incentive for pilot associations to operate with fewer pilots than allowable in order to increase individual compensation levels.

The shipping industry has also identified issues regarding the number of working days the Coast Guard uses to calculate compensation figures and its application of a 10-day per month rest standard for pilots. For example, in 2016, the Coast Guard began using 200 working days per season as the basis for staffing calculations—down from 270—to allow for up to 10 days of rest per month. According to the Coast Guard, this change was made, in part, to address recommendations from the National Transportation Safety Board regarding reducing possible "pilot fatigue." However, shipping industry stakeholders have suggested that if 200 days is the benchmark for working days, it should also be used to determine pilot compensation levels. Instead, the Coast Guard multiplies the weighted daily rate derived from AMOU compensation data by 270 to calculate the target annual compensation. This issue is also the subject of a claim included in the 2018 complaint, which alleges that the Coast Guard's use of the 270-day multiplier value is arbitrary and capricious, among other things. Shipping industry stakeholders further contend that the 10-day rest standard may need to be revisited to ensure adequate pilot availability and avoid any unnecessary increases in total pilot numbers.
The Coast Guard states that this 10-day rest standard is not a requirement and generally does not apply during the busiest times of the season, when pilots would remain available to work additional days to service increased vessel traffic on the Great Lakes-Seaway.

Billing Concerns and Dispute Resolution

There is ongoing concern among shipping industry stakeholders about certain billings from pilot associations that they view as unnecessary and about the Coast Guard's dispute resolution process. The primary billing issues cited by shipping industry stakeholders since 2016 include an increase in the number of tug boats requested, as well as cases where double pilotage was employed that vessel operators did not believe was necessary. In the case of tug usage, pilot representatives generally acknowledge an increase in tug usage but respond that they have no financial incentive to call for the use of tug boats and that pilots request them, in coordination with the shippers' agents, only when they are deemed necessary. Pilot representatives at the 2018 GLPAC meeting also stated that tug boats represent additional insurance to avoid any potential collisions in an increasingly risk-averse environment. Further, they noted that the newer pilots who have come onboard in recent years may also be a contributing factor in the increase in tug usage. According to the Coast Guard, the program routinely reviews inquiries from shippers and masters on this issue, but decisions to use tug boats remain safety decisions between the master and pilot. In contrast, authorizations for double pilotage are provided on a case-by-case basis by the Director of the Great Lakes Pilotage Program as specified in regulation. In general, the Director may authorize double pilotage when aids-to-navigation have been removed due to ice and weather conditions, for dead ship tows, during adverse weather and sea conditions, or for any abnormal condition that will likely result in extended transits in designated waters.
According to the Coast Guard, there were instances in which pilot associations charged for double pilotage without obtaining authorization from the Director of the Great Lakes Pilotage Program. In such cases, the Coast Guard has ruled in favor of vessel operators with regard to billing disputes. According to Great Lakes Pilotage Program officials, if vessel operators believe a billing error was made, they should first engage directly with the respective pilot association to review the charges and rectify any mistakes. If no agreement is reached with the pilot association, the vessel operator can then appeal to the Coast Guard to conduct a further review. If the Coast Guard review determines that a chargeback is justified, the Coast Guard can issue an advisory opinion that the pilot association refund any amount not approved by the Coast Guard or reissue the bill. At the September 2018 GLPAC meeting, Coast Guard representatives noted that some billing concerns were presented after more than 2 years and did not include sufficient detail to effectively review and make an informed decision. The Coast Guard is currently working on a proposal to establish reporting timelines for presenting and making determinations on billing disputes.

Another billing concern cited by industry stakeholders at the 2018 GLPAC meeting involves the absence of limits on charges when pilots are onboard a vessel that cannot get underway due to inclement weather or other reasons. Pilot representatives point out that such delays consume pilotage resources and that the charges are needed to provide an incentive for shippers and agents to remain efficient when ordering and releasing a pilot. Shipping industry stakeholders note that a range of factors can cause a pilot to be detained onboard and contend that the charges, which can exceed $20,000 per day, are unreasonable and represent a large, unforeseen cost.
According to Coast Guard officials, they plan to continue engagement with GLPAC members on this issue, recognizing that pilot resources should be employed efficiently but also that weather and ice conditions may require pilots to remain onboard a vessel for an extended period of time at significant additional cost.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Christopher Conrad (Assistant Director), Ryan Lambert (Analyst-in-Charge), Chuck Bausell, Dominick Dale, Michele Fejfar, Eric Hauswirth, and Tracey King made key contributions to this report.
Why GAO Did This Study

The Great Lakes-St. Lawrence Seaway maritime transportation system is the longest inland navigation system in the world. In 2016, the Coast Guard implemented a number of changes, including amending its methodology for setting the rates charged to shippers for using U.S. marine pilotage services in these waters. GAO was asked to review the Coast Guard's management of the Great Lakes Pilotage Program. This report (1) describes how the Coast Guard obtains stakeholder input on the Great Lakes Pilotage Program and identifies key stakeholder issues that exist; and (2) discusses alternatives to the current structure and governance of the Great Lakes pilotage system identified by stakeholders, and the reported tradeoffs they may present. GAO reviewed applicable laws, Coast Guard rulemakings from 2016-2019, Great Lakes Pilotage Advisory Committee meeting minutes for 2017 and 2018, and issues identified by stakeholders. GAO also interviewed a range of stakeholders, including shipping industry and pilot representatives, to obtain perspectives on the Coast Guard's management of the program and any alternative governance options that may exist.

What GAO Found

The Coast Guard manages the Great Lakes Pilotage Program to implement federal requirements that any oceangoing or foreign commercial vessel entering the Great Lakes-St. Lawrence Seaway use a registered marine pilot to safely navigate the vessel through the system. The Coast Guard employs several mechanisms for communicating with stakeholders and obtaining their input on the program. These include the federal rulemaking process, meetings of the Great Lakes Pilotage Advisory Committee, and ad-hoc communications with local pilotage stakeholders. Since 2016, when the Coast Guard implemented several programmatic changes, shipping industry stakeholders and pilots have identified a number of issues that they would like to have considered for the program.
The issues cited by shipping industry stakeholders relate, in large part, to the financial impacts associated with the Coast Guard's methodology for calculating pilotage rates. The issues raised by Great Lakes pilots and their representatives are varied and include changes that may be needed to respond to the increasing volume and variety of vessels needing Great Lakes pilotage services, such as cruise ships.

Figure: U.S. Pilot Associations in the Great Lakes-St. Lawrence Seaway

Shipping industry stakeholders and others have suggested potential alternatives to the structure and governance of Great Lakes pilotage. The proposed alternatives include consolidating the three U.S. pilot associations and districts, revising the existing governance structure and entities responsible for pilotage rate-setting, and introducing some level of competition for providing pilotage services. Each of these options presents various tradeoffs. For example, it is unclear if consolidating the three associations and districts would result in cost savings because there are relatively few administrative positions that could be reduced. According to the Coast Guard and pilot representatives, the specialized training and local experience needed to become registered pilots also present a challenge to implementing competition because there is generally a limited supply of pilots available to compete in the same geographic area. Further, many of the governance structures and procedures of the existing Great Lakes pilotage system were established by statute, and revisions would require legislative changes.
Background

Overview of DWWCF Operations

Like all DOD working capital funds, the DWWCF received its initial working capital through an appropriation or a transfer of amounts from existing appropriations to finance the initial cost of products or services. Ongoing DWWCF operations and maintenance of a minimum cash balance are funded through reimbursements to the DWWCF, consisting of customer payments made to DFAS, DISA, and DLA. The flow of funding and provision of goods and services between the DWWCF agencies, their customers, and the DWWCF is shown in figure 1. DFAS, DISA, and DLA use funds from the DWWCF to provide goods and services across six activity groups, as shown in table 1. Activities of the DWWCF agencies operate on a break-even basis. As part of the annual budget submission for each upcoming fiscal year, rates are required to be established at levels estimated to recover the budgeted costs of goods and services, including all general and administrative overhead costs, prior period gains and losses, and applicable surcharges. Predetermined or "stabilized" rates developed during the budget process are applied to orders received from DWWCF customers during the fiscal year. The Office of the Under Secretary of Defense (Comptroller) is responsible for reviewing, coordinating, and publishing reimbursable rates for DOD. Where feasible, the Office of the Under Secretary of Defense (Comptroller) publishes applicable reimbursable rates prior to the beginning of each new fiscal year. The military departments are the primary consumers of goods and services provided by the DWWCF agencies. In fiscal year 2018, the reported total dollar value of goods and services ordered from DFAS, DISA, and DLA was approximately $49.4 billion, with the military departments collectively ordering about $36.3 billion (or 74 percent of the total dollar value of orders in fiscal year 2018) in goods and services.
Most of the goods and services they purchased fell under two activity groups—supply chain management and energy management. Specifically, approximately $29.4 billion (60 percent of the total dollar value of orders in fiscal year 2018) were for supply chain management and approximately $10.8 billion (22 percent of the total dollar value of orders in fiscal year 2018) were for energy management, as shown in figure 2.

Operating Budgets and Cash Balance Requirements for DWWCF Activities

Prior to the beginning of the fiscal year, Annual Operating Budgets are issued for each DWWCF activity group managed by DFAS, DISA, and DLA. Budget formulation for a particular fiscal year begins approximately 18 months prior to the beginning of that fiscal year. Each activity's Annual Operating Budget identifies the total budgetary resources authorized for use during the fiscal year. In addition, each DWWCF agency is responsible for maintaining positive cash balances sufficient to allow its operations to continue uninterrupted. As of the end of fiscal year 2017, the DWWCF as a whole held a reported cash balance of about $3.0 billion, which decreased to $2.6 billion by the end of fiscal year 2018. According to DWWCF agency officials, it can be challenging to maintain an appropriate cash balance within the DWWCF because setting accurate rates that reflect their agencies' actual costs is difficult. If rates are set too low during the budget formulation process, higher-than-expected costs or lower-than-expected sales of goods or services during the fiscal year may result in losses for the DWWCF, which in turn may lead to insufficient balances to meet minimum current or future financing and operational requirements. Similarly, if rates are set too high, lower-than-expected costs or higher-than-expected customer sales during the fiscal year may generate excessive gains for the DWWCF. Excessive gains may be transferred out of the DWWCF into other appropriation accounts or rescinded by Congress.
In 2017, we described the DWWCF’s reported monthly cash balances and the extent to which they fell within targeted upper and lower cash requirements. We found that the DWWCF’s reported monthly cash balances were outside the targeted upper and lower cash requirements for 87 of 120 months during that time frame. This was caused by DLA charging its customers more or less than it cost to purchase, refine, transport, and store fuel and by DOD transferring funds into or out of the DWWCF to pay for combat fuel losses or other higher priorities, among other things. As we noted in our report, DOD revised its cash management policy to require a positive cash balance throughout the year and an adequate ending balance to support continuing operations into the subsequent year. According to this revised policy, in setting the cash requirement goals, DOD working capital funds are to consider four elements: (1) the rate of disbursement, which is the average amount disbursed between collection cycles; (2) the range of operation, or the difference between the highest and lowest expected cash levels based on budget assumptions and past experience; (3) risk mitigation, which requires some amount of cash beyond the range of operations to mitigate the inherent risk of unplanned and uncontrollable events; and (4) reserves, which are cash amounts held for known future requirements. DOD officials said they are in the process of adding additional guidance to the DOD Financial Management Regulation about when DOD managers should use available tools to help ensure that monthly cash balances are within the targeted upper and lower cash requirements, as we recommended in our 2017 report. For this report, we updated the 2017 analysis to include the DWWCF’s reported monthly cash balances and targeted upper and lower cash requirements for fiscal years 2017 and 2018, as shown in figure 3. 
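The four elements of DOD's revised cash management policy lend themselves to a simple arithmetic sketch. The combination below is one plausible reading, not the DOD Financial Management Regulation's actual computation, and every dollar figure is invented for illustration.

```python
# Hypothetical sketch: combining the four cash-requirement elements
# (rate of disbursement, range of operation, risk mitigation, reserves)
# into lower and upper targets. Amounts are invented, in $ millions.

def cash_requirements(rate_of_disbursement, range_of_operation,
                      risk_mitigation, reserves):
    """Return (lower, upper) cash requirement targets under the
    assumption that the lower bound covers disbursements plus risk
    mitigation and reserves, and the upper bound adds the expected
    range of operation on top of that floor."""
    lower = rate_of_disbursement + risk_mitigation + reserves
    upper = lower + range_of_operation
    return lower, upper

lower, upper = cash_requirements(rate_of_disbursement=1200,
                                 range_of_operation=900,
                                 risk_mitigation=300,
                                 reserves=200)
```

Under these assumed figures, a reported monthly balance would be flagged only if it fell below $1,700 million or rose above $2,600 million, which mirrors how figure 3 compares balances against targeted upper and lower requirements.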
For fiscal year 2017, the monthly cash balances were above and below the targeted upper and lower cash requirements one time each; for fiscal year 2018, the targeted upper cash requirement was raised, and the monthly cash balances were all within the revised targeted upper and lower cash requirements.

Key Operating Principles for Effective Management of Working Capital Funds

In our prior work, we identified four key operating principles to guide the management of working capital funds. These key operating principles call for (1) working capital fund self-sufficiency, which includes establishing transparent pricing; (2) clearly delineated roles and responsibilities; (3) performance measurement; and (4) built-in flexibility to obtain customer input and meet customer needs. As we describe later in this report, each of these key operating principles has three underlying components describing specific actions agencies should take to adhere to the principle. For further information about each key principle and its components, see appendix I.

Defense Agencies Have Processes to Set Rates That Are Designed to Cover Costs but Are Not Transparent in Their Pricing

We found that DFAS, DISA, and DLA have applied two of the three components of the key operating principle for working capital fund self-sufficiency by setting rates that are designed to cover actual costs and establishing a management review for rate setting. However, despite taking steps intended to allocate costs equitably among their customers, DFAS, DISA, and DLA have not fully applied the third component of the key operating principle by establishing pricing methodologies that are transparent to their customers.
Defense Agencies Have Processes to Set Rates That Are Designed to Recover Actual Costs and Have Established Management Review for Rate Setting

DFAS, DISA, and DLA each develop budget proposals annually that are designed to recover their projected costs, while also accounting for any gain or loss from previous years. The three DWWCF agencies generally set rates that are intended to mitigate prior year gains or recover all prior year losses in the current fiscal year of execution, although they can spread the return actions over several fiscal years to minimize the impact on customers from rate fluctuations. The rate-setting processes used by DFAS, DISA, and DLA include management reviews. Each agency's management reviews and approves the budget proposals for its DWWCF activities during the budget formulation process. The agencies then send the budget proposals to the Office of the Under Secretary of Defense (Comptroller) for further review and approval, and the Office of the Under Secretary of Defense (Comptroller) issues a memo finalizing the rates to be charged by DFAS, DISA, and DLA during the fiscal year.

Defense Agencies Use Multiple Approaches in Setting Rates and Allocating Costs

DFAS, DISA, and DLA each use multiple approaches to set rates on an annual basis and adjust these rates, as appropriate, during DOD's programming and budget development process. All three agencies include direct and indirect costs in their rates, and these costs vary due to differences in the agencies' missions, including the goods and services they provide. In general, the DWWCF agencies describe direct costs as those costs that can be directly attributed to an output and a customer. For example, DFAS officials told us that DFAS includes the cost of the labor that supports civilian pay services for a customer as a direct cost. DISA officials said that DISA includes the cost of servers as a direct cost.
DLA officials indicated that materiel costs, such as the cost to acquire fuel or a spare part, are considered direct costs. Alternately, the three agencies describe indirect costs as costs that cannot be attributed to one specific output and customer. For example, costs for information technology systems that support multiple customers, supervisory staff who support more than one customer, and general and administrative (overhead) costs such as a DWWCF agency's general counsel services or physical facility maintenance are all indirect costs. DFAS, DISA, and DLA set rates during DOD's annual programming and budget development process. The three DWWCF agencies begin the process of setting rates approximately 18 months prior to the fiscal year in which the rates will be applied. Setting rates in advance helps ensure that adequate resources are requested in the customers' fund accounts to pay the established rates and prices. The Office of the Under Secretary of Defense (Comptroller) reviews and approves finalized rates for a particular fiscal year in a rate memo circulated during the prior fiscal year. DFAS, DISA, and DLA use a combination of the following three approaches when setting rates; table 2 shows the instances in which each of the three DWWCF agencies uses each approach.

Per Unit: Determines a specific dollar rate per unit that, when multiplied by the projected workload, will produce revenue sufficient to recover the full costs, including direct and indirect costs, of providing the good or service.

Portion of Total Costs: Charges a portion of the agency's total costs (both direct and indirect costs) of providing a service, based either on the proportion of total workload projected for a specific customer or on a uniform percentage across all customers.

Percentage Markup on Direct Costs: Adds a fee based on a percentage of the direct costs of a service as a proxy for expected indirect costs.
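The three rate-setting approaches described above reduce to three small formulas. The sketch below illustrates them with hypothetical figures; none of the dollar amounts, workloads, or markup percentages come from actual DWWCF budget data.

```python
# Illustrative sketch of the three DWWCF rate-setting approaches.
# All inputs are invented for demonstration.

def per_unit_rate(total_costs, projected_units):
    """Per Unit: dollar rate per unit sized so that, at the projected
    workload, revenue recovers full (direct + indirect) costs."""
    return total_costs / projected_units

def portion_of_total(total_costs, customer_workload, total_workload):
    """Portion of Total Costs: charge the customer its projected share
    of the agency's total costs."""
    return total_costs * (customer_workload / total_workload)

def percentage_markup(direct_costs, markup):
    """Percentage Markup on Direct Costs: add a fee as a proxy for
    expected indirect costs."""
    return direct_costs * (1 + markup)

# Hypothetical examples:
unit_rate = per_unit_rate(total_costs=50_000_000, projected_units=2_000_000)
share = portion_of_total(total_costs=50_000_000,
                         customer_workload=300, total_workload=1_000)
billed = percentage_markup(direct_costs=10_000_000, markup=0.12)
```

In this invented example, the per-unit rate works out to $25 per unit, a customer with 30 percent of projected workload is charged $15 million, and a 12 percent markup on $10 million of direct costs yields an $11.2 million bill.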
DFAS, DISA, and DLA officials described their approaches to allocating costs when setting rates. DFAS allocates costs to each of the services it provides and to each customer using those services through 29 predetermined business rules. DISA groups its services by the costs associated with providing them and allocates these costs to the services in each group based on factors such as the cost of equipment used to provide each service. Similarly, DLA uses various methods to allocate indirect costs to some or all of DLA's goods or services based on factors such as the number of employees supporting the provision of a given good or service and the total sales of that good or service. In each case, these costs are then included in the rates DFAS, DISA, and DLA charge customers for each good or service. See appendixes II through IV for more information on each agency's rate-setting approach.

The Defense Agencies Have Adjusted Some Rate-Setting Methodologies to Be More Equitable, but Their Methodologies Are Not Transparent

We found that DFAS, DISA, and DLA have taken steps intended to establish an equitable pricing methodology. However, customers from the military departments told us, and our review of related documentation provided at rate briefings and cost summits found, that the information they receive regarding the pricing of goods and services is not transparent. Officials from all three DWWCF agencies described efforts to more equitably allocate the costs associated with a given good or service to the customers who use that good or service, as described below.

DFAS, for fiscal year 2019, changed its method for allocating the costs of its facilities to its customers in an effort to more equitably allocate these costs. Previously, the costs for each DFAS facility were charged directly to the customers whose work was performed in that facility.
Since the costs of facilities differ and customers do not choose the location that provides their service, DFAS changed this methodology so that customers now pay a uniform percentage of their direct costs to cover the total cost of DFAS facilities.

DISA, for fiscal year 2017, changed the pricing structure for Defense Information System Network Infrastructure Services in response to recommendations from two DOD internal reviews. The structure changed from one designed to encourage adoption of the network across DOD to a consumption-based model that aligns customer billing with consumption so that customers have greater control over their costs. According to DISA officials, this change has enhanced collaboration between DISA and its customers, providing customers more frequent inventories of the services they require so that the customers can determine that the bills for those services reflect their requirements.

DLA is implementing two changes to the pricing methodology it uses for distribution services, part of its Supply Chain Management activity group. The first change, which DLA refers to as distribution price equitability, was implemented during fiscal year 2017. This pricing methodology allocates overhead costs to reimbursable distribution services (special services not included in DLA's standard rate structure). Previously, only rate-driven distribution services were charged for overhead. DLA proposed the second change, market basket pricing, for implementation in fiscal year 2020. Market basket pricing changes this pricing from being based solely on the weight of the items being distributed to a method that considers the level of effort required by DLA to distribute the items. For example, bulky, fragile, and hazardous items will be charged higher rates than small, easy-to-ship items.
While we found that DFAS, DISA, and DLA have taken steps intended to establish an equitable pricing methodology, military department officials from the offices we contacted said that they lack visibility into the factors that determine their costs at one or more of the three defense agencies. Specifically, they said they had a limited understanding of the types of indirect costs that are included in the rates they are charged and how those costs are allocated, the specific changes that have been made to the methods used to set rates, or how changes in the customer's use of the services, which would also change an agency's workload, would affect overall costs.

DFAS, DISA, and DLA have produced documentation for their customers to explain their rates and have developed ways to communicate with their customers—for instance, through the use of customer liaisons. However, officials from the military departments told us this documentation does not contain the level of detail they need to fully understand the rates. For example:

DFAS. Navy and Army officials we spoke with regarding DFAS said that their departments lack visibility into how DFAS's rates and bills are calculated because DFAS informational briefings do not describe the types of costs included in rates and how those costs are calculated and allocated to customers. As a result, officials said they do not understand why declines in their use of DFAS's services have not resulted in reduced costs. These officials said that this information would make it easier for them to determine how to manage their costs and verify that costs are equitably allocated and reflect usage.

DISA. Air Force officials we interviewed regarding DISA told us that DISA does not provide sufficient pricing transparency because, although DISA has provided some documentation of its rates, this documentation does not explain the methodology used to calculate the rates and the costs included in those calculations.
Although Army officials who discussed DISA said that DISA rate briefings provide the level of information necessary for customers, the Air Force officials said that this lack of information on how DISA calculates rates makes it difficult for the Air Force to determine how it can manage its costs with DISA or whether the rates it pays reflect the costs of the services it uses.

DLA. Navy and Air Force officials we interviewed regarding DLA told us that DLA does not provide sufficient pricing transparency despite the rate briefings DLA conducts for its customers. Although the Army officials who discussed DLA said that the rate briefings provide sufficient information for customers, the Navy officials told us that the lack of detailed information on the costs included in DLA's rates makes it difficult for the Navy to determine how to lower its costs. They also said this lack of information prevents them from determining whether the rates they pay actually reflect the costs of the services they use, as intended. Similarly, the Air Force officials told us that DLA's communication regarding its market basket pricing initiative, discussed during the DLA briefings, was confusing and did not include all the information they needed to prepare their budget, such as when the change would be implemented and how the initiative would affect the Air Force's costs. Officials noted that, despite initially being told by DLA that the Air Force would experience a reduction in its distribution costs as a result of this initiative, they subsequently learned through a Resource Management Decision that the Air Force's costs would increase instead.

DFAS, DISA, and DLA officials told us they make efforts to communicate with their customers and to improve the transparency of their rates. For example, DFAS officials noted that they have one-on-one discussions with each of their customers during the customer rate briefings.
DISA officials said that they respond to customer questions regarding rates and share information on the costs included in those rates. DLA officials said they discuss rates at a variety of customer forums and share documentation of changes to their rate-setting methodologies, such as market basket pricing, with customers. Officials from the military departments acknowledged these efforts by the DWWCF agencies to share information. However, as described in the examples above, officials told us that one or more of the agencies have not provided them with the information needed to fully understand their costs, to have assurance that costs are being allocated fairly, or to identify actions they could take to affect their overall bills, in some cases despite requests for more detailed cost information.

In addition, DFAS, DISA, and DLA provided us copies of documents that they present at rate briefings and cost summits to share information about their pricing methodologies with their customers. In our review of those documents, we found that they contained high-level information, such as the rates themselves and the estimated workloads, and did not contain detailed information about the types of costs included in the rates and how those costs are calculated. For example, although DLA provides its cost recovery rate for the materiel supply chains in its rate briefing documentation, the documentation does not provide information on the specific costs that go into that rate. As a result, based on these documents, we also were not able to fully understand the agencies' costs and how those costs are allocated among their customers.
By providing more complete and transparent information on methodologies used to calculate rates, the costs used in those calculations, and how changes in workload affect a customer's rates, DFAS, DISA, and DLA could improve their communication with their customers and allow their customers to better understand and make decisions to help them manage the costs of the goods and services that they obtain. Such information would also better inform customers of any changes to the assumptions underlying rates and the impact those changes might have on their future costs.

DWWCF Agencies Delineate Roles and Responsibilities, Measure Performance, and Assess Resource Requirements and Customer Needs

We found that DFAS, DISA, and DLA have applied all of the components of the three remaining key operating principles for effective management of working capital funds. These principles relate to delineating roles and responsibilities, measuring performance, and assessing resource requirements and customer needs. By implementing these principles, the DWWCF agencies are better positioned to:

Promote a clear understanding of who will be held accountable for specific tasks or duties, reduce the risk of mismanaged funds and tasks or functions "falling through the cracks," and educate customers about whom to contact if they have questions.

Measure their operational performance, assess their performance against strategic goals, and identify opportunities to improve performance.

Enable customers to provide input about working capital fund services or voice concerns about their needs, enable agencies to prioritize customer demand, and enable agencies to use resources most effectively.
DWWCF Agencies Clearly Delineate Roles and Responsibilities

We found that all three DWWCF agencies have fully applied the three components of the principle for clearly delineating roles and responsibilities in that they define key areas of authority and responsibility, segregate duties to reduce fraud, and have a management review and approval process.

Define key areas of authority and responsibility. DFAS, DISA, and DLA define key areas of authority and responsibility and provide customers with clear information on who to contact if they encounter issues or have questions. DFAS defines the responsibilities of key offices, such as those responsible for tracking revenue, in its Doing Business with DFAS catalog of services. This document lists points of contact, specific to each customer, who can provide support and address customers' questions. DFAS also maintains service level and audit agreements with its customers, called mission work agreements, to document the specific level of effort and service it will provide. DISA's instructions define the roles and responsibilities for key officials and offices involved in managing the agency's DWWCF activities. DISA also provides contact information for its customer account representatives in its DWWCF Rate Book and on its website. DLA's customer assistance handbook explains the roles of different offices within DLA and contains contact information for each of DLA's activity groups and for customer-specific representatives. DLA also defines roles and responsibilities of interagency groups in the performance-based agreements it signs with the military departments and services. For example, an agreement between the Department of the Army and DLA defines roles and responsibilities for the Partnership Agreement Council, the organization that addresses and prioritizes issues related to improving logistics coordination between DLA and the Army.

Segregate duties to reduce error or fraud.
DFAS, DISA, and DLA segregate duties across their organizations and document this segregation. DFAS documents the segregation of responsibilities for tracking and recording transactions for each of its service offerings in its Doing Business with DFAS catalog of services. For example, DFAS's Retired and Annuitant Pay section tracks the number of individuals serviced under those pay systems, and DFAS's Central Revenue Office records these transactions in DFAS's billing system. DISA describes its processes for segregating duties in the documentation of its working capital fund disbursements and collections processes. For example, when DISA charges a customer agency, an accounts receivable technician records the billing information and a certifying officer verifies and certifies the transaction. DLA documents its segregation of key roles and responsibilities for authorizing, processing, and reviewing transactions according to DOD and DLA guidance. For example, the DOD manual that outlines sales accountability and documentation processes for energy commodities assigns responsibility to DLA Energy for ensuring that DLA customers meet the criteria or have received waivers to purchase fuel through DLA, while DLA Transaction Services provides activity codes to authorized customers to manage their transactions. Additionally, individual fuel handlers at DLA Energy stock points are required to record customer data for sales and credits on a source document.

Establish a management review and approval process. DFAS, DISA, and DLA have established management review and approval processes to promote the appropriate tracking and use of funds. DFAS documents its processes for tracking and reviewing transactions in its Doing Business with DFAS document. DISA describes the review of transactions in its documentation of its funds disbursements and collections processes.
Each of DLA's activity groups tracks transactions and funding using DLA's accounting system of record—the Enterprise Business System. However, each group uses its own unique order validation process that is documented for each DLA activity group.

Defense Agencies Measure Performance

We found that all three DWWCF agencies are applying the three components of the key operating principle for measuring performance in that they have established performance measures and goals, aligned performance measures with strategic goals, and established a management review of DWWCF performance.

Establish performance measures and goals. DFAS, DISA, and DLA have each established performance measures and goals. DFAS uses financial and mission-focused performance measures, called business models, which include metrics for service timeliness and accuracy, among others. DISA has operational performance metrics, such as service downtime, and collects customer feedback on service provision through its Mission Partner Engagement Office. DLA establishes performance measures and corresponding thresholds in the performance-based agreements it signs with its customers from the military departments and services. These performance measures include materiel availability and backorders, among other measures.

Align performance measures with strategic goals. DFAS, DISA, and DLA have performance measures that are aligned with their strategic goals. Each DWWCF agency is responsible for maintaining positive cash balances sufficient to allow their operations to continue uninterrupted. To achieve this, all three agencies monitor their monthly cash balances and whether each of their activity groups is experiencing gains or losses. DFAS, DISA, and DLA also have aligned operational performance measures with their strategic goals, as illustrated by the following examples.
DFAS's Fiscal Year 2017-2021 Strategic Plan identifies achieving cost, schedule, and performance targets that support delivery of best value services as a strategic outcome. DFAS monitors the timeliness and accuracy of its services, reflecting this strategic outcome in its performance measurement. For example, DFAS measures the percentage of commercial payments it processes accurately and the percentage of military pay problem cases that it resolves in a timely manner.

DISA's Strategic Plan 2019-2022 states that optimizing enterprise services and capabilities to minimize costs while delivering high availability, among other benefits, is a strategic objective. To that end, DISA monitors data center and equipment availability through performance measures such as the average number of minutes of facility downtime per fiscal year.

DLA's Strategic Plan 2018-2026 identifies strengthening readiness and lethality as its highest priority line of effort. DLA monitors how its own performance affects readiness of critical weapon systems using its Service Readiness Dashboard, which includes a measure for the number of weapon systems that are non-mission capable due to DLA supply items being unavailable.

Defense-Wide Working Capital Fund Midyear Rate Changes (sidebar)

Agency officials said that rates are generally fixed for the entire fiscal year, but the Office of the Under Secretary of Defense (Comptroller) can approve midyear rate changes if required. For example, we previously reported that the Defense Logistics Agency (DLA) collected about $3.7 billion more from the sale of fuel than it cost in fiscal year 2015 because of lower fuel prices. Conversely, during fiscal year 2018, DLA's cost of procuring fuel increased significantly due to increases in the price of the fuel procured from the market. As a result, in April 2018, DOD increased the rate from $90.30 per barrel to $115.92 per barrel to cover its costs.

Establish management review of working capital fund performance.
DFAS, DISA, and DLA regularly monitor and have management reviews of agency performance against these financial and mission-related performance measures. Officials from each agency said they review the financial performance of the agencies' activities throughout the year in which the programs and budgets are executed to identify how differences between budgeted rates and actual costs affect the fund's gains, losses, and cash balances. This allows them to coordinate with the Office of the Under Secretary of Defense (Comptroller) to propose price changes when needed, although officials said that a midyear rate change is rare (see sidebar). The agencies also regularly review their non-financial performance based on the previously described measures to identify areas for improvement.

Defense Agencies Build in Flexibility to Obtain Customer Input and Meet Customer Needs

We found that all three DWWCF agencies are applying the three components of the key operating principle of building in flexibility to obtain customer input and meet customer needs by communicating with customers regularly and in a timely manner, developing processes to assess resource needs, and establishing processes to prioritize requests for service.

Communicate with customers regularly and in a timely manner. DFAS, DISA, and DLA each routinely communicate with customers through annual rate briefings, customer forums, surveys, and other meetings. These meetings enable these agencies to provide high-level rate information to their customers and discuss the goods and services that their customers will need in the coming budget year. For example, DFAS communicates with the military services' budget offices through an annual briefing at a meeting hosted by the Office of the Under Secretary of Defense (Comptroller) and surveys finance officers and end-user customers on their satisfaction with military pay services.
Similarly, DISA holds routine meetings at the working group and senior official levels to discuss service offerings, among other things. In addition to its biannual cost summits, where DLA discusses its pricing strategies with representatives from the military services, the Office of the Under Secretary of Defense (Comptroller), and the Office of the Under Secretary of Defense for Acquisition and Sustainment, DLA also holds an annual demand planning summit with the military services to discuss their projected requirements for the upcoming budget year.

Develop process to assess resources needed to meet changes in customer demand. DFAS, DISA, and DLA each take steps to communicate with customers regarding future demand and requirements. All three agencies have customer-specific representatives that obtain information on future requirements and facilitate communication between the agencies and their customers. DFAS uses client executive liaisons to resolve issues and collect information about customer needs for its goods and services. DISA uses its Mission Partner Engagement Office to address customer concerns and conduct surveys about customer needs. DLA has national account managers that represent each military service and facilitate DLA's engagement with the services regarding requirements and customer service representatives with select customers to meet their specific needs for DLA's goods and services.

Establish process to prioritize requests for services. DFAS, DISA, and DLA each have processes to adjust resources in response to the needs of their customers. This primarily occurs during the budget formulation process. DFAS officials told us that labor accounts for about 75 percent of the agency's costs, and management can decide to adjust its workforce resources depending on customer needs, often by shifting personnel and workload among customers, temporarily hiring additional staff, or reducing staffing levels through attrition.
DISA's Strategic Resourcing Council is responsible for addressing issues such as resourcing strategies for existing and emerging programs. DLA uses its Enterprise Operations Planning Council, a group of DLA executives responsible for actively balancing customer needs and supply chain constraints, to ensure that resources are aligned with customer requirements during the budget formulation process.

Conclusions

The agencies whose operations are financed through the Defense-Wide Working Capital Fund have applied all but one of the components of the key operating principles for effective management of working capital funds—establishing a transparent and equitable pricing methodology, a component of the principle of ensuring self-sufficiency by recovering the agency's actual costs. Transparent pricing helps ensure that customers understand their costs and can make choices to manage these costs. Officials from the military departments—the largest customers of DFAS, DISA, and DLA—said they lack visibility into the types of costs included in their rates and some do not understand how changes to rate-setting methodologies or defense agency workload can affect their overall costs. By providing this information to customers, DFAS, DISA, and DLA would better equip them to reduce their costs and improve efficiency. Further, DOD would have greater assurance that the DWWCF was operating as intended.

Recommendations for Executive Action

We are making the following three recommendations to DOD:

The Secretary of Defense should ensure that the Director of the Defense Finance and Accounting Service provides customers with more complete information on the agency's rate-setting methodologies in rate documentation, briefings, and other forums where rates are discussed, including the costs included in rates, how those costs are calculated, and how changes in DFAS's workload affect customers' overall costs.
(Recommendation 1)

The Secretary of Defense should ensure that the Director of the Defense Information Systems Agency provides customers with more complete information on the agency's rate-setting methodologies in rate documentation, briefings, and other forums where rates are discussed, including the costs included in rates, how those costs are calculated, and how changes in DISA's workload affect customers' overall costs. (Recommendation 2)

The Secretary of Defense should ensure that the Director of the Defense Logistics Agency provides customers with more complete information on the agency's rate-setting methodologies in rate documentation, briefings, and other forums where rates are discussed, including the costs included in its rates, how it calculates those costs, and how and when proposed changes to its rate-setting methodologies will affect customers' overall costs. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. In an email accompanying its written comments, DOD concurred with our recommendations. In the department's written comments, DFAS, DISA, and DLA stated that they intend to take steps to provide their clients with additional information on rates. These steps include reaching out to customers to better understand their information needs and providing additional information on potential pricing methodology changes. DOD's comments are reprinted in appendix V. DOD also provided technical comments during this review, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional addressees and the Secretary of Defense. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Elizabeth Field at (202) 512-2775 or fielde1@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Key Operating Principles for Managing Working Capital Funds

Principle 1. Ensure self-sufficiency by recovering the agency's actual costs

Transparent and equitable pricing methodologies allow agencies to ensure that rates charged recover agencies' actual costs and reflect customers' service usage. If customers understand how rates are determined or changed, including the assumptions used, customers can better anticipate potential changes to those assumptions, identify their effect on costs, and incorporate that information into budget plans. A management review process can help to ensure the methodology is applied consistently over time and provides a forum to inform customers of decisions and discuss them as needed.

Examples of evidence supporting the principle:
Published price sheets for services are readily available.
Documentation of pricing formulas supports equitable distribution of costs.
The pricing methodology and accompanying process ensure that, in aggregate, charges recover the actual costs of operations.
A management review process allows fund managers to receive and incorporate feedback from customers.
Discussions with customers confirm an understanding of the charges and that they are viewed as transparent and equitable.

Principle 2. Clearly delineate roles and responsibilities

Appropriate delineation of roles and responsibilities promotes a clear understanding of who will be held accountable for specific tasks or duties, such as authorizing and reviewing transactions, implementing controls over WCF management, and helping ensure that related responsibilities are coordinated. In addition, this reduces the risk of mismanaged funds and tasks or functions "falling through the cracks." Moreover, it helps customers know who to contact in the event they have questions.
Written roles and responsibilities specify how key duties and responsibilities are divided across multiple individuals/offices and are subject to a process of checks and balances. This should include separating responsibilities for authorizing transactions, processing and recording them, and reviewing the transactions.

Examples of evidence supporting the principle:
Written description of all WCF roles and responsibilities in an accessible format such as a fund manual.
Discussions with providers and clients confirm a clear understanding.
A routine review process exists to ensure proper execution of transactions and events.

Principle 3. Measure performance

Performance goals and measures are important management tools applicable to all operations of an agency, including the program, project, or activity level. Performance measures and goals could include targets that assess fund managers' responsiveness to customer inquiries, the consistency in the application of the funds' rate-setting methodology, the reliability of cost information, and the billing error rates. Performance measures that are aligned with strategic goals can be used to evaluate whether, and if so how, WCF activities are contributing to the achievement of agency goals. A management review process comparing expected to actual performance allows agencies to review progress towards goals and potentially identify ways to improve performance.

Examples of evidence supporting the principle:
Performance indicators and metrics for WCF management (not just for the services provided) are documented.
Indicators or metrics to measure outputs and outcomes are aligned with strategic goals and WCF priorities.
WCF managers regularly compare actual performance with planned or expected results and make improvements as appropriate. In addition, performance results are periodically benchmarked against standards or "best in class" in a specific activity.

Principle 4. Build in flexibility to obtain customer input and meet customer needs
Opportunities for customers to provide input about WCF services, or voice concerns about needs, in a timely manner enable agencies to regularly assess whether customer needs are being met or have changed. This also enables agencies to prioritize customer demands and use resources most effectively, enabling them to adjust WCF capacity up or down as business rises or falls.

Examples of evidence supporting the principle:
Established forum, routine meetings, and/or surveys solicit information on customer needs and satisfaction with WCF performance.
Established communication channels regularly and actively seek information on changes in customer demand and assess the resources needed to accommodate those changes.
Established management review process that allows for trade-off decisions to prioritize and shift limited resources needed to accommodate changes in demand across the organization.

DFAS reported receiving total Defense-Wide Working Capital Fund orders for services valued at approximately $1.4 billion in fiscal year 2018. DFAS employs around 12,000 civilian personnel and provides services to DOD and other federal entities through a single activity group—Finance and Accounting Services.

Approach to Allocating Costs: The Defense Finance and Accounting Service (DFAS) establishes rates for each of the services it provides. DFAS first links direct costs to each service and to each customer benefitting from that service. Then, DFAS applies 29 predetermined business rules to allocate indirect costs, which include mission-related indirect costs and general and administrative costs. These business rules identify costs associated with specific combinations of mission-related indirect costs necessary to provide a service and then apply a "fair-share" percentage of general and administrative indirect costs, which allows DFAS to determine the rates it needs to charge to recover all costs.
General and administrative costs associated with supporting the entire DFAS organization are allocated at a uniform percentage within DFAS's rates for all systems, services, and customers based on total direct costs.

Services Provided: DFAS provides centralized finance, accounting, human resources, and financial systems management services. DFAS categorizes its services into three types: rate-based services (which include military and civilian pay services and accounting services), direct systems reimbursements (i.e., legacy accounting systems), and support-to-others (i.e., reimbursable services that are outside of DFAS's core mission or reflect emerging mission workload).

Indirect Costs: DFAS differentiates between two types of indirect costs: (1) indirect costs that are necessary to support DFAS's direct mission but are not direct costs because they support multiple types of work or customers (e.g., information technology network infrastructure, senior operations management, and facilities costs) and (2) general and administrative costs that support DFAS as a whole and are not linked to specific services (e.g., costs for DFAS's internal review office and other headquarters-related costs).

Approaches Used to Calculate Rates for DFAS Services: DFAS uses three approaches for calculating rates:

1. Per Unit: DFAS determines a rate for each service that includes direct and indirect costs as allocated by its predetermined business rules. For civilian pay services, the number of units sold is based on the number of active civilian pay accounts in a given month (e.g., the number of civilian leave and earnings statements generated). For accounting services, the number of units sold is based on the number of labor hours DFAS employees recorded supporting a given task and customer.

2. Portion of Total Costs: DFAS charges a portion of the agency's total legacy systems costs (direct costs and both mission-related and general and administrative indirect costs) of providing a service based on the proportion of total workload projected for a specific customer.
DFAS uses this approach for its direct systems reimbursement services, as described below.

Direct Systems Reimbursement. DFAS charges customers a percentage of the total costs—including direct and both types of indirect costs—of each legacy accounting system based on each customer's portion of total system usage.

3. Percentage Markup on Direct Costs: DFAS adds a percentage markup to its direct costs in support of non-core or emerging mission workload to recover general and administrative indirect costs of the associated support. DFAS uses this approach for its support-to-others services, as described below.

Support-to-Others. DFAS charges customers the actual direct cost of providing a support-to-others service plus the general and administrative percentage markup.

DISA reported receiving total Defense-Wide Working Capital Fund (DWWCF) orders for services valued at approximately $7.5 billion in fiscal year 2018. DISA employs around 8,700 military and civilian personnel and provides its services through two activity groups: Computing Services and Telecommunications Services and Enterprise Acquisition Services.

Services Provided: Computing Services operates the DISA Data Centers, which provide mainframe and server processing operations, data storage, and other information technology services and support across the Department of Defense (DOD). Telecommunications Services provides secure telecommunications services, including the Defense Information Systems Network. Enterprise Acquisition Services provides contracting services for information technology and telecommunications acquisitions from the commercial sector and contracting support to the Defense Information Systems Network programs and other customers through DISA's Defense Information Technology Contracting Organization.

Approach to Allocating Costs: The Defense Information Systems Agency (DISA) groups its services by the costs associated with providing them.
These costs are specific to the service being provided and are influenced by factors such as the cost of equipment used to provide the service. Computing Services has a large collection of billing rates, tailored to the services provided to a customer, such as mainframe and server processing; storage; and other services. Approximately half of DISA’s business in Telecommunications Services is for the Defense Information Systems Network, for which DISA sets a standard rate to recover costs. The remaining half is for reimbursable services that cover services such as commercial satellite phones, instant message services, global videoconferencing services, and support for secure portable electronic devices (both smartphones and tablets). The commercial satellite communications program recovers costs through a management fee that is added to the direct contract costs. According to DISA officials, cost reimbursable services are those services that are not included in DISA’s standard offerings and thus do not have a standard rate. DISA recovers the cost for these services, including direct, indirect (overhead), and general and administrative costs, and the total cost is negotiated with customers up front. Approaches Used to Calculate Rates for DISA Services: DISA uses three approaches for calculating rates: 1. Per Unit: DISA determines a specific dollar rate per unit that, when multiplied by the projected workload, will produce revenue sufficient to recover the full costs, including direct and indirect costs, of providing the good or service. DISA uses this approach for most Computing Services and some Telecommunications Services. Computing Services. DISA calculates most of its Computing Services rates by dividing the total costs of providing a service by total projected units. Total costs of a service comprise direct and indirect costs, including general and administrative costs. 
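In rough terms, the per-unit approach just described amounts to dividing total projected costs by total projected units. The sketch below is a hypothetical illustration of that arithmetic; the service and all figures are invented for the example, not actual DISA data.

```python
# Hypothetical sketch of the per-unit rate approach described above.
# The service and figures are invented for illustration, not DISA data.

def per_unit_rate(direct_costs: float, indirect_costs: float,
                  projected_units: int) -> float:
    """Total costs (direct + indirect, including G&A) divided by projected units."""
    return (direct_costs + indirect_costs) / projected_units

# A notional storage service: $800,000 direct costs, $200,000 indirect costs,
# and 50,000 projected units of workload.
rate = per_unit_rate(800_000, 200_000, 50_000)
print(rate)  # 20.0 per unit; billing all 50,000 units recovers the $1,000,000
```

If actual workload falls short of the projection, revenue falls short of costs, which is one reason the agencies revisit rates annually to account for prior-year gains or losses.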
Indirect Costs: These include costs that are associated with a particular service, such as facilities costs, and those that are associated with support provided to all services, such as personnel support. Contract management costs are included for all services but are recovered differently.

Telecommunications Services. DISA’s mobility program, which provides support for portable electronic devices, recovers costs by charging a rate per device per month. Similarly, DISA’s cross-domain services, which provide the ability for customers to transfer information across different security domains (unclassified and classified systems), recover costs at a price for each filter supported.

2. Portion of Total Costs: DISA charges a portion of its total costs, including direct and indirect costs, of providing a service based on the proportion of total workload projected for a specific customer. DISA uses this approach for several of its Telecommunications Services. DISA’s Telecommunications Services charges customers a portion of the total costs for the Defense Information Systems Network based on each customer’s portion of total network usage. Total costs include bandwidth, circuits, maintenance, sustainment costs, network support and operations labor, outage monitoring, and contract management, among others. This approach is also used for DISA’s Global Video Services (video teleconferencing capabilities) and Organizational Messaging Services (command and control messaging).

3. Percentage Markup on Direct Costs: DISA adds a percentage markup on its direct costs as a proxy for indirect costs. DISA uses this approach to calculate some rates for its Computing Services and for its Telecommunications Services and Enterprise Acquisition Services activity groups.
Computing Services. There are some services within the Computing Services activity group for which DISA charges on a reimbursable basis, such that customers pay the direct cost of the service provided plus an additional percentage of the direct cost to cover general and administrative costs.

Telecommunications Services and Enterprise Acquisition Services. DISA charges the customer for the full cost of the contract plus an additional percentage of the direct costs to cover DISA’s indirect costs associated with contract management through the Defense Information Technology Contracting Organization. This fee ranges from 1.75 to 2.5 percent of the contract amount and is based on the expected support costs for associated information technology systems, billing support personnel and systems, financial management, and space and facility costs. This standard contracting fee may change from year to year, but it remains fixed within any given year.

DLA reported receiving total Defense-Wide Working Capital Fund (DWWCF) orders for goods and services valued at approximately $40.6 billion in fiscal year 2018. DLA employs around 26,000 military and civilian personnel. DLA provides its services through three activity groups: Energy Management, Supply Chain Management, and Document Services.

Approach to Allocating Costs: The Defense Logistics Agency (DLA) allocates direct costs to the individual good or service for which the costs were incurred. For indirect costs, DLA determines whether each cost is associated with providing specific goods or services (such as labor that supports a specific materiel supply chain) or is associated with supporting DLA as a whole (such as the DLA general counsel). DLA uses various methods to allocate these indirect costs, taking into account factors such as the number of employees supporting the provision of a given good or service and the total sales of that good or service.
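The other two approaches described above, charging each customer a workload-based share of total costs and adding a percentage markup to direct costs, can be sketched the same way. All figures below are hypothetical except the 1.75 to 2.5 percent contracting-fee range cited above.

```python
# Hypothetical sketch of the portion-of-total-costs and percentage-markup
# approaches. Figures are invented except the 1.75-2.5 percent fee range.

def portion_of_total(total_costs: float, customer_usage: float,
                     total_usage: float) -> float:
    """Charge a customer its share of total costs based on its usage share."""
    return total_costs * (customer_usage / total_usage)

def markup_on_direct(direct_cost: float, markup_pct: float) -> float:
    """Charge the direct cost plus a percentage markup as a proxy for indirect costs."""
    return direct_cost * (1 + markup_pct)

# A customer responsible for a quarter of network usage pays a quarter of
# the network's total costs.
network_bill = portion_of_total(4_000_000, customer_usage=1_000, total_usage=4_000)
print(network_bill)  # 1000000.0

# A notional $100,000 contract billed with a 2.5 percent contracting fee
# (the top of the 1.75-2.5 percent range cited above): about 102,500.
contract_bill = markup_on_direct(100_000, 0.025)
```

The same markup arithmetic underlies DLA's cost recovery percentage for materiel, where projected non-product costs divided by projected product costs sets the markup rate.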
Services Provided: DLA provides fuel and other energy commodities through its Energy Management activity group; consumable materiel (i.e., supplies and parts), distribution services for this materiel, and disposition services for excess property through its Supply Chain Management activity group; and printing, electronic document management and invoicing, and other document services through its Document Services activity group.

Indirect Costs: These include costs for information technology systems, facilities, and labor that support the provision of multiple goods and services. Costs for information technology systems and labor that provide enterprise-level support to all of DLA (such as DLA’s accounting system and headquarters staff), among other costs, are also included. The indirect costs included in rates vary among the different goods and services that DLA provides.

Distribution. DLA charges rates that are calculated by dividing the total processing costs (excluding transportation costs) for items in each weight category by the projected number of units shipped for each category.

2. Portion of Total Costs: DLA charges a portion of its total costs of providing a service based on the proportion of total workload projected for a specific customer, or at a uniform percentage across all customers. DLA uses this approach for disposition and some document services, as described below.

Disposition. DLA charges each customer a portion of the total direct and indirect costs of providing disposition services based on the customer's portion of total disposition service usage. When applicable, DLA subtracts the revenue it generates through the sale of excess property, reimbursements it receives from customers for hazardous waste management, and funding it receives for Overseas Contingency Operations from the total costs before assigning costs to customers.

Document Services: Electronic Document Access and Wide Area Workflow (Invoicing). For Electronic Document Access, DLA charges all customers a uniform percentage of its total costs for providing that service.
For Wide Area Workflow, DLA charges each customer a portion of the total costs based on the customer’s portion of total system usage.

3. Percentage Markup on Direct Costs: DLA adds a percentage markup on the cost to acquire each good (i.e., the product cost) as a proxy for non-acquisition costs associated with that good (i.e., non-product costs). DLA uses this approach for its weapons systems and troop support materiel supply chains.

Materiel Supply Chains. To calculate the cost recovery percentage, DLA divides the projected non-product costs for each materiel supply chain by the projected product costs of that materiel supply chain. The rate charged is the sum of the product cost of the good and an additional percentage of this product cost corresponding to the markup percentage.

Appendix V: Comments from the Department of Defense

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Alex Winograd (Assistant Director), Karyn Angulo, Martin de Alteriis, Garrick Donnelly, Christopher Gezon, Felicia Lopez, Keith McDaniel, Susan Murphy, Suzanne Perkins, Carol Petersen, Richard Powelson, Lauren Shaman, Kevin Walsh, and Doris Yanger made key contributions to this report.

Related GAO Products

Document Services: DOD Should Take Actions to Achieve Further Efficiencies. GAO-19-71. Washington, D.C.: October 11, 2018.

Defense-Wide Working Capital Fund: Action Needed to Maintain Cash Balances within Required Levels. GAO-17-465. Washington, D.C.: June 30, 2017.

Bulk Fuel: Actions Needed to Improve DOD’s Fuel Consumption Budget Data. GAO-16-644. Washington, D.C.: September 12, 2016.

Department of Justice: Working Capital Fund Adheres to Some Key Operating Principles but Could Better Measure Performance and Communicate with Customers. GAO-12-289. Washington, D.C.: January 20, 2012.
Intragovernmental Revolving Funds: Commerce Departmental and Census Working Capital Funds Should Better Reflect Key Operating Principles. GAO-12-56. Washington, D.C.: November 18, 2011. Federal User Fees: A Design Guide. GAO-08-386SP. Washington, D.C.: May 29, 2008.
Why GAO Did This Study

As DOD continues to focus its resources on improving military readiness and modernizing its forces, it seeks to minimize costs associated with its business operations. DFAS, DISA, and DLA are financed through the Defense-Wide Working Capital Fund (DWWCF). Collectively, they provide shared services and goods to their customers, including finance and accounting services; information technology services; and fuel provision and inventory management. Senate Report 115-262, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2019, includes a provision that GAO evaluate the activities DWWCF agencies fund through overhead charges and fees collected from customers. This report evaluates the extent to which DFAS, DISA, and DLA (1) have a process for setting rates to recover costs and provide transparent pricing to customers and (2) clearly delineate roles and responsibilities, measure performance, and assess resource requirements and customer needs. GAO reviewed relevant sections of DOD's Financial Management Regulation and agency documentation, interviewed officials from DFAS, DISA, and DLA and the military departments, and compared the agencies' management practices to the key operating principles for effective management of working capital funds.

What GAO Found

The Defense Finance and Accounting Service (DFAS), Defense Information Systems Agency (DISA), and Defense Logistics Agency (DLA) use a combination of approaches to set rates that are intended to recover their costs and equitably allocate costs to customers. However, DFAS, DISA, and DLA have not provided transparent pricing to the military departments, which are their largest customers. Each agency annually develops budget proposals designed to recover projected costs and account for gains or losses from prior years. DFAS, DISA, and DLA have taken steps intended to establish an equitable pricing methodology.
For example, DLA changed its pricing method for distribution services to align the rates customers pay with DLA's costs of providing the service. However, customers from the military departments said they lack visibility into the factors that determine their overall costs at one or more of the three defense agencies, including how indirect costs are allocated and included in the rates they are charged. GAO's review of the cost and rate documentation provided to the military departments also found that these documents contain only high-level information, such as rates and estimated workloads, and do not include details about the types of costs included or how they are calculated. Specifically: (1) DFAS informational briefings do not describe the types of costs included in rates and how those costs are calculated and allocated. As a result, customers from the Army and Navy said they were confused about why declines in their use of DFAS's services have not resulted in reduced costs. (2) DISA does not include in its documentation the methodology it uses to calculate its rates, making it difficult for officials from the Air Force to determine how they can manage their costs with DISA. (3) DLA does not provide detailed information on the costs included in its rates, making it difficult for customers from the Navy and Air Force to determine how to lower their costs or, in the case of the Air Force, understand the cost implications of DLA's newly announced pricing initiative. Because DFAS, DISA, and DLA share only high-level information on their rate-setting methodologies, the military departments have been limited in their abilities to understand and manage the costs they pay for the services they obtain. By providing more complete information on rate setting, including the calculation and use of costs, DFAS, DISA, and DLA could help their customers better manage their costs and make more informed budgeting decisions.
Improved transparency could also help customers anticipate how potential changes to the assumptions underlying rates could affect future costs. GAO also found that DFAS, DISA, and DLA clearly delineate roles and responsibilities, measure performance, and assess resource requirements and customer needs for goods and services, as called for by the three remaining key operating principles for effective working capital fund management. As a result, these agencies are positioned to promote a clear understanding of who will be held accountable for specific tasks or duties, reduce the risk of mismanaged funds, measure their operational performance and identify opportunities to improve performance, and use resources most effectively.

What GAO Recommends

GAO recommends that DFAS, DISA, and DLA provide more complete information to customers on their rate-setting methodologies. DOD concurred with GAO's recommendations.
Background

NMB’s Organization and Mission

NMB is headed by a three-member board, with each member appointed by the President and confirmed by the Senate for a term of 3 years. The board members provide overall leadership and strategic direction for NMB, and retain responsibility for key functions such as releasing parties from the mediation of major disputes if no agreement can be reached. In May 2018, NMB reorganized various agency components to improve its management and oversight of agency operations. This resulted in the creation of three mission areas and three mission support areas. The Offices of Fiscal Services and Information Services were newly created as a result of the delegation order (see fig. 1). In June 2019, NMB hired a Chief Financial Officer (CFO), who serves as the Director of the Office of Fiscal Services. The CFO has authority over NMB’s budget, accounting, and financial auditing functions. In January 2019, NMB hired a Chief Information Officer (CIO), who serves as the Director of the Office of Information Services. The CIO has authority over NMB’s information technology and related systems, including its electronic record keeping functions. All offices, along with NMB’s Designated Agency Ethics Official, report directly to the Board. Previously, the Offices of Administration, Mediation, and Arbitration reported to a Chief of Staff, a position that was eliminated in 2018. NMB’s overall mission is to provide for the independence of air and rail carriers and employees in matters of self-organization, help prevent interruption to commerce conducted through the operation of those carriers, administer adjustment boards, as well as develop complementary strategies to resolve disputes. NMB has three program areas to fulfill its mission:

Representation. Rail or air carrier employees select unions for the purposes of collective bargaining through secret-ballot elections conducted by NMB.
NMB is charged with resolving any questions concerning representation of a specific craft or class through the agency’s Office of Legal Affairs, and has sole jurisdiction to decide these disputes.

Mediation and Alternative Dispute Resolution. The RLA provides for mediation to help resolve disputes between management and labor during collective bargaining negotiations. When rail or air carriers and unions cannot reach agreement on the terms of a new or revised collective bargaining agreement – such as working conditions or rates of pay – either party can apply for NMB’s mediation services to resolve their differences. Additionally, NMB may impose mediation if it finds that resolving the dispute is in the public’s interest. NMB also offers grievance mediation to parties as an alternative way to resolve disputes filed for grievance arbitration. Although mediation is voluntary, it is a less expensive approach to resolving grievances, using NMB’s existing mediation staff rather than outsourcing—and paying—external arbitrators.

Arbitration. The RLA also offers grievance arbitration to help resolve disagreements between carriers and unions over how to interpret and apply provisions of existing collective bargaining agreements. NMB does not directly provide arbitration services, but rather maintains a list of registered arbitrators from which the parties can select someone to review and decide their case. In the airline industry, the parties pay the costs of arbitration. In the railroad industry, however, consistent with the requirements of the RLA, NMB pays the fee and travel expenses of the arbitrator.

Executive Branch Oversight of the NMB

The Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) have key oversight responsibilities for all federal agencies, including NMB. OMB is responsible for the oversight of NMB’s management and information technology.
OPM is the central personnel management agency of the federal government charged with administering and enforcing civil service laws, regulations, and rules. OPM annually administers surveys to federal employees across the government, including NMB, to solicit their views on their agencies, including agency leadership, collaboration, and other issues. OPM also offers various services to agencies to evaluate organizational climate. Federal law does not establish an Inspector General (IG) for NMB. However, the agency signed a Memorandum of Understanding (MOU) in 2018 with the National Labor Relations Board’s (NLRB) Office of Inspector General to provide independent audit and investigative oversight. In the MOU, the NLRB IG agreed to (1) operate a hotline for employees to anonymously submit information—via email or telephone messages—regarding fraud, waste, and abuse involving the NMB’s programs and operations, and (2) take action to address complaints, such as informing the appropriate law enforcement agency or the NMB Chairman or Board Members, as appropriate.

Federal Risk and Authorization Management Program (FedRAMP) Requirements

FedRAMP is a government-wide program that provides authorizations for use of cloud services. As an executive agency that uses a cloud service approved through FedRAMP, NMB is subject to related requirements. Through a December 2011 memorandum, OMB established requirements for executive agencies to use FedRAMP when conducting security authorizations for agency use of cloud services. In addition, the FedRAMP Program Management Office issued guidance in 2017 that specifies authorization requirements, including that an agency should document the authorization of the agency system supported by a cloud service approved through FedRAMP and the related cloud service used by the agency.

Prior GAO Work

GAO has issued three prior reports on NMB that collectively contained 13 recommendations.
NMB had previously implemented six of those recommendations, and seven remained open at the start of our current review. We issued our first report in December 2013 with seven recommendations in key management areas, including strategic planning, performance measurement, and workforce planning. We also suggested that Congress consider authorizing an IG at an appropriate federal agency to provide independent audit and investigative oversight at NMB. We issued a second report in February 2016, which found that NMB needed to take additional actions to implement the seven recommendations from our December 2013 report. We also made one additional recommendation related to procurement. We issued our third report in March 2018, which found that NMB had taken action to implement four of the recommendations from our December 2013 report and the recommendation from our February 2016 report. However, additional actions were needed to close the remaining three recommendations. We also made five additional recommendations related to the backlog of arbitration cases, outside employment, organizational climate, and NMB’s travel and telework policies.

Since 2018, NMB Has Fully Implemented One Open GAO Recommendation, but Additional Actions Are Needed to Fully Address Others and Meet New Information Security Requirements

NMB Has Fully Implemented One of GAO’s Seven Open Recommendations, but Shortcomings Persist in Other Areas

NMB implemented a recommendation from GAO’s 2018 report to create and monitor requests for outside employment, but has not taken action to fully implement the remaining six recommendations from GAO’s past reviews (see table 1). By not fully implementing these recommendations, NMB remains at risk in several areas key to its mission, including information privacy and security and organizational climate, among others.
Ethical Standards for Outside Employment and Activities

GAO 2018 Recommendation: Develop and implement policies for approval and monitoring of employee requests for outside employment and other outside activities to prevent violations of ethics rules, consistent with Office of Government Ethics standards of conduct and federal internal control standards.

Since our 2018 review, we found that NMB has developed and implemented policies for approving employee requests for outside employment and the agency monitors these requests. We reported in 2018 that NMB did not have a policy for approving and monitoring employee requests for outside employment consistent with the Office of Government Ethics (OGE) standards of conduct and federal internal controls. NMB also did not systematically track or monitor when managers or board members approved such activities for an employee. We recommended that NMB establish an outside employment policy and a system to monitor outside activities to help prevent violations of ethics rules. In our current review, we found that NMB has implemented our recommendation. NMB worked with OGE to develop a policy on outside employment that details how employees should submit outside employment requests, consistent with OGE standards. NMB has incorporated the policy into annual and new employee ethics training. Once NMB approves an outside employment request, the agency monitors outside employment through employees’ annual financial disclosure forms.

Rail Arbitration Case Backlog

GAO 2018 Recommendation: Develop and execute a plan to address the rail arbitration case backlog.

Since our 2018 review, we found that NMB has used several strategies to reduce its backlog by 57 percent; however, without a plan establishing specific goals and timeframes, it is difficult to track the agency’s progress against specific measures of success.
We reported in 2018 that NMB’s rail grievance arbitration case backlog had more than tripled since 2011, and that NMB did not have a specific plan and related processes to address it. However, identifying and assessing the risks associated with the backlog and developing a plan to effectively manage it are key to implementing effective risk management. In our current review, we found that NMB has implemented several initiatives to reduce the rail grievance arbitration case backlog, including removing older cases, using lead cases—cases that have the same parties and similar fact patterns, allowing a decision from one case to settle others—and promoting an “Ambassador Program” to move cases from grievance arbitration to grievance mediation. NMB officials credit these strategies with reducing the backlog from a height of 8,550 cases at the end of fiscal year 2017 by 4,852 cases—about 57 percent—to 3,698 cases as of the end of fiscal year 2019 (see table 2). 1. Removing older cases. NMB officials said that NMB has removed older arbitration cases that were filed, but had not yet been moved forward to arbitration. Specifically, officials explained that, in late summer 2018, NMB removed 400 cases from the backlog that were 3 years or older. NMB officials said that the agency subsequently removed 1,025 cases that were 2 years or older. NMB officials told us that parties may choose to re-file a removed case. NMB has not received objections from unions and carriers regarding the removal of older cases. 2. Using lead cases: For lead cases, NMB and the parties agree that the decision for one case will be used to settle other cases with similar fact patterns. For example, officials said that a similar fact pattern would be cases that had the same union and carrier and dealt with the same underlying issue. In fiscal year 2017, NMB used the decisions for nine lead cases to settle 4,240 additional claims. 
In fiscal year 2018, NMB used the decisions for four lead cases to settle 600 additional claims. 3. Promoting the Ambassador Program. NMB’s Ambassador Program involves NMB reaching out to parties to encourage them to voluntarily move cases from arbitration to grievance mediation. NMB has assigned experienced mediators to carriers and unions as “ambassadors.” Unions that have disputes with a carrier can raise the issue through the ambassador in hopes of avoiding the formal arbitration process; in that way, the Ambassador Program may proactively decrease the number of arbitration cases filed. NMB is interested in using the Ambassador Program to resolve multiple claims regarding the same issue, policy, or employment action. NMB officials said in fiscal year 2018, NMB had seven cases in the Ambassador Program and closed six cases. NMB officials said in fiscal year 2019, NMB had four cases in the Ambassador Program; none are closed to date. NMB officials said that the Ambassador Program and the lead case program are related, in that many of the cases moved through the Ambassador Program are lead cases. For example, NMB reported that in fiscal year 2018, one grievance mediation case was used to settle 300 claims. In fiscal year 2017, NMB heard five cases in the Ambassador Program, and the decisions on these cases were applied to 1,951 remaining claims to resolve them. In addition, NMB officials told us a small number of railway carriers and unions file the largest percentage of the grievance arbitration cases (see fig. 2). In fiscal year 2019, four railway carriers represented 72 percent of the backlog, and four railway unions represented 87 percent of the backlog. The Office of Arbitration seeks to coordinate with the organizations with the most arbitration cases to help them move toward mediation or other techniques to decrease the arbitration backlog. 
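As a quick arithmetic check on the case counts reported above, the reduction and percentage work out as follows (figures taken directly from the text):

```python
# Check of the reported backlog figures: 8,550 cases at the end of fiscal
# year 2017, down to 3,698 cases at the end of fiscal year 2019.
start_fy2017 = 8_550
end_fy2019 = 3_698

reduction = start_fy2017 - end_fy2019
print(reduction)  # 4852 cases removed from the backlog

pct_reduction = round(100 * reduction / start_fy2017)
print(pct_reduction)  # 57, matching the "about 57 percent" reported above
```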
Another method NMB reported using to reduce the backlog is to direct otherwise unobligated funding at the end of the fiscal year to fund arbitration cases, in addition to the amount of funds it had initially budgeted for arbitration. Specifically, NMB officials said that the agency allocated at least $1 million in additional funds in fiscal years 2017, 2018, and 2019 for arbitration cases at the end of each fiscal year, which allowed NMB to fund arbitration for approximately 4,200 more cases overall, closing nearly all of those cases. Officials said that these additional funds came from unfilled full-time equivalent staff position salaries and contracts that NMB did not award. Officials said they do not anticipate having similar amounts of funding available for arbitration in the future, once NMB hires staff and awards the contracts. While NMB has implemented various strategies to reduce the rail arbitration case backlog, it has not developed a plan to link the strategies to specific goals or timeframes. GAO’s Standards for Internal Control in the Federal Government state that management should define objectives in specific and measurable terms. Further, federal agencies are required to develop annual performance plans that measure performance to reinforce the connection between long-term strategic goals and the day-to-day activities of their managers and staff. NMB’s 2018 Annual Performance and Accountability Report does not link NMB’s efforts to reduce the backlog to specific and measurable objectives to assess their effectiveness. By developing specific and measurable objectives to reduce the overall backlog or any component thereof, NMB and Congress would be better able to assess NMB’s progress in reducing the backlog relative to its goals.

Organizational Climate Assessment

GAO 2018 Recommendation: Complete and take actions on the organizational climate assessment and survey results as a means to address employee concerns.
Since our 2018 review, we found NMB has completed an organizational climate assessment but has not taken actions to address the results of that assessment. We reported in 2018 that surveyed NMB employees expressed concerns about the organizational climate at NMB. In addition, NMB’s strategic plan called for an organizational climate assessment to be conducted by the end of calendar year 2015 and every 3 years thereafter. However, at the time of our 2018 report, NMB had not conducted such an assessment. In addition, NMB officials said that they did not take action in response to survey results, which had a 59 percent response rate, because they believed the negative responses were attributable to a few employees. GAO recommended that NMB conduct an organizational climate assessment and develop actions to address the results of that assessment. In our current review, we found that NMB conducted an organizational climate assessment and has taken some actions to address the elements identified in the assessment, but must take additional actions to address employee concerns. NMB worked with OPM to conduct an organizational climate assessment in April 2019. The assessment had a response rate of 95 percent. Several NMB officials said the agency achieved a higher response rate than prior surveys because the Board held an all staff meeting to emphasize the importance of taking the assessment. In May 2019, NMB received the results of the organizational climate assessment from OPM. NMB identified a lack of communication across departmental staff as an issue. To address this, NMB directed regular interdepartmental updates, where each quarter a department is given an opportunity to present the activities within that department. NMB officials said that NMB held its first interdepartmental update in October 2019, with the Office of Legal Affairs presenting. The next interdepartmental update is scheduled for February 2020. 
NMB has identified some additional potential actions to address issues raised by the organizational climate assessment, including directing NMB’s CFO to rewrite the travel policy and to work with OPM to identify recommended training for supervisors, among others. However, these potential actions are not finalized and generally lack timeframes for implementation. Until NMB takes these actions, its employees may be less engaged, which may lead to absenteeism or turnover.

Travel Policy

GAO 2018 Recommendation: Revise NMB’s Travel Policy and develop appropriate internal controls to ensure compliance with federal regulations.

Since our 2018 review, we found that NMB has not revised its travel policy to be consistent with the Federal Travel Regulation (FTR) issued by the General Services Administration. We reported in 2018 that NMB’s travel policy was, in some respects, not consistent with the FTR. NMB management had also granted NMB staff exceptions to the agency travel policy that were not consistent with the FTR. For example, the FTR requires employees to rent the least expensive car available, but a former NMB management official approved the use of a luxury rental car in some cases. Our 2018 report found that without greater oversight of employee travel expenses, NMB may be incurring unnecessary additional expenses for employee travel. In our current review, we found that NMB has not revised its travel policy to be consistent with the FTR. However, NMB’s Office of Fiscal Services plans to rewrite portions of the travel policy, including clarifying roles and responsibilities of NMB employees and adding a Frequently Asked Questions portion to the policy. NMB officials said the revised policy is expected to be completed in 2020, and will be reviewed by the CFO in consultation with the Office of Legal Affairs prior to its publication. The extent to which these changes will make NMB’s travel policy consistent with the FTR is unclear.
In addition, NMB has taken steps to strengthen its internal controls related to travel, including:

1. Replacing the Chief of Staff role in travel policy. In August 2018, NMB replaced references to the eliminated Chief of Staff position in its travel policy to make the Board the decision-making body for travel-related issues. This clarification strengthened internal controls because no one individual is singularly responsible for approval.

2. Updating NMB’s travel charge card program. In 2019, NMB transitioned to a new travel charge card program run by the General Services Administration. Both NMB and the Department of Treasury’s Bureau of Fiscal Services, which provides accounting services to NMB, routinely monitor the program, including monitoring each employee’s use of the travel card to ensure only appropriate official government-related expenses are being charged to the card. The CFO receives reports from this new program.

3. Issuing an interim procedure. Separately, NMB has established an interim procedure for disputed claims that sets timeframes for when vouchers must be approved to avoid delays in returning vouchers to travelers. The interim procedure requires travelers to cite the specific regulatory authority to support their disputed claim. The NMB Board is determining whether this procedure should be established officially in the travel policy.

While NMB has made these initial efforts to strengthen internal controls related to travel, such as increasing oversight from the Board, NMB has not revised its travel policy to be consistent with the FTR. For example, NMB has not updated its policy to clarify the use of personal credit cards as discussed in our 2018 review. Without an updated policy consistent with the FTR, NMB may be incurring needless additional expenses for employee travel.
Telework Policy

Since our 2018 review, we found that NMB has not yet revised its telework policy, but the agency has collected telework agreements and provided training for teleworking employees. We reported in 2018 that NMB’s telework policy was not consistent with the requirements of the Telework Enhancement Act of 2010, which requires employees to take telework training and have signed telework agreements prior to beginning telework, and that NMB did not consistently enforce its policy. NMB’s telework policy, effective October 2015, did not mention employee telework training, nor did management require employees to complete training before entering into a telework agreement, as required by federal law. In addition, management allowed employees to telework without a written telework agreement, even though this requirement is specified in NMB’s telework policy. NMB agreed to review its policy and make any revisions determined to be necessary. In our current review, we found that NMB now tracks telework training and agreements to ensure that teleworking employees have telework agreements and completed telework training prior to engaging in telework. However, NMB has not updated its telework policy to be consistent with the requirements of the Telework Enhancement Act of 2010, instead determining after reviewing its policy that a revision was unnecessary. Despite this determination, the telework policy, last updated in October 2015, does not reflect the current structure of NMB: for example, it includes responsibilities for the Chief of Staff, a position that no longer exists. Further, the policy does not mention employee telework training. Until NMB updates its policy, it will continue to be outdated regarding official responsibilities and inconsistent with relevant law.
NMB Has Not Fully Implemented Key Information Privacy and Security Practices, or Met Recent Information Security Requirements Information Privacy GAO 2013 Recommendation: Establish a privacy program that includes conducting privacy impact assessments and issuing system of record notices for systems that contain personally identifiable information. Since our 2018 review, we found that NMB has not always followed key information privacy practices to protect personal information federal agencies collect. In our 2018 review, we found that NMB did not establish a privacy program that included practices such as conducting privacy impact assessments and issuing system of records notices for systems that contain personally identifiable information. For example, in our 2018 review, we found that while NMB designated a privacy officer, the agency did not conduct privacy impact assessments for its systems and those of third-party providers containing the agency’s personally identifiable information. In our current review, we found that, of the four key information privacy practices described in our 2013 report, NMB is still following one, partially following two, and minimally following one practice. For example, NMB documented a privacy impact assessment dated July 2018. However, the assessment did not specify whether a system of records notice would be developed as required by OMB. For additional details on the extent to which NMB is following key information privacy practices, see appendix II. Information Security GAO 2013 Recommendation: Develop and fully implement key components of an information security program in accordance with the Federal Information Security Management Act of 2002. Since our 2018 review, we found that NMB continues to only partially follow the eight key information security practices in accordance with the Federal Information Security Management Act (FISMA). 
These practices include developing and implementing risk-based policies and procedures to ensure compliance with applicable standards and guidance, including system configuration requirements. For example, in our 2018 review, we found that, while NMB had its information security policy documented in its April 2016 Information Program Plan, which included risk assessment requirements, NMB had not developed agency-wide policies and procedures on the oversight of its third-party providers that support the operations and assets of the agency, as required by FISMA. In our current review, we found that, while NMB has created a policy to conduct periodic risk assessments of cyber threats and vulnerabilities, the agency did not provide risk assessment documentation of its enterprise network for fiscal year 2019. NMB officials said that the agency had not fully addressed information security practices due to a lack of resources. NMB officials stated the agency plans to address several of these practices with targeted completion expected in fiscal year 2020. As a step to further focus on information technology challenges, NMB established the Office of Information Services and, as noted earlier, hired a CIO. While hiring a CIO does not directly address the practices described above, NMB officials said that these actions, along with hiring more staff and making key acquisitions through contracts, will enable NMB to fully follow the practices in the future. For additional details on the extent to which NMB is following key information security practices, including NMB’s recent engagement of contractors, see appendix II. In addition to the gaps in key information security practices discussed above, we found in our current review that NMB has not fully implemented federal requirements related to authorizing the cloud service approved through FedRAMP that the agency uses.
OMB defines an authorization to operate as an official management decision where a federal official or officials authorize the operation of information system(s) and accept the risk to agency operations and assets, individuals, and other organizations based on the implementation of security and privacy controls. OMB requires agencies to use FedRAMP processes when granting authorizations to operate for their use of cloud services. The FedRAMP Program Management Office published guidance in 2017 to describe the process by which agencies can reuse existing authorizations. According to the FedRAMP guidance, agencies should document the authorization of 1) the agency system supported by the cloud service; and 2) the cloud service used by the agency. Additionally, the agency should provide a copy of its authorization letter for the cloud service to the FedRAMP Program Management Office so that the office can verify the agency’s use of the service and keep agencies informed of any changes to a provider’s authorization status. These steps ensure that federal agencies have made a determination of whether the cloud service provider’s risk posture is acceptable for use at that agency. According to NMB, the agency is using a cloud service that was approved through FedRAMP to support the agency’s enterprise network. NMB had documented the authorization of its enterprise network, but NMB had not documented its authorization of the cloud service to demonstrate that it had accepted the risk of using the service. In addition, NMB had not provided the authorization letter for the cloud service to the FedRAMP Program Management Office. NMB officials stated that the agency’s internal information security guidance did not include procedures to address FedRAMP requirements because the officials were unaware of those requirements. 
Without taking these steps, the FedRAMP Program Management Office may not be able to inform NMB, in a timely manner, if its cloud service provider has experienced a security incident.

NMB Lacks Effective Internal Controls to Manage and Oversee Its Annual Appropriation and Audit Policy

NMB has taken steps to improve its agency management and oversight, such as reorganizing some agency mission areas and filling key staff positions; however, it lacks effective internal controls to manage and oversee its annual appropriation and ensure that its audit policy is consistently followed. As a result, the agency did not use funding the Board said is needed to accomplish NMB goals. From fiscal years 2016 through 2019, NMB accumulated about $4 million in unobligated appropriations in expired accounts at the U.S. Treasury, funds that are unavailable to NMB for new obligations. In addition, NMB has not taken corrective actions to address management deficiencies identified during audits.

NMB Lacks Effective Internal Controls to Manage and Oversee Its Annual Appropriations

NMB has not established effective internal controls to assist the agency in managing and overseeing its annual appropriations. NMB has had significant unobligated balances remaining for the last 4 fiscal years, even though officials said they could not accomplish some of the agency’s goals, such as hiring staff and information technology initiatives, due to a lack of financial resources (see table 3). For example, from fiscal years 2016 through 2019, NMB had unobligated balances ranging from approximately $600,000 to over $2 million; these are the funds remaining from the appropriations it received each year over that period. In total, over 8 percent of NMB’s appropriations for the last 4 fiscal years went unobligated.
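The scale of the unobligated balances can be illustrated with simple arithmetic. The annual figures below are hypothetical, chosen only to be consistent with the ranges reported here (roughly $600,000 to over $2 million unobligated per year, more than $4 million in total, and over 8 percent of 4 years of appropriations); they are not NMB’s actual budget data.

```python
# Illustrative check of the unobligated-balance figures described above.
# All dollar amounts are hypothetical and chosen only to fall within the
# ranges this report describes -- they are not NMB's actual appropriations.
appropriations = {2016: 13_200_000, 2017: 13_200_000,
                  2018: 13_200_000, 2019: 13_200_000}
unobligated = {2016: 600_000, 2017: 700_000,
               2018: 900_000, 2019: 2_100_000}

total_appropriated = sum(appropriations.values())
total_unobligated = sum(unobligated.values())
pct_unobligated = 100 * total_unobligated / total_appropriated

print(f"Total unobligated, FY2016-2019: ${total_unobligated:,}")
print(f"Share of 4-year appropriations unobligated: {pct_unobligated:.1f}%")
```

Under these illustrative figures, the 4-year unobligated total exceeds $4 million and represents just over 8 percent of total appropriations, matching the relationship GAO describes.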
NMB officials noted that hiring challenges and uncertainty regarding the agency’s final appropriation as a result of continuing resolutions—legislation that continues to fund federal agencies until final agency appropriations for a fiscal year are made—kept the agency from obligating funds during those fiscal years to achieve its goals. For example, NMB officials said that the Board did not pursue certain planned hiring, as well as other contract actions and travel, because of uncertainty about the amount of final appropriations that would be available. GAO has reported that continuing resolutions present challenges for federal agencies, and that agencies may not have enough time to spend funding on high-priority needs such as hiring. However, given the frequency of continuing resolutions, it is even more important for NMB to develop an effective plan to use its appropriations to accomplish agency goals. During our review, we found that NMB struggled to plan effectively for contingencies such as funding under continuing resolutions, although NMB’s budget request and appropriations were generally consistent for several years. Additionally, NMB officials told us they lacked an effective process to reliably forecast the amount of funding the agency would have remaining at the end of a fiscal year, and we found NMB did not plan effectively to allow the agency to obligate its fiscal year appropriations. NMB officials said the agency waited until the end of the third quarter to assign unobligated funds to other priorities in order to allow for the option of retaining temporary services during periods of high demand. Although NMB was able to reassign at least $1 million to arbitration work in each of the fourth quarters in 2017, 2018, and 2019, there was insufficient time to use other available funding in additional areas of need. The Board has taken steps to improve its budget execution process.
In particular, the Board has implemented new bi-weekly budget reviews with the CFO meant to help NMB better forecast the agency’s available funds, including more reliably predicting the amount of unobligated funds and how to use those funds to meet agency goals. However, these changes have not been incorporated into a formal, written process to help NMB manage its appropriations more effectively to achieve agency goals. One goal under NMB’s Strategic Plan is to provide timely, efficient, and responsible stewardship of agency fiscal resources. Federal internal control standards state that internal controls comprise the plans used to fulfill the goals of the agency, and we have reported that maintaining written policies and procedures can help ensure that adequate internal controls are in place. Further, those standards state that management should obtain reliable financial data on a timely basis to enable effective monitoring. Until NMB establishes and documents an effective plan to manage its appropriations, as well as timely, reliable financial data, it may miss opportunities to achieve its objectives as efficiently and effectively as possible. NMB Lacks Effective Internal Controls to Ensure that It Consistently Follows Its Audit Policy to Identify and Address Audit Deficiencies NMB lacks effective internal controls to ensure that it consistently addresses deficiencies identified from financial and other audits. For example, NMB did not follow its own requirements to create corrective action plans to address findings of financial audits or GAO recommendations. Under agency policy, those corrective action plans should detail major steps for NMB to take, estimated completion dates, and other related information. 
Although NMB provided its financial auditors and GAO with general plans to address findings and recommendations, those plans have not always included major steps or estimated completion dates, and NMB has not always followed through with the steps it agreed to take. For instance, NMB’s financial auditor noted a deficiency in NMB’s internal controls related to financial reporting in 2017, and noted a similar deficiency in 2018 because NMB still had not addressed the problem sufficiently. Effective remediation of internal control deficiencies, like those found by GAO and other audits, is essential to achieving the objectives of the Federal Managers’ Financial Integrity Act, as amended (FMFIA). Unless NMB follows its own policy and federal guidance on corrective action plans, it may not do what is needed to address the risks associated with any deficiency. Likewise, NMB did not follow its policy to circulate draft financial audit findings and provide a draft response to the Board. When NMB received notice of a 2018 draft management letter from its independent financial auditors, the letter was not circulated for over 5 months, nor was the Board provided with any draft response to the findings. Moreover, although NMB’s Board was notified of the letter’s existence in November 2018, the Board did not ask for the letter prior to May 2019, and said instead that it relied on the official in charge of the audit to follow procedure. Federal internal control standards state that management should obtain relevant data, including compliance data, in a timely manner so that they can be used for monitoring, but NMB officials and the Board did not obtain such information, putting the agency at risk for missed opportunities to identify and address audit deficiencies. Additionally, NMB has not effectively monitored the sufficiency of its internal controls as required under FMFIA.
NMB has also not conducted its planned fiscal year 2017 internal controls review of its Office of Mediation or its fiscal year 2018 internal controls review of its Office of Legal Affairs in order to complete its annual review and report under FMFIA. Monitoring the effectiveness of internal controls provides the basis for an agency’s annual assessment and report of internal control, as required by FMFIA. NMB officials said the agency had not completed those reviews in a timely manner due to the timing of multiple audits occurring at NMB. NMB recently scheduled those reviews for 2020. Without monitoring its internal controls, NMB may not identify and be able to address significant management problems that can impede the agency’s ability to achieve its goals. Although NMB has identified and taken steps to address some of these audit and internal control deficiencies, it has not established an effective process to consistently monitor adherence to its audit policy and federal standards, evaluate the results, and remediate any deficiencies. For example, NMB has revised its audit policy to assign responsibility for audits and related follow-up to the CFO, who is tasked with helping NMB develop appropriate corrective action plans. Additionally, the Board said it addressed the issue of not circulating the audit management letter with the responsible official and changed the protocols for circulating letters for audit findings to include the Board in addition to the CFO. However, these actions, by themselves, do not establish the monitoring activities required by NMB’s audit policy and federal internal control standards. 
Under NMB’s new audit policy, the Board has responsibility to provide top-level oversight of NMB’s management activities related to audit coordination and follow-up; federal internal control standards require management to establish and operate monitoring activities to monitor the internal control system, evaluate the results, and remediate identified deficiencies on a timely basis. Further, FMFIA requires regular evaluation of the sufficiency of an agency’s internal controls. NMB’s failure to conduct the necessary reviews to support its annual assertion under FMFIA hampers the agency’s ability to identify risks in its internal controls and to correct any associated material weaknesses, and deprives Congress of information necessary to oversee the agency. Further, by not following its own policies and federal internal control standards, NMB may miss opportunities to improve its ability to achieve objectives, address audit deficiencies, and improve management oversight.

Conclusions

NMB has fully implemented one of the seven recommendations still open from prior GAO reports: creating standards on outside employment, which will help prevent employee violations of ethics rules. However, while making varying degrees of progress on the others, NMB still has more work to do to implement the six remaining recommendations. NMB has decreased its backlog of rail arbitration cases, but it has no specific goals against which to measure its progress toward reducing the backlog and to ensure that NMB and Congress can adequately assess NMB’s resolution of disputes. Likewise, while the Board’s implementation of the climate assessment illustrated that it recognizes the need to understand employee concerns regarding communication across teams, agency travel, and training for management, among other things, it has not fully executed plans to address those concerns in order to benefit from that assessment.
Finally, while NMB has improved certain aspects of how it implements its travel and telework policies, it has not sufficiently changed the policies themselves to ensure that NMB policies are consistent with the Federal Travel Regulation and the Telework Enhancement Act of 2010, respectively. Moreover, NMB established the Office of Information Services and hired a new CIO to assist NMB in addressing information security and privacy recommendations, but NMB still must change its underlying information policies and procedures, including updating its information privacy policy to reflect the current structure of NMB and performing a review of its system security plans. Additionally, until NMB complies with the recent FedRAMP requirements, its data may be at greater risk in the event of a security incident. Without fully implementing the remaining six recommendations and addressing the recent FedRAMP requirements, NMB is missing opportunities to mitigate information security risks and improve its own management and performance. Moreover, NMB faces challenges in managing and overseeing its annual appropriation and audit policy as a result of ineffective internal controls. Specifically, as a result of ineffective internal controls for managing and overseeing its annual appropriation, NMB has forgone several million dollars in funding that could have been used to accomplish agency goals. While continuing resolutions can make it difficult for agencies to achieve hiring and other goals, until NMB develops a written plan to document NMB’s process for reviewing and monitoring the agency’s annual appropriation to effectively manage its budgetary resources and spending, NMB will likely continue to miss opportunities to accomplish its goals.
Similarly, until NMB establishes a specific process for the Board to monitor and evaluate NMB’s adherence to audit protocols, NMB will not be well positioned to address audit recommendations from its financial auditors and GAO, hindering efforts to improve its operations. While NMB officials have told us that they did not have the resources for certain changes that we recommended, such as information security and privacy improvements, they had more resources than they actually used, as evidenced by unused appropriations. Given the range of management issues that have remained unaddressed over the past 6 years, NMB should ensure its available resources are used effectively.

Recommendations for Executive Action

We are making the following four recommendations to the National Mediation Board (NMB):

1. The Chairman of the NMB should document NMB’s authorizations for its use of cloud services approved through FedRAMP and submit the authorizations to the FedRAMP Program Management Office. (Recommendation 1)

2. The Chairman of the NMB should update NMB’s security policies and procedures to include FedRAMP’s authorization requirements. (Recommendation 2)

3. The Chairman of the NMB should develop a written plan to document NMB’s process for reviewing and monitoring the agency’s annual appropriation to ensure that funds are used effectively. (Recommendation 3)

4. The Chairman of the NMB should establish a process for the Board to effectively monitor and evaluate NMB’s adherence to audit protocols and implementation of actions to address audit recommendations. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to the National Mediation Board (NMB) for review and comment. The agency provided written comments, which are reproduced in their entirety in appendix III. NMB also provided technical comments, which we incorporated as appropriate. NMB agreed with our four recommendations, and stated that it would take actions to address them.
With regard to our first two recommendations concerning the Federal Risk and Authorization Management Program authorizations, NMB stated that it plans to complete the required actions by the end of fiscal year 2020. While NMB stated that it would take actions to address our third and fourth recommendations, concerning improvements to better monitor its annual appropriations and adhere to audit protocols to implement audit recommendations, respectively, NMB did not provide a timeframe for when these actions would be completed. NMB also said that it is taking actions to fully implement the remaining recommendations from our prior reports concerning its rail arbitration case backlog, organizational climate assessment, travel and telework policies, and information privacy and security. We are sending copies of this report to the appropriate congressional committees, NMB, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: National Mediation Board Documents Compared with Statutory and Policy Requirements

Outside Employment

Appendix II: Status of National Mediation Board Practices in Information Privacy and Security

Information privacy practices:

1. Designate a senior agency official for privacy. Extent NMB is following: Following. NMB appointed a senior agency official for privacy in April 2019 and documented the assignment through a memorandum.

2. Establish privacy policies and procedures. Extent NMB is following: Partially following. NMB established a privacy policy dated October 2017 that includes procedures for protecting sensitive information, including personally identifiable information. However, the policy reflects outdated roles and responsibilities. For example, the policy reflects the role of chief of staff that no longer exists in the agency. An NMB official stated the agency engaged a technical writer (contractor) to update the policy by the end of fiscal year 2020.

3. Conduct privacy impact assessments for systems containing personally identifiable information. Extent NMB is following: Partially following. NMB documented a privacy impact assessment dated July 2018. However, the assessment did not specify whether a system of records notice would be developed as required by the Office of Management and Budget (OMB). An NMB official stated the agency engaged an information system security officer (contractor) to address this practice by the end of fiscal year 2020.

4. Issue system of records notices. Extent NMB is following: Minimally following. NMB did not issue a system of records notice for its enterprise network and did not provide any documentation that this notice was not required in the agency’s privacy impact assessment. An NMB official stated the agency engaged an information system security officer (contractor) to address this practice by the end of fiscal year 2020.

Information security practices:

1. Conduct periodic risk assessments. Extent NMB is following: Partially following. NMB developed an Information Program Plan dated April 2016 that states the agency annually conduct a risk analysis. NMB had assessments of its enterprise network conducted in May 2016 and November 2017. NMB also completed an information system risk assessment dated October 2017 that identifies and describes threats. However, NMB did not provide any assessment documentation for its network in fiscal year 2019. An NMB official stated the agency engaged a security assessor (contractor) to address this practice by the end of fiscal year 2020.

2. Develop risk-based policies and procedures to ensure compliance with applicable standards and guidance, including system configuration requirements. Extent NMB is following: Partially following. NMB has developed an information security policy by documenting its existing April 2016 Information Program Plan. While the policy includes risk assessment requirements, it does not reflect oversight of NMB third-party providers. An NMB official stated that the agency engaged a technical writer (contractor) to address this practice by the end of fiscal year 2020.

3. Develop subordinate plans that cover networks, facilities, and systems or groups of systems, as appropriate. Extent NMB is following: Partially following. NMB’s current system security plan for its enterprise network has been in place since March 2016. However, the plan does not include full implementation details on operational controls or a rationale on why controls are not applicable as recommended in National Institute of Standards and Technology guidance. An NMB official stated that the agency engaged an information system security officer (contractor) to address this practice by the end of fiscal year 2020.

4. Provide security awareness training. Extent NMB is following: Partially following. NMB has security awareness training guidelines signed April 2016 that specify agency employees and contractors will receive annual security awareness training. An NMB official stated that security awareness training is to be conducted each fiscal year. However, an NMB official stated the agency did not provide security awareness training in fiscal year 2018. NMB provided that training in fiscal year 2019, and an NMB official said the agency engaged an information system security officer (contractor) to address this practice by the end of fiscal year 2020.

5. Periodically test and evaluate security controls. Extent NMB is following: Partially following. In May 2016, NMB’s enterprise network was independently tested by the Department of the Treasury’s Bureau of Fiscal Service Division of Security Services. In addition, an NMB official documented a security assessment for the network signed November 2017. However, NMB did not provide us with any additional documentation to show the enterprise network was assessed in fiscal year 2019. According to an NMB official, the agency engaged a security assessor (contractor) to address this practice by the end of fiscal year 2020.

6. Establish a process for planning and documenting remedial actions. Extent NMB is following: Partially following. NMB addressed this practice in its Information Program Plan dated April 2016. In addition, the agency documented a plan of actions for its enterprise network dated January 2018. However, the plan of actions did not fully meet OMB requirements such as planned completion dates and changes to milestones, among other things. An NMB official stated that the agency engaged an information system security officer (contractor) to address this practice by the end of fiscal year 2020.

7. Develop procedures for detecting, reporting, and responding to incidents. Extent NMB is following: Partially following. NMB’s security-incident procedures dated June 2016 include information on handling cyber incidents. However, the procedures did not include required actions specified by the Federal Information Security Modernization Act of 2014, such as notifying the federal information security incident center, law enforcement agencies, and relevant offices of inspector general and general counsel. An NMB official stated the agency engaged a technical writer (contractor) to address this practice by the end of fiscal year 2020.

8. Establish plans and procedures for continuity of operations. Extent NMB is following: Partially following. NMB documented a continuity of operations plan policy dated March 2016. However, the agency has not documented a contingency plan for its enterprise network. An NMB official stated the agency engaged an information system security officer (contractor) to address this practice by the end of fiscal year 2020.

A system of records notice identifies, among other things, the individuals covered by information in a system of records, the category of records that are maintained about the individuals, and how the information is shared and routinely used by the agency.

Appendix III: Comments from the National Mediation Board

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Mary Crenshaw (Assistant Director), Andrew Nelson (Analyst-In-Charge), Cindy Brown Barnes, Larry Crosland, Mikey Erb, Chelsa Gurkin, John Lack, and Dana Pon made significant contributions to this report. Also contributing to this report were Shirley Abel, Amy Anderson, Bill Anderson, J.
Howard Arp, Gary Bianchi, Rachael Chamberlin, Vijay D’Souza, Robert Graves, Carol Henn, Janice Latimer, Barbara Lewis, Benjamin Licht, Jessica Orr, Monica Perez-Nelson, James Rebbe, Constance Satchell, Monica Savoy, Almeta Spencer, Sabrina Streagle, Barbara Steel, Amy Sweet, Curtia Taylor, Candice Wright, and Paul Wright.
Why GAO Did This Study NMB was established under the Railway Labor Act to facilitate labor relations for airline and railway carriers by mediating and arbitrating labor disputes and overseeing union elections. The FAA Modernization and Reform Act of 2012 included a provision for GAO to evaluate NMB programs and activities every 2 years. GAO's previous reports, issued in December 2013, February 2016, and March 2018, included 13 recommendations for NMB based on assessments of policies and processes in several management and program areas. NMB had implemented six of those recommendations previously, leaving seven for our review. This fourth report examines the (1) extent to which NMB has taken actions to fully implement GAO's remaining recommendations, and (2) other challenges NMB faces in key management areas and in overseeing its operations. GAO reviewed relevant federal laws, regulations, and NMB documents, such as its travel and telework policies; examined arbitration caseload data and the results of NMB's 2019 Organizational Climate Assessment; and interviewed NMB officials. What GAO Found The National Mediation Board (NMB), which facilitates labor relations for airline and railway carriers, has implemented one of GAO's seven recommendations remaining from past reports (see table). Specifically, NMB has developed a policy to prevent violations of ethics rules regarding outside employment and monitors compliance with that policy. NMB has not yet fully implemented the other six recommendations. For example, NMB has developed some strategies to reduce its arbitration case backlog, but lacks a plan with goals and time frames to complete that work. Similarly, NMB has completed an organizational climate assessment, but still must take additional actions to address employee concerns. By not fully implementing these and other recommendations, NMB remains at risk of not fulfilling its mission in several key areas, including information security and organizational climate. 
In this review, GAO found that, in addition to the six unimplemented recommendations, NMB lacks internal controls to effectively manage and oversee its appropriations and consistently follow its audit policies. NMB officials said the agency needed its full funding to address various agency priorities, such as hiring information technology specialists, but NMB did not use all of its funding for fiscal years 2016 through 2019, leaving a total of more than $4 million unobligated from those years; those funds are not available to NMB for new obligations. Officials said that hiring challenges and uncertainty concerning the agency's final appropriations made managing its budget resources difficult. NMB has a new process to monitor its budget resources, but has not documented that process. Without documenting that process, NMB may not be certain it uses its funding effectively to achieve its hiring and other goals. Additionally, NMB has not consistently followed its audit policy to address deficiencies identified in financial and other audits. For example, NMB did not create specific corrective action plans to address findings from financial or GAO audits. The NMB Board said it relied on senior managers to follow procedures, but the Board is ultimately responsible for ensuring that its managers implement the internal control system. Without a process to effectively oversee and evaluate its adherence to internal controls and its own audit policies, NMB may miss opportunities to achieve objectives, address audit deficiencies, and improve management oversight. What GAO Recommends GAO is making four recommendations, including that NMB document its process for reviewing and monitoring the agency's annual appropriations to ensure effective use of funds, and establish a process for the Board to effectively monitor and evaluate NMB's adherence to audit policies. NMB agreed with GAO's recommendations.
Background Uranium Enrichment and Reprocessing of Spent Nuclear Fuel Uranium enrichment is the process of increasing the concentration of the uranium-235 isotope relative to uranium-238 in a quantity of uranium. Natural uranium consists of approximately 0.7 percent fissile uranium-235, while uranium used in commercial nuclear power reactors generally consists of 3 to 5 percent uranium-235, and uranium for nuclear weapons requires a higher concentration of uranium-235. In addition, as a nuclear reactor operates, some of the uranium in the reactor fuel is converted to plutonium, which can also be used as a weapons material when it is separated from other elements of the irradiated, or spent, fuel through a process known as reprocessing. Plutonium and enriched uranium are “special nuclear material” under the Atomic Energy Act (AEA). The processes for obtaining such material—enrichment and reprocessing—are called sensitive nuclear technologies. The Treaty on the Nonproliferation of Nuclear Weapons, the International Atomic Energy Agency, and Safeguards Under the Treaty on the Nonproliferation of Nuclear Weapons, which came into force in 1970, non-nuclear weapon state parties to the treaty may not acquire nuclear weapons and must conclude a Comprehensive Safeguards Agreement (CSA) with the International Atomic Energy Agency (IAEA). IAEA is an independent international organization affiliated with the United Nations that has the dual mission of promoting the peaceful uses of nuclear energy and verifying, through a set of technical measures called safeguards, that nuclear technologies and materials are not diverted from peaceful uses to military purposes. Most countries have also brought into force an Additional Protocol to their CSAs, which provides IAEA with a broader range of information on the country’s nuclear and nuclear-related activities than under a CSA alone and gives the agency’s inspectors access to an expanded range of locations. 
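The enrichment concentrations above imply how much natural uranium feed an enrichment plant consumes per unit of reactor fuel produced. As a rough sketch (the tails assay of 0.3 percent is an illustrative assumption, not a figure from this report), a uranium-235 mass balance gives:

```latex
% Mass balance on total uranium and on uranium-235:
%   F = P + T,  and  F x_F = P x_P + T x_T
% where F, P, T are feed, product, and tails masses and
% x_F, x_P, x_T their uranium-235 fractions.
\[
\frac{F}{P} \;=\; \frac{x_P - x_T}{x_F - x_T}
\;=\; \frac{0.040 - 0.003}{0.007 - 0.003} \;=\; 9.25
\]
```

That is, producing 1 kilogram of 4 percent enriched reactor fuel under these assumed values requires roughly 9 kilograms of natural uranium feed; weapons-grade concentrations require correspondingly far more feed and separative work.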
For example, the Additional Protocol requires states to declare the location and status, among other things, of uranium mines and uranium and thorium mills. Under a CSA alone, material in mining or ore processing activities (e.g., uranium at mines and mills) is not subject to the agency’s safeguards, as it is not yet suitable for enrichment. The United States promotes universal adoption of the Additional Protocol as a policy, but it is not a requirement for the conclusion of a nuclear cooperation agreement with the United States. Figure 1 shows the safeguards arrangements of the partners with which the United States has nuclear cooperation agreements. The U.S. Legal Basis for Nuclear Cooperation Section 123 of the AEA establishes a framework for civilian nuclear cooperation agreements, which are a prerequisite for the export of certain nuclear material and equipment, including major components of nuclear reactors. The United States has 23 such agreements with other nations and entities. Section 123 generally requires that nuclear cooperation agreements include nine nonproliferation conditions, such as a guarantee from the cooperating party that transfers will not be used for any military purpose. The President may exempt an agreement from any of these requirements, provided that the President determines that the inclusion of any such requirement would be seriously prejudicial to United States nonproliferation objectives or otherwise jeopardize the common defense and security. See table 1 for a list of the nine requirements. Section 123 of the AEA also requires that State supply the President with an unclassified Nuclear Proliferation Assessment Statement (NPAS) for each proposed agreement, accompanied by a classified annex prepared in consultation with the Director of National Intelligence. 
The NPAS describes how the agreement meets AEA nonproliferation requirements and usually includes an overview of the other party’s nuclear energy program and related infrastructure, nonproliferation policies, and relations with countries of proliferation concern. Section 123 also lays out requirements for informing congressional committees and obtaining congressional review. It requires that the President submit any proposed agreement along with the NPAS to the House Committee on Foreign Affairs and the Senate Committee on Foreign Relations for consultation for a period of at least 30 days of continuous session. The proposed agreement, with the NPAS, must subsequently be submitted to Congress as a whole (and referred to the abovementioned committees) for a period of 60 days of continuous session, during which the committees consider it and submit recommendations to the House and Senate, respectively, as to whether to approve the agreement. As a general matter, the agreement may then be brought into effect unless a joint resolution of disapproval is enacted before the end of this period. Section 123 also requires the President to keep the abovementioned committees “fully and currently informed” of any initiative or negotiations relating to a new or amended agreement for peaceful nuclear cooperation. Figure 2 depicts the stages and time frames for negotiation and conclusion of nuclear cooperation agreements. Another section of the AEA, Section 57(b), governs the direct or indirect engagement or participation in the development or production of special nuclear material outside the United States. Under this provision, DOE regulates exports of commercial nuclear technology and assistance. DOE has promulgated these regulations at 10 C.F.R. Part 810; authorizations under these regulations are accordingly referred to as “Part 810 authorizations.” Activities authorized under Section 57(b) do not necessarily require a nuclear cooperation agreement. 
The Secretary of Energy signed seven “Part 810” authorizations for the export of nuclear technology to Saudi Arabia between December 2017 and February 2019. For more information about Part 810, see table 2. Enrichment and Reprocessing Commitments in Nuclear Cooperation Agreements In negotiating nuclear cooperation agreements, the United States has sometimes pursued nonproliferation measures beyond the nine conditions specified by the AEA. For example, the agreement that the United States concluded with the United Arab Emirates (UAE) in 2009 includes a provision in which UAE agreed to forswear enrichment and reprocessing capabilities. This broad restriction on any enrichment and reprocessing, which became known as the “gold standard,” goes beyond the enrichment and reprocessing restriction required by Section 123 of the AEA, because it applies to all nuclear material rather than just U.S.-obligated material. U.S.-obligated material includes material transferred by the United States or material used in, or produced through, the use of material or facilities transferred by the United States. Following the conclusion of the UAE agreement, the NSC deliberated requiring the so-called “gold standard” for all nuclear cooperation agreements as a policy, but ultimately adopted a policy of pursuing it on a case-by-case basis. The nuclear cooperation agreement that the United States concluded with Taiwan in 2014 included a similar provision. By contrast, the agreement concluded with Vietnam the same year includes a political commitment, rather than a legal one, not to acquire enrichment and reprocessing capabilities. Role of Agencies in Nuclear Negotiations In addition to the roles of State, DOE, and the NSC discussed previously, additional U.S. agencies such as Commerce, DOD, and NRC are involved in matters related to international nuclear cooperation and the negotiation and conclusion of a nuclear cooperation agreement. 
Table 2 describes agency roles related to nuclear cooperation. Stakeholders Have Identified a Range of Potential Nonproliferation Benefits and Concerns Stakeholders we interviewed identified various potential nonproliferation benefits and concerns related to negotiating a nuclear cooperation agreement with Saudi Arabia. Specifically, stakeholders identified the following benefits: A nuclear cooperation agreement would limit production of weapons-usable material. Several stakeholders told us that a nuclear cooperation agreement with Saudi Arabia would give the United States the opportunity to directly restrict Saudi Arabia’s proliferation potential. For example, a U.S.-Saudi nuclear cooperation agreement would include a term required by the AEA that would limit Saudi Arabia’s production of weapons-usable material by prohibiting Saudi Arabia from separating plutonium accumulated in any reactor supplied under the agreement without U.S. consent. According to some stakeholders, other potential supplier countries likely would not impose such restrictions as conditions of supplying Saudi Arabia with nuclear materials or equipment. Cooperation would help the United States retain influence. Several stakeholders noted that nuclear cooperation with Saudi Arabia could help revitalize the United States as a global nuclear supplier, which would help the United States retain its current influence over global nonproliferation norms and rules. For example, as a global nuclear supplier, the United States would have greater influence in international nuclear forums such as the Nuclear Suppliers Group, which establishes nonproliferation guidelines. According to one stakeholder, the United States’ political leverage to promote strong global nonproliferation norms depends upon the United States’ retaining a leadership role in nuclear energy. Another stakeholder said that nuclear cooperation agreements provide the United States with influence over countries’ proliferation decisions. 
For instance, this stakeholder said that nuclear cooperation agreements include legal conditions that reinforce the legal obligations of the Treaty on the Nonproliferation of Nuclear Weapons and create an additional disincentive to violate those conditions or withdraw from the treaty. Stakeholders we interviewed also identified several proliferation concerns that U.S.-Saudi nuclear cooperation may not mitigate, and could potentially aggravate. According to these stakeholders, concerns include the following: Concerns about stated Saudi nuclear weapon ambitions and commitment to obligations. Some stakeholders expressed concern over Saudi officials’ stated interest in acquiring nuclear weapons. As previously noted, senior Saudi officials have said publicly that there could be conditions under which the country would seek to acquire nuclear weapons or develop a nuclear weapons program. For example, Saudi Crown Prince Mohammed bin Salman said publicly in 2018 that if Iran develops or obtains a nuclear weapon, Saudi Arabia would also work to do so. In 2009 and 2012, respectively, King Abdullah and Prince Turki al-Faisal were reported to have made similar statements. Some stakeholders said that the intent behind such statements was to send a message about Saudi Arabia’s posture toward Iran, but some other stakeholders said that lower-level Saudi officials have also indicated that the country is open to pursuing nuclear weapons. Several stakeholders said that such statements should be taken seriously as indicators of Saudi nuclear weapons ambitions. One stakeholder said that such statements raise concerns as to Saudi Arabia’s commitment to its obligations under the Treaty on the Nonproliferation of Nuclear Weapons. This stakeholder also said that Saudi Arabia has demonstrated willingness to disregard the terms of transfers of U.S. 
conventional arms to the country, calling into question whether the country could be trusted to abide by the terms of the nuclear cooperation agreement. Concerns about the extent to which a nuclear cooperation agreement would mitigate the risks of a Saudi weapons program. Several stakeholders questioned whether the terms of an agreement would meaningfully restrict proliferation behavior. For example, notwithstanding the provision of Section 123 of the AEA that prohibits a partner country from using U.S.-obligated material or equipment for weapons purposes, some stakeholders said that another risk of nuclear cooperation is that it would provide Saudi Arabia with the infrastructure and knowledge to produce nuclear material for a future weapons program. In addition, some stakeholders said that there were questions as to whether the United States could enforce the terms of an agreement if it was breached—for example, whether in practice the United States would be able to retrieve U.S.-obligated nuclear material from another country. One stakeholder also noted that the terms of a nuclear cooperation agreement would only be relevant in mitigating proliferation risks if Saudi Arabia contracted with a U.S. company to build the reactors. If Saudi Arabia purchases reactors from other suppliers, its nuclear program will not be bound by the Section 123-mandated restrictions of a nuclear cooperation agreement with the United States, since those restrictions only apply to U.S.-obligated material. Concerns about the thoroughness of a U.S. assessment of Saudi proliferation risks. Some stakeholders raised concerns about whether the NPAS process would adequately assess Saudi proliferation risks. We have previously identified weaknesses in the NPAS process related to interagency consultation and the lack of a robust, transparent review process. 
As described above, an NPAS for a U.S.-Saudi nuclear cooperation agreement would be expected to include an overview of Saudi Arabia’s nuclear energy program and related infrastructure, nonproliferation policies, and relations with countries of proliferation concern. An NPAS would also include an analysis of the adequacy of safeguards and other control mechanisms to ensure that assistance provided under the U.S.-Saudi agreement is not used to further any nuclear weapons effort. Some stakeholders said that it would be important for the NPAS for Saudi Arabia to address the questions regarding the country’s stated intentions to develop a nuclear weapons program. One stakeholder questioned whether an NPAS would provide a sufficient assessment of Saudi nuclear proliferation behavior or potential because the statutory requirement for intelligence community input into the NPAS is narrowly worded. Specifically, the addendum that the intelligence community is to provide to each NPAS is required to contain a comprehensive analysis of the country’s export control system with respect to nuclear-related matters, including interactions with other countries of proliferation concern and the actual or suspected nuclear, dual-use, or missile-related transfers to such countries, but the requirement does not call for the intelligence community to assess the country’s intent to develop nuclear weapons. State officials declined to tell us whether they had begun drafting an NPAS in anticipation of an agreement with Saudi Arabia. However, State officials noted that their engagement with the intelligence community in the development of an NPAS goes beyond the requirements of that statute, but they also said that the legal requirement was limited. Concerns about regional proliferation risks and undermining of global nonproliferation norms. Several stakeholders expressed concerns about the regional and international nonproliferation implications of a U.S.-Saudi nuclear cooperation agreement. 
For example, several stakeholders said that an agreement without restrictions on enrichment and reprocessing could lead to the renegotiation of the agreement with the UAE. The agreement with the UAE, which includes a commitment to forswear enrichment and reprocessing, also contains a provision that would allow the UAE to request renegotiation of its agreement if another country in the region concludes a less restrictive agreement with the United States. Several stakeholders also raised the concern that a nuclear cooperation agreement without additional nonproliferation conditions would undermine U.S. and global nonproliferation norms by sending the message that such norms were negotiable. For example, in addition to the Additional Protocol being a mechanism to prevent diversion of nuclear material, many stakeholders said that insisting on the Additional Protocol was critical and emphasized its importance as a global nonproliferation norm. Several stakeholders also questioned the premise that supplying Saudi Arabia’s nuclear program would allow the United States to retain influence over international nonproliferation norms. One stakeholder said that the United States has not been a significant nuclear exporter for decades and has nonetheless retained its influence. Nuclear Cooperation Negotiations with Saudi Arabia Have Stalled over Differences over Nonproliferation Conditions Based on our analysis of available information, the United States and Saudi Arabia have not made significant progress toward a nuclear cooperation agreement because of persistent differences between the parties over nonproliferation conditions, including U.S. insistence that Saudi Arabia conclude an Additional Protocol with IAEA and agree to restrictions on enrichment and reprocessing. 
The United States and Saudi Arabia first held formal nuclear cooperation negotiations in 2012, during which the United States provided a draft agreement text to Saudi officials that included the nine nonproliferation conditions required under Section 123 of the AEA, according to NNSA officials. In that round of negotiations, Saudi officials accepted “the vast majority” of the conditions in the draft text, according to NNSA officials; these officials estimated that approximately three pages of the text remained to be negotiated. NNSA officials told us that the areas of disagreement include provisions required by the AEA. In the next formal negotiations in 2018, there was no progress in resolving the remaining issues, and no changes to the text of the agreement were made at the time, according to agency officials. The areas of disagreement that were not resolved in 2012—including those regarding provisions required by the AEA—remained unresolved as of January 2020, according to agency officials. These areas of disagreement include: Additional Protocol. The United States has urged Saudi Arabia to conclude an Additional Protocol with IAEA, according to a September 2019 letter from the Secretary of Energy to the Saudi Minister of Energy, Industry, and Mineral Resources and based on public statements by the Secretary of Energy and another government official. Several former agency officials and other stakeholders said that Saudi Arabia has expressed an unwillingness to conclude an Additional Protocol with IAEA. Restriction on enrichment and reprocessing. According to public statements by agency officials, the United States supports a permanent restriction on enrichment and reprocessing. According to the Secretary’s September 2019 letter and to former officials we interviewed, however, the United States may be willing to accept a temporary restriction on enrichment and reprocessing in its negotiations with Saudi Arabia. 
According to these former officials, such a temporary restriction would allow the United States and other countries more time to work with Saudi Arabia to reach agreement on mutually acceptable terms. However, one stakeholder said that this option would not be attractive to Saudi Arabia and would not be useful to the United States as a nonproliferation measure because an existing nuclear cooperation agreement and any nuclear infrastructure that it would have enabled would reduce U.S. leverage to influence Saudi enrichment and reprocessing decisions in the future. Despite the lingering disagreement between the two countries on certain provisions, NNSA officials told us in November 2019 they believed the negotiations had made progress since 2012 because the continued interactions with Saudi officials over this time were useful in advancing Saudi understanding of the United States’ position on the nonproliferation conditions of a potential agreement. We are unable to characterize Saudi views on the status of the negotiations or on other aspects of our review, because State did not respond to our repeated requests for assistance in facilitating travel to Saudi Arabia and interviews with relevant Saudi officials. We also did not receive a response to our written request to the Saudi ambassador to the United States for an opportunity to interview relevant Saudi officials about the negotiations. Agency Management of Negotiations, Including Agency Roles and Informing Congress, Remains Unclear Agency management of U.S.-Saudi nuclear cooperation negotiations remains unclear with regard to agency roles and informing Congress. We were unable to confirm U.S. agency roles at a range of U.S.-Saudi interactions where nuclear cooperation was or may have been discussed. We were also unable to determine whether the agencies kept the relevant congressional committees fully and currently informed of the negotiations. 
Agency Roles in U.S.-Saudi Nuclear Cooperation Negotiations Remain Unclear The roles various U.S. agencies have played in U.S.-Saudi nuclear negotiations remain unclear because DOE and State did not provide us with information to clarify or corroborate such roles. According to a State official and DOE officials, State would have “by definition” led any negotiations, and without State present, any interactions between U.S. and Saudi officials on nuclear cooperation did not constitute negotiations. The AEA stipulates that State conduct any nuclear cooperation negotiations but does not define “negotiations.” According to one stakeholder, during an NSC meeting in late 2017 during which nuclear cooperation with Saudi Arabia was discussed, the NSC made a decision to reinforce established agency roles, including specifying that State would lead any negotiations. We were unable to confirm whether NSC made such a decision because NSC did not respond to our requests for interviews or documentation. However, through our interviews with State, DOE, and NRC officials, we determined that representatives of each agency participated in the 2012 and March 2018 formal nuclear cooperation negotiations with Saudi Arabia. State and DOE officials did not provide information that we requested about interactions between the United States and Saudi Arabia, such as the dates and agency participants. However, despite the limited cooperation from State and DOE, we were able to identify, through our analysis of documentation and interviews with other stakeholders, a range of interactions between the United States and Saudi Arabia where nuclear cooperation was or may have been discussed. 
The interactions we were able to identify during which potential nuclear cooperation was discussed are as follows: five bilateral meetings, including a September 2018 meeting in Washington, D.C., a December 2018 meeting in Saudi Arabia, and an August 2019 meeting in Washington, D.C.; a Civil Nuclear Energy Roundtable in Saudi Arabia in December 2017; a commercial nuclear mission to Saudi Arabia in April 2018, in partnership with DOE; and the letter from the Secretary of Energy to his Saudi counterpart in September 2019 conveying U.S. positions on nonproliferation conditions for U.S.-Saudi nuclear cooperation. We also identified five interactions where the U.S. Secretary of Energy and Saudi officials may have discussed nuclear cooperation, including a phone call in November 2017 and meetings on the sidelines of four events: the IAEA General Conference in Austria in September 2017, the Bilateral Energy Dialogue in Saudi Arabia in December 2017, the World Economic Forum in Switzerland in January 2018, and the Future Investment Initiative in Saudi Arabia in October 2019. Figure 3 illustrates U.S.-Saudi negotiations and other interactions, and appendix II includes a detailed list of the interactions we were able to identify. Because State and DOE did not cooperate with our information requests, we cannot confirm that the interactions we identified constitute all of the interactions between the United States and Saudi Arabia on potential nuclear cooperation since 2012. Furthermore, we were unable to determine whether the agencies followed the established roles in the other interactions with Saudi Arabia where nuclear cooperation was or may have been discussed because NSC, State, and DOE did not respond to our requests for information to clarify these matters. 
Specifically, with the exception of the April 2018 commercial nuclear mission to Saudi Arabia, we were unable to determine whether State or other agency officials authorized, were present for, or were aware of a number of DOE-led interactions with Saudi Arabia described above. In addition, State and DOE officials declined to confirm whether State authorized the September 2019 letter from the Secretary of Energy to his Saudi counterpart regarding U.S. positions on the nonproliferation conditions for nuclear cooperation. The Level of Information U.S. Agencies Have Provided to Congress about U.S.-Saudi Nuclear Cooperation Negotiations Remains Unclear It is unclear whether the agencies kept the relevant committees fully and currently informed of U.S.-Saudi negotiations. State officials stated that they consistently provide information to Congress, but the limited information they provided to us does not support this position. As previously stated, Section 123 of the AEA requires that the President keep certain congressional committees “fully and currently informed of any initiative or negotiations relating to a new or amended agreement for peaceful nuclear cooperation.” State officials told us during our May 2019 interview that they consistently provided information to Congress on the nuclear cooperation negotiations and other interactions with Saudi Arabia. However, neither State nor DOE provided documentation within the time frame of our review to support these statements. DOE did not respond to our request for information on any dates or related details of any congressional briefings related to U.S.-Saudi nuclear cooperation negotiations. State did not respond to our initial request in May 2019 for information on dates and related details of any congressional briefings it held on U.S.-Saudi nuclear cooperation negotiations. 
However, in January 2020, after reviewing a preliminary draft of this report, State officials provided a list of congressional briefings on U.S. nuclear cooperation initiatives since 2013. We reviewed this list and identified two briefings specifically focused on nuclear cooperation negotiations with Saudi Arabia: one held in January 2018 for House Committee on Foreign Affairs staff and another held in May 2019 for House Committee on Oversight and Reform staff. State officials also noted that U.S.-Saudi nuclear cooperation may have been discussed in other State briefings that focused on nuclear cooperation in general or with other countries, such as briefings to the House Committee on Foreign Affairs and Senate Committee on Foreign Relations in July 2019 and November 2019. State officials declined to discuss the details of any congressional briefings with us, including the participating agencies, substantive issues, and other details. Consequently, we could not establish the extent and substance of information the agencies provided to Congress on U.S.-Saudi nuclear cooperation negotiations. After State did not provide us with the information we requested, we reached out to a number of current and former staff of the House Committee on Foreign Affairs and Senate Committee on Foreign Relations, representing both parties. Through our interviews with eight of these staff, we were able to identify one congressional briefing by the agencies in December 2017 on the status of U.S.-Saudi nuclear cooperation negotiations. However, based on our interviews with congressional staff, we were unable to identify the dates of any other briefings by the agencies on the U.S.-Saudi nuclear cooperation negotiations. 
Notably, based on our review of the documentation and interviews with congressional staff, it does not appear that the agencies provided a briefing to the House Committee on Foreign Affairs or Senate Committee on Foreign Relations until more than a year after the last formal U.S.-Saudi nuclear cooperation negotiations in March 2018. Current and former congressional staff we interviewed also described their frustration in trying to obtain information, beyond briefings, from the agencies on the status of the negotiations. Several current and former congressional committee staff we interviewed told us that they learned of developments in the U.S.-Saudi negotiations through the press or from representatives of the nuclear industry, rather than directly from the agencies, despite having asked the executive branch to keep them informed of any developments. For example, one former staff member of a relevant committee told us that they learned of the March 2018 formal negotiations just days before the meeting through a press article. Another former congressional committee staff member said that since late 2017, the agencies have only provided information to Congress about the negotiations in response to forceful measures, such as holds on nominations or legislation. According to many of the current and former congressional staff we interviewed, this stands in contrast to past practice in which agencies regularly briefed the committees on nuclear cooperation negotiations without coercion, and sometimes even initiated the meetings. State and DOE provided Congress with contradictory justifications for not providing such information to Congress, according to our review of documents and interviews with congressional staff. For example, one congressional committee staff member told us that agency officials said they were not obligated to keep the committee currently and fully informed of negotiations because the United States was not in negotiations with Saudi Arabia. 
On another occasion, when pressed by members of Congress in congressional hearings, an agency official said he could not discuss nuclear cooperation negotiations with Saudi Arabia because negotiations were ongoing. Specifically, in September 2019, the Assistant Secretary of State for International Security and Nonproliferation stated in a hearing that he could not get into details of nuclear cooperation negotiations with Saudi Arabia because the negotiations were ongoing. These contradictory justifications may have led to inconsistency in the agencies providing information to Congress on nuclear cooperation negotiations. By committing to regularly scheduled, substantive briefings to Congress on nuclear cooperation initiatives and negotiations, State and DOE could enhance transparency and build confidence with Congress on nuclear cooperation, preemptively address congressional concerns about cooperation with certain countries, and support congressional oversight on nonproliferation matters. Former congressional staff, including those involved in drafting Section 123(e) in 2008—the “fully and currently informed” provision—said the intent of the provision was to promote transparency on the status of any nuclear cooperation negotiations to the congressional committees of jurisdiction to lay the groundwork for congressional consideration of any agreement. However, some former congressional staff said that the provision allows for broad interpretation and that it may be up to Congress to more clearly define the “fully and currently informed” requirement. By specifying, through an amendment to the AEA, its expectations for timeliness and information provided by the agencies on nuclear cooperation negotiations and initiatives, Congress could have better assurance that it will get the information it needs for its oversight of nuclear nonproliferation matters. 
Conclusions State officials told us that they consistently provided information to Congress on the nuclear cooperation negotiations and other interactions with Saudi Arabia. They later provided a list of congressional briefings on U.S. nuclear cooperation initiatives since 2013 but did not specify what was discussed. Based on this limited information, it is unclear whether the briefings by State kept Congress fully and currently informed of developments in the negotiations with Saudi Arabia, and congressional staff provided us with examples of having to find information on the negotiations from other sources, such as press articles. By committing to regularly scheduled, substantive briefings to Congress on nuclear cooperation initiatives and negotiations, State and DOE could enhance transparency and build confidence with Congress on nuclear cooperation, preemptively address concerns about cooperation with certain countries, and support congressional oversight on nuclear nonproliferation matters. Former congressional staff involved in drafting the “fully and currently informed” provision said that its intent was to promote transparency and lay the groundwork for congressional consideration of any agreement. However, some said that this provision allows for broad interpretation of the “fully and currently informed” requirement. By specifying, through an amendment to the AEA, its expectations for timeliness and information provided by the agencies regarding nuclear cooperation negotiations and initiatives, Congress could have better assurance that it will get the information it needs for its oversight of nuclear nonproliferation matters. 
Matter for Congressional Consideration Congress should consider amending the Atomic Energy Act to require regularly scheduled briefings, for instance, on a quarterly basis, and specify expectations for the content of such briefings, such as potential difficulties in negotiating nonproliferation conditions with partner countries. Recommendation The Secretary of State, in coordination with the Secretary of Energy, should commit to regularly scheduled, substantive briefings for the House Committee on Foreign Affairs and the Senate Committee on Foreign Relations on all initiatives and negotiations related to nuclear cooperation in order to enhance transparency and establish greater confidence with Congress on nuclear cooperation matters. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to the Secretaries of State, Energy, Defense, and Commerce, and to the Chairman of the NRC for review and comment. In its written comments, reproduced in appendix III, State neither agreed nor disagreed with our findings, and concurred with our recommendation. State also noted in its response that it is already implementing the recommendation; specifically, that it conducted briefings on nuclear cooperation in 2018 and 2019 to Congress. However, as we noted in our report, because State officials declined to discuss the details of these briefings, we could not establish the extent and substance of information the agencies provided to Congress on U.S.-Saudi nuclear cooperation negotiations. Furthermore, as we reported, staff of the relevant congressional committees we interviewed were able to identify only one briefing on U.S.-Saudi nuclear negotiations and several staff expressed frustration in trying to get information about the negotiations, including learning of developments through the press. NRC also provided written comments, which are reproduced in appendix IV; NRC neither agreed nor disagreed with our recommendation. 
DOE provided technical comments, which we incorporated as appropriate. DOD and Commerce did not have any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of State, the Secretary of Energy, the Secretary of Defense, the Secretary of Commerce, the Chairman of the Nuclear Regulatory Commission, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology To examine the status of the U.S.-Saudi nuclear cooperation negotiations and any areas of disagreement, we also reviewed official documentation such as agency correspondence to Saudi officials and transcripts of congressional hearings. In addition, we submitted to the Saudi Ambassador to the United States a written request for an opportunity to interview relevant Saudi officials about the negotiations, but did not receive a response. To examine U.S. agency management of the negotiations, including how the agencies have informed Congress about the negotiations, we reviewed official documentation such as agency correspondence to Saudi officials, certain export authorization application packages, dates of congressional briefings on nuclear cooperation, and agency documentation related to U.S. government advocacy for U.S. 
businesses related to nuclear cooperation with Saudi Arabia. We also requested a list of dates and participants of U.S.-Saudi interactions pertaining to nuclear cooperation, as well as materials used for briefings, if any, by the agencies to Congress. The agencies provided us with limited information in response to some categories we requested and did not provide information in other categories. Specifically, beginning in May 2019, we requested from the Departments of State and Energy and the National Security Council (NSC) basic factual information on license applications for the transfer of nuclear technology to Saudi Arabia; the dates of any discussions or negotiations between U.S. and Saudi officials; the U.S. and Saudi agencies, offices, and representatives present at such meetings; and the types of records produced from such meetings. DOE provided us with information on the license applications, and State and DOE provided us with limited information on their general processes relating to the negotiation of agreements. State officials also provided a list of congressional briefings on U.S. nuclear cooperation initiatives since 2013 in January 2020, after reviewing a preliminary draft of this report, but declined to discuss the details of the briefings with us, including the participating agencies, substantive issues, and other details that would have allowed us to establish the extent of information provided to Congress on U.S.-Saudi nuclear cooperation negotiations. Furthermore, neither agency nor NSC provided substantive information in any of the other categories we requested; in order to complete this review within a time frame responsive to the needs of our congressional requesters, we adjusted our audit objectives to focus on examining the status of the negotiations and management of the negotiations process. 
Because State, NSC, and DOE did not provide information to fully address these adjusted objectives, we obtained documentation and information from other agency officials and over 30 other stakeholders, including, as previously noted, former senior U.S. government officials, current and former congressional staff, and nuclear industry representatives and knowledgeable nongovernmental experts who have followed the negotiations. We conducted our work from April 2019 through April 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Identified Developments in U.S.-Saudi Nuclear Cooperation Since the United States and Saudi Arabia signed a memorandum of understanding on nuclear energy cooperation in 2008, there have been a variety of interactions between the United States and Saudi Arabia regarding potential nuclear cooperation between the two countries, as well as other developments related to such cooperation. The Atomic Energy Act (AEA) does not define “negotiations.” In this report, we use “formal nuclear cooperation negotiations” and “formal negotiations” to signify sessions where parties aim to agree on specific terms and conditions in the text of an agreement. We use the term “interactions” for all U.S.-Saudi encounters on potential nuclear cooperation other than the two formal negotiations explicitly identified by agency officials. 
Table 3 provides information on the dates we identified for formal U.S.-Saudi negotiations; other U.S.-Saudi interactions; National Security Council meetings to discuss policy and related matters on U.S.-Saudi negotiations; agency briefings to Congress on the negotiations; and other related developments, including developments in Saudi Arabia related to its planned nuclear power program. See table 3 for more information. Appendix III: Comments from the Department of State Appendix IV: Comments from the Nuclear Regulatory Commission Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, other key contributors to this report were William Hoehn, Assistant Director; Alisa Beyninson, Analyst in Charge; Antoinette Capaccio; Tara Congdon; Camille Pease; Steven Putansu; Dan Royer; Sara Sullivan; and Madeline Welter.
Why GAO Did This Study U.S. policy has long sought to balance U.S. civilian nuclear exports with the nation's obligation to ensure that they are not used to proliferate nuclear weapons. The Atomic Energy Act (AEA) provides a framework for certain civilian nuclear exports and outlines the requirements for nuclear cooperation agreements, including that certain nonproliferation conditions be met; that State conduct negotiations with the technical assistance and concurrence of DOE; and that the President keep certain congressional committees fully and currently informed of negotiations or initiatives. This report describes, among other things, (1) the status of U.S.-Saudi negotiations and any areas of disagreement and (2) what is known about U.S. agency management of the negotiations. GAO reviewed the AEA and documentation of interactions between U.S. and Saudi officials regarding nuclear cooperation. GAO received limited information from State and DOE officials during the review but interviewed over 30 other stakeholders, including former senior executive branch officials, former congressional staff, and others with knowledge of and insights into nuclear cooperation issues and the negotiations. What GAO Found Since 2008, when the United States and Saudi Arabia signed a memorandum of understanding on nuclear energy cooperation, the current and prior U.S. administrations have engaged in discussions and negotiations about nuclear cooperation with the Saudi government. However, these negotiations are stalled; the two countries have not been able to resolve disagreements on several nonproliferation conditions, including Saudi Arabia agreeing to enrichment and reprocessing restrictions and signing an Additional Protocol with the International Atomic Energy Agency (IAEA), which would allow IAEA to obtain additional information about and access to Saudi nuclear activities. U.S. 
agency management of the negotiations with Saudi Arabia remains unclear in two areas regarding AEA requirements—(1) that the Department of State (State) conduct negotiations, with the technical assistance and concurrence of the Department of Energy (DOE), and (2) that certain congressional committees be informed. First, it is unclear which U.S. agencies were present at or aware of various interactions where nuclear cooperation was or may have been discussed, except for the formal negotiations in 2012 and 2018 and a commercial mission coordinated with State. GAO was able to identify eight interactions where nuclear cooperation was discussed and five more interactions where nuclear cooperation may have been discussed (see figure). Note: Interactions depicted in this figure include meetings, phone calls, and a letter, among other things. Second, GAO was unable to determine whether the agencies kept the committees fully and currently informed. GAO identified two briefings on the negotiations—in December 2017 and January 2018—to the relevant committees, but it does not appear that these committees were briefed again until more than a year after the March 2018 formal negotiations. According to congressional staff, Congress on occasion learned of developments through non-agency sources and had to apply forceful measures, including holds on nominations, to get information from the executive branch. By committing to regular briefings to Congress on nuclear cooperation negotiations and initiatives, State could better support congressional oversight on nuclear nonproliferation matters. In addition, congressional staff have said the AEA allows for broad interpretation of the “fully and currently informed” requirement. 
By specifying, through an amendment to the AEA, its expectations for timeliness and information provided by the agencies on nuclear cooperation negotiations and initiatives, Congress could have better assurance that it receives the information it needs for oversight of nuclear nonproliferation matters. What GAO Recommends GAO believes that Congress should consider amending the Atomic Energy Act to require regularly scheduled briefings. GAO is also making a recommendation that the Secretary of State commit to regularly scheduled, substantive briefings to the relevant congressional committees. State concurred with GAO's recommendation.
Background Types of Missions Conducted by EOD Forces The military services – Army, Navy, Air Force, and Marine Corps – have highly trained EOD personnel to eliminate explosive hazards in support of a variety of events and activities, ranging from major combat operations and contingency operations overseas to assisting the Secret Service in its protection of the President of the United States (see fig. 1). EOD forces are dispersed worldwide to meet combatant commanders’ operational requirements related to these missions. Although the services’ EOD forces support combatant commanders, NORTHCOM’s Joint Force Headquarters-National Capital Region coordinates EOD force support of land-based homeland defense and DSCA missions. EOD forces conduct combat-related and DSCA missions that support national military objectives. EOD combat-related missions include preparations for combat such as training and exercises, and the wartime execution of EOD missions. EOD forces play a major role in all phases of combat operations. For example, these forces contribute to information gathering during operations and serve to enable the safe conduct of operations within an operational area. Additionally, EOD forces support freedom of maneuver and force protection. Further, they may directly support missions such as counterterrorism, deterring and defeating aggression, and countering weapons of mass destruction, among others across the spectrum of operations. Officials from each service stated that EOD forces prepare for these combat-related missions during predeployment in-garrison periods. EOD forces also conduct DSCA missions when they are not engaged in combat-related missions. DOD provides EOD forces when requested in advance by specific federal agencies and approved by the appropriate DOD official. 
Officials stated that generally, EOD forces undertake VIP support missions during in-garrison periods, just after returning from combat-related deployments or while preparing for the next deployment. DOD Guidance and Processes Related to EOD Manpower and Risks The military services collectively have more than 6,300 EOD positions to fulfill combatant command missions, and demand for EOD manpower and expertise is high. Each service determines the number of EOD technicians it needs based on its respective requirements, which consider combatant commanders’ wartime missions and plans. According to a DOD official, the services take into account the long lead times—up to 3 years in one service—that can be necessary to produce qualified and experienced EOD specialists. In accordance with DOD policy, when considering EOD wartime requirements, service officials should make certain that national military objectives can be accomplished using a minimum of manpower that produces maximum combat power. DOD policy also states that a formal validated process is to be used to determine wartime manpower requirements. Generally, manpower requirements are the number of personnel needed to accomplish a job, mission, or program. Joint doctrine outlines mission tasks associated with EOD units. Once a service determines the tasks required of a particular community (such as EOD), the service then sizes its forces (i.e., determines the manpower requirement) according to the demand for those tasks among the combatant commands. Risk is the effect of uncertainty on objectives with the potential for either a negative outcome or a positive outcome or opportunity. In the military, accurately appraising risk allows leaders and staffs to manage and communicate risk effectively to inform decisions across disparate processes. 
Joint doctrine describes a planning process that aligns resources and military activities, and enables leaders to examine risks, among other factors, to determine a preferred course of action to achieve an objective. Planning for EOD involves military manpower systems that accurately determine the required EOD forces and decision makers who decide how much risk is acceptable if or when there is a shortfall of EOD forces. According to DOD doctrine on joint planning, regardless of the efforts to mitigate it, some level of risk will remain and should be identified to senior leaders so there is a common understanding of the decisions required and the potential effects of those decisions. Commanders must include a discussion of risk in their interaction with DOD senior leaders and that discussion must be in concrete terms that enable and support decision- making. In the context of strategic and military risk evaluation during joint planning, combatant commanders and DOD’s senior leaders work together to reach a common understanding of risk, decide what risk is acceptable, and minimize the effects of accepted risk by establishing appropriate risk controls. Military Services’ Processes for Determining EOD Manpower Levels Focus on Combat- Related Missions, but Do Not Consider the Increasing Demand for Some DSCA Missions The military services’ processes for determining EOD manpower levels are based on combat-related missions and, accordingly, do not fully consider DSCA missions. However, DOD provides EOD resources for various DSCA missions such as: aiding in the protection of the President of the United States and dignitaries through VIP support missions; providing assistance to law enforcement agencies and other civil authorities in the United States and its territories when necessary to save lives under DOD’s immediate response authority; and rendering safe military munitions when requested by civil authorities (see fig. 2). 
EOD and manpower officials from each of the military services explained that, in practice, their respective services focus on combat-related missions and do not consider DSCA missions in determining the number of EOD personnel needed. Specifically: According to Army officials, the Total Army Analysis process that is used to size Army forces considers core functions for combat operations and warfighting requirements. They explained that this process does not consider DSCA requirements in determining the number of EOD forces needed. In Army guidance, manpower is based on wartime missions and wartime requirements for sustained combat operations, among other types of information. Due in part to force structure adjustments and the drawdown of EOD forces, since 2014, according to information provided by the Army, the Army has reduced more than 800 EOD positions, the equivalent of two EOD battalions and 13 EOD companies. According to Navy officials, the Navy makes manpower decisions with a focus on wartime requirements by analyzing required operational capabilities against the projected operational environment. In Navy manpower guidance, this analysis is critical to developing fleet manpower requirements for units such as EOD forces. Navy officials explained that the process does not consider the DSCA mission in determining EOD manpower. The Air Force’s EOD manpower standard, which has been updated through 2013, is based on in-garrison needs and wartime requirements. In Air Force manpower guidance, manpower is described as a critical resource that enables combat capability; the guidance further notes that manpower requirements are identified and resources are subsequently allocated for peacetime and wartime missions. However, Air Force officials stated that the process focuses on results for combat-related missions and does not specifically include DSCA requirements. 
According to Marine Corps officials, the Marine Corps’ EOD forces are sized to support Marine Expeditionary Forces for deployment for overseas combat operations. The Marine Corps’ manpower guidance describes a force structure process designed to identify and provide the capabilities, including personnel and equipment, necessary to accomplish mission essential tasks. Marine Corps officials stated that the service does not receive additional EOD manpower specifically for DSCA missions. Although service manpower calculations do not reflect DSCA missions, one of the DSCA missions—VIP support—is manpower intensive and occurs frequently. Specifically, the workload for VIP support can be substantial and has increased from about 248,000 man-hours in fiscal year 2007 to over 690,000 man-hours in fiscal year 2017. According to a NORTHCOM official, this rise is due to an increase in the different types and complexity of threats requiring more EOD personnel to sufficiently support civil authorities. Figure 3 below illustrates the increase in the amount of time the EOD forces have spent on VIP support missions. The military services have a long-standing practice of providing support to civil authorities, including EOD support. DOD support to civil authorities is grounded or reflected in statute and DOD guidance. For example, the Presidential Protection Assistance Act of 1976 requires executive agencies, including DOD, to assist the Secret Service on a temporary basis in protecting the President, the Vice President and other persons—such as visiting foreign dignitaries (see fig. 4). In addition, the National Military Strategy and current homeland defense strategy prioritize defending the homeland and providing support to civil authorities. Moreover, DOD guidance addresses DSCA generally as well as specific support to the Secret Service, Department of Homeland Security, and law enforcement. 
Further, the Secretary of Defense approved a Joint Staff standing execute order (EXORD) which is used to execute routine VIP support missions related to the protection of dignitaries on short notice. This order authorizes NORTHCOM to provide EOD support to the Secret Service and U.S. Department of State within the NORTHCOM area of operations, and to coordinate that support at locations worldwide. Joint doctrine for EOD also lists DSCA as one of nine military missions that EOD forces may directly support, and states that the majority of EOD DSCA missions will be in support of law enforcement or emergency support agencies. Finally, the military services’ Inter-Service Responsibilities for Explosive Ordnance Disposal lists several common responsibilities of the military services’ EOD assets that include providing support to civilian agencies such as the Secret Service. While the DSCA mission is emphasized in departmental guidance and support of civil authorities has placed increasing and significant demands on EOD forces, the military services do not fully consider these factors in determining the appropriate number of EOD forces. According to EOD officials, this is because the primary mission of EOD forces is to conduct combat missions in support of combatant commanders and meet operational plans. Service officials stated that DSCA missions are not priority missions when it comes to sizing their respective forces, and that they do not routinely increase EOD manpower in order to provide support to other federal agencies. Standards for Internal Control in the Federal Government state that management should design control activities to achieve objectives and respond to risks. Specifically, management should ensure policies and procedures are relevant and effective in achieving an entity’s objectives. 
In addition, the standards state that management should use quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives. Quality information is information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. DOD manpower policy states that manpower management shall be flexible, adaptive to program changes, and responsive to new management strategies, and that existing policies, procedures, and structures shall be periodically evaluated to ensure efficient and effective use of manpower resources. However, the military services’ current processes for determining the appropriate amount of EOD manpower do not fully account for the increase in DSCA missions requiring EOD support. While it is understandable that the services prioritize combat missions when determining EOD requirements, they are not considering all available information in their decision-making process. This lack of consideration limits their ability to efficiently and effectively achieve their objectives and manage risks. Accounting for the increase in EOD manpower demand may not necessarily result in an increase in manpower; however, the services will be better prepared to understand the demand on existing EOD forces and evaluate any resulting risks. Ultimately, unless the military services update appropriate guidance to ensure that they consider the total EOD force required to support combat-related and DSCA missions, decision makers cannot accurately assess the sufficiency of EOD forces to meet both missions and the associated risks. 
DOD Cannot Evaluate the Effects of VIP Support Missions on Military Preparedness Because the Services Are Not Required to Notify Decision Makers DOD cannot evaluate the effects of VIP support missions on military preparedness because current VIP support mission guidance does not require the military services to notify the Joint Staff and appropriate combatant commands when military preparedness is negatively affected by these missions. According to officials from the military services, the execution of VIP support missions introduces risk that threatens the services’ abilities to execute combat-related missions. Specifically, military preparedness is degraded when the services’ EOD forces are unable to concurrently complete predeployment tasks, such as training for combat, because the forces are called upon to execute routine VIP support missions. Officials told us that EOD forces can only conduct these VIP support missions during the time period when EOD forces are scheduled to conduct predeployment tasks and accomplish training. As a result, according to officials, VIP support missions can adversely affect military preparedness for EOD forces. In multiple instances, missions supporting civil authorities have stressed the Army’s EOD capabilities, resulting in missed training and the inability to participate in exercises and activities supporting combat-related missions, according to statements and data provided by the Army. Furthermore, fulfilling VIP support missions can be particularly difficult because short-notice demand for EOD teams often exceeds the planned level of VIP support that can be provided. As a result, Army EOD teams are sometimes dispatched at the expense of military preparedness for combat-related missions in support of combatant commands, according to Army officials. Specific details of the effect recent VIP support missions have had on the Army’s EOD capabilities are included in our July 2019 restricted report. 
Officials from other services also acknowledged that undertaking routine VIP support missions comes at the expense of training for combat-related missions because of the high demand for and limited number of EOD forces. According to a senior Navy official, that service has sometimes refused mission requests to protect dignitaries because of its inability to meet operational demands, such as deployments and training for its EOD forces and support missions to protect dignitaries simultaneously. When this occurs, however, NORTHCOM will ask another service to accept the mission, thereby putting increased demand on that other service’s EOD forces that, in turn, may conflict with their scheduled training and preparations for combat missions, according to military service officials. Because NORTHCOM has few permanently assigned forces to conduct VIP support missions, it must instead rely on EOD forces from each of the military services that are in-garrison and preparing for but not currently deployed to a combat-related mission. According to DOD guidance, DOD’s ability to grant Secret Service requests for support is to be evaluated based on a number of factors, one of which is the effect on military preparedness. For example, DOD Directive 3025.18, Defense Support of Civil Authorities (DSCA), specifies that requests from civil authorities for assistance shall be evaluated for several factors, including the impact on DOD’s ability to perform its other primary missions. The guidance also provides that the Chairman of the Joint Chiefs of Staff is responsible for advising the Secretary of Defense on the effects of requests for civil support on national security and military readiness. 
According to joint doctrine, a commander’s tasks associated with the function of command and control include managing risk—such as that arising from EOD support for other agencies protecting dignitaries—as well as communicating and ensuring the flow of information across the staff and joint force, and to higher authorities. Additionally, in the context of evaluating strategic and military risk during joint planning, combatant commanders and senior DOD leaders work together to reach a common understanding of risk, decide what risk is acceptable, and minimize the effects of accepted risk. The Standards for Internal Control in the Federal Government also addresses the importance of an entity using quality information to achieve its objectives. Specifically, management should use quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. As previously mentioned, the Joint Staff has issued a Secretary of Defense-approved EXORD that provides guidance for the military to provide EOD support to the Secret Service and Department of State for routine VIP support missions. However, this EXORD does not specify a requirement for the services to notify DOD stakeholders regarding the effect on military preparedness for combat missions. As a result, the military services are not advising the Joint Staff or NORTHCOM when these VIP support missions are adversely affecting EOD military preparedness for combat-related missions. Regarding military preparedness, the absence of a notification requirement precludes decision makers from understanding the risk to EOD forces’ ability to perform their primary mission. Decision makers need this information to carry out their responsibilities and assess risk to ensure efficient and effective accomplishment of both VIP support missions and preparation for combat-related missions for combatant commands. 
Conclusions
The military services’ EOD forces provide the combatant commanders necessary capabilities for combat and combat-related missions. They also provide capabilities through their DSCA missions that are important to supporting U.S. law enforcement agencies and other federal, state, and local civil authorities. DOD has manpower processes that result in careful consideration of the requirements of the combatant commander for combat-related missions. However, those manpower processes do not fully consider DSCA missions, such as the VIP support mission and its accompanying substantial workload. Until DOD processes begin to consider the demand for EOD support for both types of missions, decision makers cannot know the complete manpower requirement for EOD. Consequently, the extent to which the services’ EOD forces are sufficient or insufficient to meet national military objectives cannot be fully known. Furthermore, DOD lacks a requirement in guidance specific to the VIP support mission to notify stakeholders regarding the effects of such missions on military preparedness for combat-related missions. As a result, DOD may not be fully considering risks associated with the use of EOD forces for VIP support on the preparation and training of those forces for combat-related missions.
Recommendations for Executive Action
We are making the following four recommendations to DOD. The Secretary of the Army should update Army manpower guidance, or other guidance as appropriate, to ensure that all missions conducted by EOD forces, including DSCA missions, are considered in determining the required number of EOD forces. (Recommendation 1) The Secretary of the Air Force should update Air Force manpower guidance, or other guidance as appropriate, to ensure that all missions conducted by EOD forces, including DSCA missions, are considered in determining the required number of EOD forces.
(Recommendation 2) The Secretary of the Navy should update Navy and Marine Corps manpower guidance, or other guidance as appropriate, to ensure that all missions conducted by EOD forces, including DSCA missions, are considered in determining the required number of EOD forces. (Recommendation 3) The Secretary of Defense should ensure that the Chairman of the Joint Chiefs of Staff, in collaboration with the combatant commands, incorporate into the appropriate guidance a requirement that the military services notify the Joint Staff and the affected combatant commands when the execution of VIP support missions negatively affects the preparedness of EOD units for combat-related missions. (Recommendation 4)
Agency Comments
We provided a draft of this report to DOD for review and comment. DOD did not provide comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretaries of the Army, Navy, and Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.
Appendix I: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Guy LoFaro (Assistant Director), Ben Atwood, Naba Barkakati, Christopher Gezon, Amie Lesser, Dennis Mayo, Paulina Reaves, Michael Silver, Michael Shaughnessy, Stephen Woods, and Lillian Yob made contributions to this report.
Why GAO Did This Study
EOD forces are a high-demand, critical asset that support DOD's ability to execute military operations. DOD increased the number of EOD forces by more than 70 percent from 2002 to 2012 because of increased demand. When not deployed, EOD forces provide support to civil authorities. One of these missions is protecting U.S. and foreign dignitaries—also referred to as VIP support missions. House Report 115-200 included a provision for GAO to review matters related to EOD capabilities and requirements. This report assesses the extent to which (1) the military services consider all combatant command EOD requirements, including DSCA, in determining the number of EOD personnel needed, and (2) DOD evaluates the effect of VIP support missions on the military preparedness of EOD forces. GAO reviewed relevant guidance, analyzed EOD data, and interviewed EOD and manpower officials. This is a public version of a sensitive report that GAO issued in July 2019. Information that DOD deemed sensitive has been omitted.
What GAO Found
The military services' processes for determining the necessary number of explosive ordnance disposal (EOD) personnel are based on combat-related missions. However, these processes do not fully consider some defense support of civil authority (DSCA) missions that EOD forces conduct. DSCA missions for EOD forces can be manpower-intensive and frequent. For example, EOD forces' workload for protecting U.S. and foreign dignitaries—also referred to as Very Important Person (VIP) support missions—increased from about 248,000 to over 690,000 man-hours in fiscal years 2007 to 2017 (figure). However, according to officials, the services do not consider DSCA missions in determining the number of EOD personnel needed, instead focusing on combat-related missions.
Unless the Department of Defense (DOD) ensures that the services update guidance to consider the total EOD force required to support both missions, decision makers cannot accurately assess the EOD forces' sufficiency. DOD guidance specific to VIP support missions does not include a requirement for the services to report on the effect of VIP support missions on military preparedness. According to officials, military preparedness is degraded when the services' EOD forces are unable to concurrently complete predeployment tasks such as training for combat. Per DOD guidance, Secret Service support requests are to be evaluated based on their effects on military preparedness. Without this information, decision makers are precluded from understanding the risk to EOD forces' military preparedness resulting from the routine VIP support missions. Decision makers need this information to ensure efficient and effective accomplishment of both VIP support missions and preparation for combat-related missions for affected combatant commands.
What GAO Recommends
GAO is making four recommendations, including that DOD (1) update the appropriate service guidance to ensure that all EOD missions, including DSCA missions, are considered in determining the required number of EOD forces, and (2) incorporate into appropriate guidance a requirement for the military services to notify the Joint Staff and combatant commands when VIP support missions negatively affect the military preparedness of EOD units. DOD did not provide comments on the draft of this report.
Background
Female Participation in the U.S. Military
While female participation in the military dates back to the American Revolution, women have formally served in United States military units since 1901 with the establishment of the Army Nurse Corps. The Act of May 14, 1942 authorized the president to establish and organize a Women’s Army Auxiliary Corps for the purpose of “making available to the national defense when needed the knowledge, skill, and special training of the women of this Nation.” In 1948, the Women’s Armed Services Integration Act of 1948 authorized the military services to, subject to the provisions of the act, enlist and appoint women to their active and reserve components. Certain provisions of the Women’s Armed Services Integration Act of 1948, including limits on the number of women in the Navy and Marine Corps, were repealed in 1967, and additional changes to DOD policies have been made since then. For example, the Department of Defense Appropriation Authorization Act, 1976 directed the secretaries of the military departments to, among other things, take such action as may be necessary and appropriate to insure that women were eligible for appointment and admission to the military service academies. Almost two decades later, the National Defense Authorization Act for Fiscal Year 1994, among other things, required the Secretary of Defense to ensure that qualification of members of the armed forces for military occupational career fields open to both male and female members is evaluated on the basis of common, relevant performance standards without differential standards or evaluation on the basis of gender. It also repealed the remaining statutory prohibitions on the Secretary of the Navy assigning female servicemembers to duty on vessels and aircraft engaged in combat missions or expected to be assigned combat missions.
In January 1994, the Secretary of Defense issued a memorandum creating the Direct Ground Combat Definition and Assignment Rule, which made servicemembers eligible for assignment to all positions for which they were qualified, but it excluded female servicemembers from assignment to units below the brigade level whose primary mission was to engage in direct combat on the ground. The memorandum required the services to coordinate approved implementing policies and regulations—including certain service restrictions on the assignment of women—with the Assistant Secretary of Defense for Personnel and Readiness prior to their issuance. The memorandum also permitted the services to propose additional exceptions. In its 2011 final report, the Military Leadership Diversity Commission stated that the services have been leaders in providing opportunities for all servicemembers, regardless of racial/ethnic background or gender, and stated that DOD’s mission-effective force is a living testament to progress in the areas of military equal opportunity policies and related recruiting and management tactics. The report also stated that more needs to be done to address 21st century challenges and that the Armed Forces have not yet succeeded in developing a continuing stream of leaders who are as demographically diverse as the nation they serve. A 2013 Secretary of Defense and Chairman of the Joint Chiefs of Staff memorandum rescinded the 1994 Direct Ground Combat Definition and Assignment Rule. That memorandum also directed the military services to open currently closed units and positions to female servicemembers, consistent with certain principles and with the implementation of certain standards. The memorandum also directed that the integration of female servicemembers into these newly opened positions and units occur as expeditiously as possible, considering good order and judicious use of fiscal resources, and no later than January 1, 2016.
The military services also took action through issuing guidance. For example, in 2013, the Commandant of the Marine Corps issued a letter to Marine Corps leadership stating that it is imperative for the Marine Corps to take a fresh approach to diversity and establishing four task force groups, including one titled “Women in the Corps: Attract, Develop, and Retain Women Officers.” Subsequently, in June 2014, the Secretary of the Air Force and Air Force Chief of Staff released a memorandum establishing active-duty officer applicant pool goals, which are intended to reflect the nation’s highly talented, diverse, and eligible population. More recently, in 2015, the Secretary of Defense determined that no exceptions were warranted to the full implementation of the rescission of the Direct Ground Combat Definition and Assignment Rule and directed the secretaries of the military departments and chiefs of the military services to begin to execute the implementation of their approved plans to open all military occupational specialties, career fields, and branches for accession by female servicemembers as soon as practicable and not later than April 1, 2016. Figure 1 presents a timeline of selected events in female participation in the military, including changes to laws and policies.
Percentage of Female Servicemembers Increased Slightly Over 15 Years; Data and Studies Show That Female Servicemembers Attrite at Higher Rates and Are More Likely to Separate Due to Various Factors
Overall Percentages of Female Active-Duty Servicemembers Increased Slightly from Fiscal Year 2004 through 2018, and the Percentages of Female Servicemembers Vary among the Services
Overall, the percentage of female active-duty servicemembers slightly increased from fiscal year 2004 through 2018.
However, our analyses also determined that for fiscal years 2004 through 2018, female enlisted servicemembers and commissioned officers had higher attrition rates than their male counterparts, and the percentage of female active-duty servicemembers began to decrease at the 10-to-less-than-20-years-of-service career point, resulting in a smaller pool of female servicemembers available for leadership opportunities. We also found that female servicemembers are generally more likely to separate from the military, and that the reasons active-duty servicemembers separate from the military vary by gender, pay grade category, and length of service. In addition, other factors—such as access to quality childcare or family planning—have been found to influence female active-duty servicemembers’ separation decisions based on our review of existing literature. The services have experienced slight increases in their populations of female active-duty servicemembers from fiscal year 2004 through 2018. More specifically, the overall percentage of female active-duty servicemembers increased slightly department-wide within that 15-year period, from 15.1 percent in fiscal year 2004 to 16.5 percent in fiscal year 2018, with slight decreases identified in some years—for example, fiscal years 2005 through 2009. Comparatively, the percentage of males serving on active duty decreased from 84.9 percent in 2004 to 83.5 percent in 2018. In fiscal year 2018, the Air Force had the highest percentage of female active-duty servicemembers (20.2 percent), followed by the Navy (19.6 percent), the Army (15.1 percent), and the Marine Corps (8.6 percent). The Air Force also had the highest percentages of female enlisted and officers in fiscal year 2018 (20.0 percent and 21.3 percent, respectively). The Marine Corps (8.7 percent female enlisted and 7.9 percent female officer) had the lowest percentages in fiscal year 2018.
Figure 2 shows the representation of active-duty servicemembers, by gender, organization, and pay grade for fiscal year 2018. The Air Force and the Army had higher percentages of female servicemembers than the Navy and Marine Corps in fiscal year 2004, the first year of the data we analyzed, and those percentages remained relatively stable over the full 15 fiscal years of data we analyzed. Additionally, the percentage of female servicemembers in the Air Force remained higher in each year than in the three other services over that 15-year period. The Navy and the Marine Corps experienced larger increases in their overall percentages of female active-duty servicemembers from fiscal year 2004 through fiscal year 2018. For example, the overall percentage of female active-duty servicemembers in the Navy increased by 4.9 percentage points, from 14.7 percent in fiscal year 2004 to 19.6 percent in fiscal year 2018. The Marine Corps experienced an increase of 2.5 percentage points in that same time period, from 6.1 percent in fiscal year 2004 to 8.6 percent in fiscal year 2018. Figure 3 shows the percentage of female active-duty servicemembers across all services in select years from fiscal years 2004 through 2018, by their organization. We also found that although the percentage of female active-duty servicemembers generally increased across the department from fiscal year 2004 through 2018, the percentage of female active-duty servicemembers was higher for those with fewer years of service and generally decreased as years of service increased. Specifically, as figure 4 shows, the percentages of female enlisted and commissioned officers in all four services with either 10 to 20 years of service or 20 or more years of service were generally lower than those with less than 10 years of service.
We also found that the percentages of women with more years of service were higher in more recent years, specifically in fiscal years 2014 through 2018 as compared to fiscal years 2004 through 2009. For example, in fiscal years 2014 through 2018, the percentage of female enlisted with 20 or more years of service (12 percent) was 2.2 percentage points higher than the percentage of female enlisted in fiscal years 2004 through 2009 (9.8 percent). Similarly, the percentage of female commissioned officers with 20 or more years of service in fiscal years 2014 through 2018 (12.1 percent) was 1.4 percentage points higher than female commissioned officers with the same length of service in fiscal years 2004 through 2009 (10.7 percent). In addition, the percentage of female warrant officers with 20 or more years of service in fiscal years 2014 through 2018 (8.3 percent) was 2.3 percentage points higher than female warrant officers with the same length of service in fiscal years 2004 through 2009 (6 percent).
Female Enlisted and Commissioned Officers Had Higher Attrition Rates than Males during Fiscal Year 2004 through 2018 and Are Generally More Likely to Separate Due to a Variety of Factors According to Data
From fiscal year 2004 through 2018, female active-duty enlisted servicemembers and commissioned officers had higher annual attrition rates than corresponding males. However, the gaps between male and female attrition rates for enlisted and commissioned officers have narrowed in more recent years. Specifically, for fiscal years 2004 and 2018, enlisted female active-duty servicemembers’ annual attrition rates were 33.1 and 8.6 percent, respectively. In fiscal years 2004 and 2018, enlisted male active-duty servicemembers’ annual attrition rates were 22.7 and 6.1 percent, respectively.
For fiscal years 2004 and 2018, female commissioned officer annual attrition rates were 10 and 0.7 percent, respectively, while male commissioned officer annual attrition rates were 6 and 0.4 percent in those same years, respectively. In fiscal years 2004 and 2018, female warrant officer annual attrition rates were 12.5 and 0 percent, and male warrant officer annual attrition rates were 3.2 and 0 percent in fiscal years 2004 and 2018, respectively. Figure 5 shows active-duty servicemember annual attrition rates over time from 2004 through 2018, by gender and pay grade. Additionally, we developed a set of statistical models, all discrete-time duration analyses, using data from fiscal years 2004 through 2018 which accounted for active-duty servicemembers’ time in service (i.e., the period of time from when they joined the military until their separation). The models estimated the association of gender with separation. We accounted for specific servicemember characteristics, such as gender, branch of military service, pay grade, race or ethnicity, marital status, and the existence of dependents to estimate the associations that these characteristics have with active-duty servicemembers separating from the service. The results of our statistical models show that female active-duty servicemembers are more likely to separate from the military than males at any given period of time in service. The average estimated likelihood of female active-duty servicemembers’ separation for each quarter year of time in service is 2.3 percent, while the average estimate for male active-duty servicemembers is 1.8 percent. In relative terms, the likelihood of separation for female active-duty servicemembers is 28 percent higher than the likelihood of separation for male active-duty servicemembers.
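The person-period structure underlying a discrete-time duration analysis like the one described above can be sketched in a few lines. This is an illustrative reconstruction with synthetic records, not GAO's model or data; the record format and all counts below are assumptions for demonstration, and GAO's actual models also controlled for covariates such as pay grade, race or ethnicity, and marital status.

```python
from collections import defaultdict

# Synthetic (gender, quarters_served, separated) records -- not GAO data.
records = [
    ("F", 8, True), ("F", 12, False), ("F", 6, True), ("F", 20, False),
    ("M", 16, False), ("M", 10, True), ("M", 24, False), ("M", 14, False),
]

def person_period_rows(records):
    """Expand each servicemember into one row per quarter year at risk."""
    rows = []
    for gender, quarters, separated in records:
        for q in range(1, quarters + 1):
            # the separation event can occur only in the final quarter served
            event = separated and (q == quarters)
            rows.append((gender, q, event))
    return rows

def hazard_by_gender(rows):
    """Empirical per-quarter separation hazard: events / quarters at risk."""
    at_risk = defaultdict(int)
    events = defaultdict(int)
    for gender, _, event in rows:
        at_risk[gender] += 1
        events[gender] += int(event)
    return {g: events[g] / at_risk[g] for g in at_risk}

rows = person_period_rows(records)
hazards = hazard_by_gender(rows)
# relative difference between the two hazards, mirroring the report's
# "28 percent higher" framing of the female vs. male quarterly likelihoods
relative = hazards["F"] / hazards["M"] - 1
```

In a full analysis the person-period rows would feed a regression (for example, a logistic model) so that the other characteristics can be controlled for; the empirical hazard here shows only the basic quantity being estimated.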
When controlling for various individual and occupational characteristics, including pay grade categories, marital status, race or ethnicity, education level, occupation, and whether the servicemember has dependents, among others, female active-duty servicemembers’ average estimated likelihood of separating from the military per quarter year of time in service ranges from 1.8 percent to 3.1 percent, depending on their branch of service, while that for their male counterparts ranges from 1.4 percent to 2.3 percent, if other personal characteristics remain the same. In relative terms, the likelihood of separation for female active-duty servicemembers is estimated to be 13 to 46 percent higher than that of their male counterparts. Based on our statistical models, we also found the following by particular characteristics:
Married versus unmarried without dependents: In all of the services, both female and male married active-duty servicemembers without dependents are more likely to separate from the military than unmarried male and female active-duty servicemembers without dependents. For example, the likelihoods of separation for both female and male married active-duty servicemembers without dependents in the Air Force and the Navy are twice as high as those for male and female unmarried active-duty servicemembers without dependents in the same services.
Married with dependents versus unmarried without dependents: Married male active-duty servicemembers with dependents in all of the services except the Air Force are less likely to separate from the military than unmarried males without dependents. However, married female active-duty servicemembers who have dependents and are serving in the Army, the Navy, and the Air Force are more likely to separate compared to unmarried female active-duty servicemembers without dependents.
For example, in the Navy, the likelihood of separation for married female active-duty servicemembers who have dependents is 17 percent higher relative to that for unmarried female active-duty servicemembers without dependents. Comparatively, we estimate that the likelihood of separation for married male active-duty servicemembers in the Navy who have dependents is 28 percent lower than the likelihood of separation for unmarried male active-duty servicemembers in the Navy who do not have dependents.
Unmarried with dependents versus unmarried without dependents: In all four services, unmarried female active-duty servicemembers who have dependents are more likely to separate from the military than their unmarried counterparts who do not have dependents. Our analysis produced similar results for unmarried male active-duty servicemembers with dependents, except for those serving in the Navy, who we found are less likely to separate than unmarried male active-duty servicemembers without dependents. More specifically, we estimate that the likelihood of separation for unmarried male and female active-duty servicemembers who have dependents and serve in the Army, Marine Corps, or Air Force is from 9 percent to 32 percent higher than that for their unmarried male and female counterparts who do not have dependents. Further, we estimate that the likelihood of separation for unmarried female active-duty servicemembers who are serving in the Navy and who have dependents is 35 percent higher relative to the likelihood of separation for those female servicemembers who serve in the Navy and are unmarried and do not have dependents.
Pay grade categories: Our analysis found that enlisted male and female active-duty servicemembers in all of the services are more likely to separate from the military than male and female active-duty officers and warrant officers within the same service.
For example, we estimate that the likelihood of separation for male and female officers serving in the Navy is 62 and 63 percent lower, respectively, relative to the likelihood of separation for enlisted male and female active-duty servicemembers serving in the Navy.
Race or ethnicity minority groups versus whites: In all of the services, black and Hispanic female active-duty servicemembers are less likely to separate from the military than white female active-duty servicemembers. All other racial or ethnic minority female active-duty servicemembers are also less likely to separate from the military than white female active-duty servicemembers except in the Army. More specifically, we estimate that black, Hispanic, and all other racial or ethnic minority female active-duty servicemembers in all of the services (except in the Army) are at least 13 percent less likely to separate from the military relative to white female active-duty servicemembers. All other racial or ethnic minority female active-duty servicemembers (except black and Hispanic) serving in the Army are estimated to be 26 percent more likely to separate from the military relative to white female active-duty servicemembers. In 2011, the Military Leadership Diversity Commission’s final report discussed explanations for discrepancies in representation among senior military leaders, including lower retention of mid-level female enlisted and officer servicemembers. Additionally, OSD officials stated that, in 2017, ODEI conducted an assessment of diversity and inclusion among officers that analyzed fiscal year 2012 through 2016 data to determine whether there was a difference between male and female retention within each of the services. According to DOD, ODEI found various increases and decreases in female retention; however, the officials stated that the assessment did not include an analysis to identify the reasons for the differences in retention among female servicemembers within the services.
In its 2017 and 2018 reports, DACOWITS identified dual-military couples as facing retention challenges and the 2017 report stated that, proportionally, more female servicemembers are married to a military spouse than are male servicemembers. Additionally, in the 2017 report, DACOWITS stated that servicemembers who are separated from the military because of issues related to parenthood, including family care plans, are disproportionately female. The DACOWITS report further stated that, according to data provided to DACOWITS by the services, between fiscal year 2007 and 2016, female servicemembers represented between 65 and 83 percent of parenthood-related discharges. We also analyzed 15 years of separation code data (fiscal years 2004 to 2018) to identify the documented reasons why active-duty servicemembers separated from the military during that time. Our analysis of these data found that the reasons active-duty servicemembers separate from the military vary slightly based on gender, pay grade category, and length of service, as well as by time period. For example, misconduct was a top reason for separation from 2004 through 2013 for enlisted male servicemembers with 5 or fewer years of service, whereas pregnancy was one of the top three reasons for separation for female enlisted with 5 or fewer years of service, during that same period. However, neither misconduct for male servicemembers nor pregnancy for female servicemembers was among the top three reasons for separation in fiscal years 2014 through 2018. The results of this analysis are shown below in figures 6, 7, and 8.
Other Factors Identified in the Literature That Can Influence Female Active-Duty Servicemembers’ Decisions to Separate from Military Service
To better understand other factors that may underlie a servicemember’s decision to separate, we reviewed a variety of studies on female active-duty servicemember retention in the military.
Through our review, we identified six factors that were reported to influence female active-duty servicemembers’ separation from the military: work schedules, deployments, organizational culture, family planning, sexual assault, and dependent care.
Work schedules. Specifically, four of the six studies in our literature review cited work schedule as a reason for or factor influencing separation by female active-duty servicemembers. For example, in several studies female active-duty servicemembers cited the demands and uncertainty of their work schedules. In one study, which asked senior female enlisted Army personnel about the primary factors responsible for their decision to leave the military, a review of the participants’ responses indicated that the primary factor responsible for female servicemembers exiting the service sooner than their male counterparts was that the female members believed they constantly had to sacrifice family time for their careers. In another study, former female active-duty naval surface warfare officers cited the uncertainty of their work schedules as having a strong influence on their decision to separate from the military.
Deployments. The occurrence of deployments and their effects on family life were also highlighted in four of the six studies as factors influencing female servicemembers’ decisions to separate from the military. For example, one study of female Air Force pilots identified deployments as a factor that caused them to consider leaving active duty. In another study, which included 295 active-duty and reserve female officers in grades O-1 to O-5, participants in 94 percent of the 54 focus groups mentioned deployments as an important negative influence on retention, given their effect on spouses and children.
Organizational culture. Organizational culture also had an effect on female servicemembers’ decisions to separate from the military in four of the six studies we reviewed.
In one study, female active-duty, reserve, and Air National Guard officers in the Air Force mentioned the lack of female mentors and role models in leadership positions, and the experience of sexism as factors influencing the decision to separate. Female servicemembers also discussed how having leaders who are not supportive or understanding of family needs can contribute to a negative or toxic work environment. Study participants also noted that they often faced sexism and the existence of an “old boys’ network,” especially in career fields dominated by males. As such, these female servicemembers felt they had to work harder to prove themselves and also felt they were sometimes not treated equally because they were female.
Family planning. Three of the six studies in our literature review cited family planning as another characteristic that influences separation for female active-duty servicemembers. In one study, female officers in a majority of focus groups (85 percent of 54 focus groups) mentioned issues related to pregnancy that could affect their decisions to stay in or leave the Air Force. More specifically, Air Force female officers (active duty, reserve, and Air National Guard) cited the difficulty of timing pregnancies to fit within rigid career timelines. These female servicemembers stated that they felt they needed to ensure that pregnancy occurred at certain times in their careers to minimize negative career effects. Even with that effort, the female servicemembers stated that negative effects still persisted due to missed opportunities while pregnant, such as in-residence professional military education, or career-field-specific problems, such as loss of flying time for pilots.
Sexual assault. Two of the six studies in our literature review cited sexual assault as a reason for separation by female active-duty servicemembers.
In one study, female military veterans mentioned both the occurrence of a sexual assault and how it was handled by the military as contributing to their separation. For example, two females stated that the perpetrator was not punished, and another woman cited the lack of support from other servicemembers as contributing to their decisions to separate from the military. In another study examining female officer retention (active duty, reserve, and Air National Guard) in the Air Force, a few participants cited cases in which either they or individuals they knew had decided to leave specifically because of a sexual assault. Participants commented that female officers often do not want to report the incident, deciding instead to separate.

Dependent care. Two of the six studies in our literature review also mentioned challenges with dependent care as influencing female servicemembers’ decisions to separate from the military. For example, in one study, female military veterans cited difficulties being separated from their children for long time periods as a reason for ending military service. These difficulties were both emotional and practical, including limited stable and safe placement options for children while mothers were deployed. In another study, female Air Force officers in 59 percent of 54 focus groups stated that difficulties with childcare development centers on military bases—including service hours that were incompatible with their work schedules, inconsistent quality of care, and long waitlists—could influence their separation decisions. Participants in that study’s focus groups stated that childcare development centers often have limited hours that make it difficult to coordinate childcare with long work hours or shift work. For example, according to the study’s focus group participants, pilots are sometimes required to fly at night and regularly need overnight child care, outside of typical childcare development center hours.
Further, participants stated that some female servicemembers also raised concerns about the quality of care at childcare development centers, noting that the quality of employees is not consistent across locations and that the childcare development centers generally do not provide day-care services that include educational activities to enhance children’s learning, unlike some off-base options. In addition, some female servicemembers in that same study’s focus groups cited problems setting up childcare with childcare development centers before the end of their maternity leave due to lengthy wait lists.

Promotion Rates from Fiscal Year 2004 through 2018 Varied by Gender for Enlisted and Officers, and the Likelihood of Promotion Differs by Demographic Factors

Our analyses determined that for fiscal years 2004 through 2018, female active-duty servicemember promotion rates were slightly lower for enlisted in most years, but higher for officers as compared to their male counterparts. We also found that the percentage of promotions for eligible female and male active-duty servicemembers decreases at certain grade levels, and the likelihood of promotion varies across certain characteristics, including gender and pay grade.

Promotion Rates for Female Enlisted Were Slightly Lower, While Promotion Rates for Female Officers Were Slightly Higher Than Males from Fiscal Years 2004 through 2018

Overall, we estimated that in most years from fiscal years 2004 through 2018, promotion rates for female enlisted active-duty servicemembers were slightly lower than those for male enlisted active-duty servicemembers. Specifically, female enlisted promotion rates were lower than male enlisted promotion rates by a range of 0.1 percentage points to 2.5 percentage points during much of that time period. However, in fiscal years 2015 and 2018, female enlisted promotion rates were higher than their male counterparts by 0.1 percentage points and 0.4 percentage points, respectively.
In contrast, female commissioned officers had higher promotion rates than male commissioned officers each year during that same period. Specifically, from fiscal years 2004 through 2018, female commissioned officer promotion rates ranged from 3.3 to 5.3 percentage points higher than male commissioned officer promotion rates. Similarly, from fiscal years 2004 through 2018, female warrant officer promotion rates were higher—a range of 1.5 to 19.3 percentage points—than male warrant officer promotion rates in most years. However, in fiscal years 2015 and 2016, promotion rates for male warrant officers were higher by 1.4 percentage points and 1.9 percentage points, respectively. Figure 9 shows active-duty servicemember annual promotion rates over time, by gender and pay grade category, for fiscal years 2004 through 2018. We also present additional data in appendix III on servicemember promotion rates in fiscal years 2004 through 2018. The 2017 DACOWITS report stated that female servicemembers are particularly underrepresented in military leadership and, as of July 2017, the percentages of female servicemembers in the highest ranks were much lower than in the lowest ranks, particularly among officers. Further, according to DACOWITS, the percentage of female servicemembers declined by nearly two-thirds from the lowest to highest-ranking commissioned officer position, and by nearly half from the lowest to highest-ranking enlisted position. Through our analysis of DMDC data, we found a similar trend in 2018 with the percentage of female servicemembers declining by nearly three quarters from the lowest to highest-ranking commissioned officer positions (21 percent to 5.4 percent). Additionally, the trend was also similar for enlisted personnel, for which the percentage of female enlisted declined by nearly half from the lowest to highest-ranking positions (16.6 percent to 9.1 percent).
The Likelihood of Servicemember Promotion Varies across Demographic Groups, Including Gender and Other Factors

Based on our discrete time duration analysis, we estimated that promotion rates may vary for female active-duty servicemembers relative to their male counterparts across the services, after adjusting for certain demographic and occupation-specific factors, including gender, time in service, branch of service, pay grade, marital status, and whether the active-duty servicemember has dependents. We estimated that in the Navy, enlisted female active-duty servicemembers may have a lower likelihood of promotion than their male counterparts, whereas the evidence is mixed for the Army, the Marine Corps, and the Air Force after controlling for certain individual- and occupation-level characteristics. Figure 10 presents the likelihood of female promotion as compared to males when controlling for time in service, while figure 11 presents the difference in likelihood of promotion when controlling for various demographic factors. Officials from the Service Women’s Action Network told us that, with regard to career progression, the rigidity and timing of some job requirements for certain military occupational specialties are not conducive to becoming pregnant or raising a young family. Specifically, these officials stated that such requirements—for example, Naval surface warfare tours—often occur at the time in a female active-duty servicemember’s life when she may try to become pregnant or have young children. However, according to these officials, such tours must occur at these specific points in one’s career in order to get promoted.
Similarly, the Military Leadership Diversity Commission reported in its 2011 final report that, although the services do not have a checklist of assignments required for promotion, each service, community, and career field has a notional career path comprising key work and educational assignments, including leadership and staff assignments early on in one’s career, holding command assignments, meeting certain educational milestones, and holding executive officer or assistant positions to current flag or general officers. Further, the report stated that women and minorities face barriers to serving in such key assignments, which can affect their ability to reach senior leadership ranks. The Military Leadership Diversity Commission also reported that one barrier may include lack of sufficient knowledge about these key assignment opportunities, perhaps because women and minorities may not receive the same career counseling or mentoring about key assignments as their white male counterparts. DOD officials stated that as part of the 2017 ODEI assessment, female promotion rates were also analyzed across the services. According to those officials, the assessment found variations in promotion rates from fiscal year 2012 through fiscal year 2016 among female servicemembers; however, officials also stated that the assessment did not include an analysis to identify the reasons for the differences in promotion rates among female and male servicemembers.

DOD Identified Female Recruitment and Retention as Important to Diversity, but the Military Services Have Not Developed Plans to Guide and Monitor Such Efforts

DOD has identified that female recruitment and retention are important to diversity in the military, but the services do not have plans that include goals, performance measures, or timeframes to guide and monitor current or future efforts to recruit and retain female active-duty servicemembers.
While recruiting is an important first step in building a diverse force and increasing the representation of female servicemembers, retention plays a similarly important role in maintaining that diversity once it is achieved. DOD’s 2012-2017 Diversity and Inclusion Strategic Plan, quoting the 2011 National Military Strategy, stated that the all-volunteer force must represent the country it defends and benefits immensely from the different perspectives and linguistic and cultural skills of all Americans. According to ODEI officials, the department is currently updating its diversity and inclusion strategic plan to guide efforts through 2024. However, neither the 2012-2017 plan nor the draft updated plan, according to officials, has a focus on goals, such as recruitment or retention goals, for any one particular demographic group. Officials we interviewed stated that there is a general goal to recruit a force that reflects the makeup of the country it represents as a method for encouraging trust in the military among the population at large. However, according to OSD and service officials, the department emphasizes gender-neutral occupational standards and policies, with its focus on recruiting and retaining the best and brightest servicemembers. Specifically, OSD officials stated that the department’s priorities and goals are aimed at improving the retention and promotion rates of all active-duty servicemembers, while ongoing OSD efforts to evaluate diversity within the department focus more broadly on the overall state of diversity of both the military and civilian workforces. OSD officials further stated that retention goals have, in the past, been misconstrued as quotas based on gender and, as such, the department does not set goals or targets for gender.
While we recognize the department’s concern about goals being misconstrued as quotas, goals are not quotas and we have previously reported that quantitative and qualitative performance measures “help organizations translate their diversity aspirations into tangible practice.” For example, an organization can track data on its workforce to evaluate the effectiveness of the organization’s diversity management efforts and the progress it is making in those efforts. In addition to analyzing quantitative workforce data, we further reported that organizations can use qualitative data derived from interviews, focus groups, and surveys to identify employee perceptions—including available opportunities and work environment or culture—among various segments of their workforces. In its 2017 report, DACOWITS stated that each of the military services experiences challenges retaining women to a varying degree, with a particularly wide gender gap in operational specialties. DACOWITS’ report further stated that concerns persist that this attrition will result in a disproportionate impact to mission readiness if left unresolved. DACOWITS has also made a number of recommendations specific to the services’ efforts to address and increase female representation in the military through the use of goals and targets. For example, in 2014, DACOWITS recommended that the services should have targets to increase the representation of enlisted female servicemembers and that these targets should be benchmarked against the pool of eligible recruits. Subsequently in 2015, DACOWITS recommended, among other things, that the services should set goals to systematically increase the representation of women in the officer and enlisted ranks. However, according to officials from the four services, the services currently do not have plans that include goals, performance measures, and timeframes to guide and monitor efforts to recruit and retain female servicemembers. 
For example, Marine Corps officials stated that DOD has not tasked the Marine Corps to prioritize gender with regard to retention or promotion. Marine Corps officials also stated that the Marine Corps does not have any programs or initiatives that focus specifically on reducing attrition and increasing retention of female servicemembers and that its programs focus on increasing the retention of quality Marines—regardless of gender. As another example, Air Force officials stated that, while the Air Force has some specific initiatives that each have their own goals, performance measures, and timeframes included as part of those initiatives, these efforts have not been consolidated into a deliberate plan that targets female servicemembers. Navy and Army officials also stated that their respective services do not have plans specific to female retention efforts. We found that OSD has not provided guidance to the services to develop and implement plans to guide and monitor their efforts to recruit and retain female servicemembers. While DOD is in the process of updating its diversity and inclusion strategic plan to guide efforts through 2024, the updated plan will focus—like the 2012-2017 plan—on providing an overarching construct for the department’s diversity efforts. DOD’s 2012-2017 Diversity and Inclusion Strategic Plan recognized that, due to the significant amount of time it takes to develop senior DOD leaders, it is essential that the department act to tap into the nation’s growing diverse talent pool. We have previously reported that pressures facing DOD—including increased competition for resources and involvement in more than a decade of conflict—underscore the importance of using a strategic approach to recruiting, developing, and retaining its workforce.
In addition, although DOD has reported that the services generally met overall recruiting and retention goals—goals that do not consider gender—we have also reported in recent years on challenges associated with meeting its goals for certain critical skills and specialties—for example, the medical field and pilots—and rebuilding readiness across the force. Given appropriate planning and monitoring, the department could, as the former Secretary of Defense stated in 2015, benefit by drawing strength from the broadest possible pool of talent, which includes women, who make up over 50 percent of the population. Our prior work on effective strategic workforce planning states that agencies should periodically measure their progress toward meeting human capital goals and the extent to which human capital activities contribute to achieving programmatic goals and provide information for effective oversight by identifying performance shortfalls and appropriate corrective actions. In addition, internal control standards for the federal government state that management should define objectives clearly, including what is to be achieved, who is to achieve it, how it will be achieved—and in what timeframes—in addition to helping ensure that terms are understood at all levels. Finally, the standards also stipulate that management should develop information needed for corrective action, if necessary. Until DOD provides clear guidance and the services establish plans for monitoring and guiding their efforts to recruit and retain female active-duty servicemembers, including establishing goals, performance measures, and timeframes, the department may continue to experience slow growth of the female population and miss opportunities to retain a valuable segment of the population for its active-duty force.
Conclusions

Women have been eligible for appointment and admission to the military service academies for over 40 years and, more recently, DOD has taken steps to open more positions to female servicemembers, including ground combat positions. However, while DOD has identified that it intends to increase diversity—including gender diversity—across the services, data show that the overall percentage of female servicemembers across the department has increased slightly from fiscal years 2004 through 2018. In addition to this slight overall growth, female enlisted and commissioned officer rates of attrition during that same period were slightly higher in comparison to their male counterparts. The percentage of female active-duty servicemembers tends to decrease at the 10-to-less-than-20 years of service category, and female active-duty servicemembers are more likely to separate from the military than their male counterparts. Moreover, from fiscal years 2004 through 2018, promotion rates for female active-duty servicemembers were slightly lower among the enlisted ranks in most years, but higher for officers as compared to their male counterparts. DOD has an ongoing effort to study the state of diversity in the department and is in the process of developing a new Diversity and Inclusion Strategic Plan for 2019-2024. However, these efforts address the department’s overall diversity and do not provide guidance to the services for developing plans to guide and monitor efforts to recruit and retain female active-duty servicemembers. Without such guidance and clear plans that include goals, performance measures, and timeframes to guide and monitor efforts to recruit and retain female servicemembers in the active-duty force, the services are not positioned to achieve the department’s goals of maintaining a ready force that includes the best and the brightest and is also representative of the population it serves.
Recommendations for Executive Action

We are making a total of five recommendations—one to the Secretary of Defense and one to each of the military services. Specifically:

The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness provides guidance to the services, for example, in its forthcoming diversity and inclusion strategic plan, to develop plans, with clearly defined goals, performance measures, and timeframes, to guide and monitor recruitment and retention efforts of female active-duty servicemembers in the military. (Recommendation 1)

The Secretary of the Army should develop a plan, with clearly defined goals, performance measures, and timeframes, to guide and monitor the Army’s female active-duty servicemember recruitment and retention efforts. (Recommendation 2)

The Secretary of the Navy should develop a plan, with clearly defined goals, performance measures, and timeframes, to guide and monitor the Navy’s female active-duty servicemember recruitment and retention efforts. (Recommendation 3)

The Secretary of the Navy should ensure that the Commandant of the Marine Corps develops a plan, with clearly defined goals, performance measures, and timeframes, to guide and monitor the Marine Corps’ female active-duty servicemember recruitment and retention efforts. (Recommendation 4)

The Secretary of the Air Force should develop a plan, with clearly defined goals, performance measures, and timeframes, to guide and monitor the Air Force’s female active-duty servicemember recruitment and retention efforts. (Recommendation 5)

Agency Comments

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix V, DOD and the services concurred with our recommendations and noted steps the department has taken and would be taking. DOD also provided technical comments, which we incorporated, as appropriate.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; the Secretary of the Navy; the Commandant of the Marine Corps; the Secretary of the Air Force; the Office for Diversity, Equity, and Inclusion; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: List of Sources Used in the Literature Review and Content Analysis

Caswell, David C. USAF Female Pilot Turnover Influence: A Delphi Study of Work-Home Conflict. Wright-Patterson Air Force Base, Ohio: Department of the Air Force, Air Force University, Air Force Institute of Technology (June 2016). https://ntrl.ntis.gov/NTRL/dashboard/searchResults/titleDetail/AD1054221.xhtml.

Dichter, Melissa E. and Gala True. “‘This is the Story of Why My Military Career Ended Before It Should Have’: Premature Separation From Military Service Among U.S. Women Veterans.” Affilia: Journal of Women & Social Work, vol. 30, no. 2 (2015): 187-199. http://dx.doi.org/10.1177/0886109914555219. https://dialog.proquest.com/professional/docview/1681926518?accountid=12509.

Keller, Kirsten M., Kimberly Curry Hall, Miriam Matthews, Leslie Adrienne Payne, Lisa Saum-Manning, Douglas Yeung, David Schulker, Stefan Zavislan, and Nelson Lim. Addressing Barriers to Female Officer Retention in the Air Force. Santa Monica, California: RAND Corporation, 2018. https://www.rand.org/pubs/research_reports/RR2073.html.

Pierce, Penny F., TriService Nursing Research Program. Women Veterans Project: Operation Iraqi Freedom. Ann Arbor, Michigan: University of Michigan (2008).
https://ntrl.ntis.gov/NTRL/dashboard/searchResults/titleDetail/PB2013101316.xhtml.

Stoker, Carol and Alice Crawford. Surface Warfare Officer Retention: Analysis of Individual Ready Reserve Survey Data. Monterey, California: Naval Postgraduate School, Graduate School of Business and Public Policy (January 22, 2008). https://ntrl.ntis.gov/NTRL/dashboard/searchResults/titleDetail/ADA476863.xhtml.

Williams, Nanette Marie. The Influence of Contemporary Army Culture on Senior Enlisted Women’s Decision to Commit to a Lifelong Career. Flint, Michigan: Baker College, 2013. https://search.proquest.com/docview/1427847908?accountid=12509.

Appendix II: Objectives, Scope, and Methodology

This report examines (1) trends in the percentage of female active-duty servicemembers in the military and their attrition rates from fiscal year 2004 through 2018, including the reported factors leading to that attrition; (2) how female active-duty servicemember promotion rates compare with those of their male counterparts and among female servicemembers with differing characteristics from fiscal years 2004 through 2018, and what factors influence these rates; and (3) the extent to which DOD and the military services have plans to guide and monitor female active-duty servicemember recruitment and retention. To address these objectives, we focused our review on active-duty enlisted, officers, and warrant officers in all ranks and pay grades, serving within the four military services (the Army, the Navy, the Marine Corps, and the Air Force). For our first and second objectives, we obtained and analyzed servicemember personnel data for fiscal year 2004 through 2018 from the Defense Manpower Data Center (DMDC), including, for example, service start date, branch of service, status, grade, gender, race, marital status, and whether the servicemember has dependents.
We selected fiscal year 2004 through 2018 because, at the time we submitted our request for data, this was the most recent 15-year time period for which DMDC had complete data available. These data were obtained from three different files maintained by DMDC, including the (1) Active-Duty File Monthly Snapshots, (2) Transaction data for active-duty separations for October 1, 2003 through September 30, 2018, and (3) the Defense Enrollment Eligibility Reporting System. The data obtained from DMDC are granular to the individual servicemember and month. We aggregated these data into a single file that allowed us to (1) compute descriptive statistics to show trends and (2) use duration-analysis modeling to examine the likelihood of specific events across various demographic and DOD-specific administrative characteristics. We analyzed these data based on specific demographic characteristics, including gender, race, ethnicity, pay grade, and other variables. While the focus of this review was female active-duty servicemembers, we analyzed data on male active-duty servicemembers, using the same demographic and administrative characteristics, as the primary comparison group. We also analyzed the data to identify and compare the reasons for separation by these different groups and characteristics based on assigned separation designator codes. To assess the reliability of the data obtained from DMDC, we reviewed related documentation, for example, the data dictionary associated with the active-duty file; interviewed knowledgeable officials from DMDC; and conducted both electronic and manual data testing to look for missing or erroneous data. For example, within the data, some servicemembers changed their race, ethnicity, and/or gender over time. Through discussions with DMDC, we determined that these are often errors in the data, but in some instances can be the result of personal decisions by the servicemember.
DMDC recommended using the last known instance for each of these attributes for each point on the servicemembers’ timeline. We implemented this recommendation, as it improved the results and findings and avoided servicemembers being counted across multiple, exclusive demographics—i.e., double counted. Based on these steps, we determined that these data were sufficiently reliable for the purposes of analyzing and reporting on the representation of servicemembers with specific demographic characteristics and the rates of attrition and promotion among those servicemembers for fiscal year 2004 through 2018. We also determined that fiscal year 2004 through 2018 DMDC data were sufficiently reliable for the purposes of constructing a duration analysis statistical model to estimate the likelihood of attrition by servicemembers with specific demographic factors. We used the fiscal year 2004 through 2018 DMDC data to construct descriptive statistics of the demographic composition of the services’ active-duty forces and drew comparisons between female and male servicemembers, and across demographic and administrative characteristics. According to service officials, the department does not have a universal definition for attrition. We, therefore, constructed attrition rates for active-duty servicemembers by capturing (1) any enlisted servicemember who separated more than 1 week before the end of his or her first service contract, and (2) any officer who separated within 3 years of his or her start date. Attrition rates were calculated by taking the total number of members who attritted, per the definitions above, in a given fiscal year and dividing that number by the total number of officers or enlisted servicemembers in that year, times 100 to express as a percent. To prevent double counting of non-attritted members across multiple fiscal years, attritted and non-attritted members were counted in the year that they entered service and not the year that they separated.
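As a rough illustration, the cohort-based attrition-rate calculation described above can be sketched in Python. The record layout (`entry_fy`, `attritted`) is a hypothetical simplification for illustration, not DMDC's actual schema, and whether a member counts as attritted is assumed to have been determined upstream per the report's definitions.

```python
def attrition_rates(members):
    """Compute attrition rates by fiscal year of service entry.

    members: iterable of dicts with two illustrative keys:
      'entry_fy'  - fiscal year the member entered service (int)
      'attritted' - True if the member met the report's attrition
                    definition (enlisted separating more than 1 week
                    before the end of the first contract, or officers
                    separating within 3 years of their start date)
    Members are counted in the year they ENTERED service, which
    prevents double counting across fiscal years.
    """
    totals, attritted = {}, {}
    for m in members:
        fy = m["entry_fy"]
        totals[fy] = totals.get(fy, 0) + 1
        if m["attritted"]:
            attritted[fy] = attritted.get(fy, 0) + 1
    # rate = attritted / total entering cohort, times 100 for percent
    return {fy: 100.0 * attritted.get(fy, 0) / n for fy, n in totals.items()}
```

The same cohort structure would apply separately to enlisted servicemembers and officers, since the two groups use different attrition definitions.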
In order to construct promotion rates for active-duty servicemembers, we used the servicemembers’ time-in-grade, time-in-service, and each service’s policy for time-in-service and time-in-grade minimums for each pay grade to determine eligibility for promotion. For every fiscal year, if a servicemember was eligible for promotion whether they promoted or not, the servicemember was counted as eligible. If the servicemember did promote, then the servicemember was counted as promoted. The promotion rate for each category was calculated as the total number of promoted servicemembers divided by the total number of promotion-eligible servicemembers, times 100 to express as a percent. We also conducted an analysis of associations between each of separation and promotion outcomes and certain demographic characteristics for servicemembers using the servicemember personnel data from DMDC for fiscal years 2004 through 2018, which included quarterly data on individual servicemembers. These data also contain information for each servicemember on the timing of his or her separation and promotions, if any. Specifically, we implemented a discrete time method for the analysis of event histories, using a logit specification. This is a type of duration analysis methodology that is suited to the analysis of event occurrences and their timing—which is the time elapsed until the event occurs (e.g. number of years until separation or promotion). We examined the extent to which each active-duty servicemember’s separation and promotions (or lack thereof) may be associated with certain factors related to that servicemember’s demographic and occupational characteristics. These factors were time-invariant (e.g. race, gender, etc.) or time-varying (e.g. occupation, marital status, etc.).
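The distinctive data-preparation step in a discrete-time event-history analysis is expanding each servicemember's record into one row per period at risk, with a binary outcome that is 1 only in the period the event (separation or promotion) occurs. A minimal sketch of that expansion follows; the field names are hypothetical illustrations, not DMDC's, and real covariates could also vary by period.

```python
def person_period_rows(member_histories):
    """Expand per-member histories into person-period records.

    member_histories: iterable of dicts with illustrative keys:
      'id'         - member identifier
      'periods'    - number of observed quarters of service
      'event_at'   - 1-based period in which the event occurred,
                     or None if the member is censored (no event seen)
      'covariates' - dict of factors (gender, pay grade, ...)
    Each member contributes one row per period at risk; censored
    members contribute rows with event = 0 in every period.
    """
    rows = []
    for m in member_histories:
        last = m["event_at"] if m["event_at"] is not None else m["periods"]
        for t in range(1, last + 1):
            rows.append({
                "id": m["id"],
                "period": t,                       # time-in-service term
                "event": int(m["event_at"] == t),  # 1 only in event period
                **m["covariates"],
            })
    return rows
```

Once the data are in this form, fitting an ordinary logistic regression of `event` on the covariates and period indicators yields the discrete-time hazard model (the logit specification described above); that estimation step is omitted here.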
For our duration models for separation, we generally included (1) gender, (2) marital status, (3) the existence of dependents, (4) race and ethnic groups, (5) pay grade categories, (6) having a bachelor’s degree or higher education degree versus not, (7) whether the individual has been assigned to an overseas duty location, (8) occupation, (9) fiscal year fixed effect, and (10) quarter-year time-in-service fixed effect. We tested multiple models and included various sets of factors. Since the number of female active-duty servicemembers decreases at higher pay grades, this was taken into account for our duration models for promotion. To ensure convergence of our promotion models, we made the following adjustments in control variables. We started with the Marine Corps’ promotion data because the service has the smallest proportion of female active-duty servicemembers among the four services. After testing with multiple sets of different control variables with the data, we decided to use the following control variables. (See table 1.) We could not control for all factors that may affect separation and promotion, such as a servicemember’s performance and labor market conditions. We also did not model the promotion process in the services. Our modeling should thus be viewed as evidence that may inform on possible associations in the data, and does not establish a causal relationship. Additional inquiry into each of the observed separation and promotion cases would be needed to truly ascertain the role of certain factors, such as gender, in each of these cases. Additionally, we conducted a literature review and content analysis of existing research on promotion and retention in the military, with a focus on female servicemembers. To identify studies, we conducted searches of various databases, including ProQuest, EBSCO, Westlaw Edge, Scopus, Dialog, and the National Technical Information Service, for English-language sources published in calendar year 2008 through 2018. 
We searched for peer-reviewed material, government and non-governmental reports, conference papers, books, and dissertations or theses. The database search was conducted from December 21, 2018, to January 10, 2019. This search and review process yielded 213 potentially relevant studies after initial scoping by a research librarian and, after additional screening of titles and abstracts for relevance, resulted in the selection of 87 studies for full text review. Specifically, two analysts sequentially reviewed the full texts for substantive content and reconciled any differences. Two methodologists sequentially reviewed the full texts for methodological considerations and reconciled any differences. Then the analysts and methodologists discussed and reconciled any remaining differences. To be included in our review, studies had to either (1) include factors servicemembers reported about intended or actual separations, including retention; or (2) report analyses designed to identify characteristics that statistically predict service separation or attrition differences among female servicemembers or between female and male servicemembers. The studies had to include primarily one or more of the four military services within DOD and could not focus exclusively on the Coast Guard. The studies also had to include primarily active-duty personnel and could not focus exclusively on reserve component personnel. Studies that focused only on recruitment or accessions, exit or lateral transfer from a career field but not separation from service, or data collected only from military spouses were also deemed out of scope. The studies we included in our literature review were published between 2008 and 2018 and included information relevant to our research objective on female servicemember retention, attrition, or promotion. From the group of 87 studies, we excluded 81 studies because they did not meet our inclusion criteria or the results were deemed not relevant to this review.
The resulting six studies were further reviewed for content. We conducted a content analysis to summarize the relevant results from the literature search by identifying recurring themes. To conduct this content analysis, the team developed a list of six overarching themes with three to seven sub-themes associated with each main theme. The resultant 54 sub-themes were documented in the team’s data collection instrument as a paired main theme and sub-theme. First, an analyst recorded an assessment of whether the study included the theme and sub-theme. A second analyst independently reviewed the same information and recorded an assessment. The two analysts reconciled their two independent assessments to produce the analysts’ consensus and recorded that consensus in the team’s final spreadsheet. All results reported from the studies reviewed were found to be sufficiently reliable for how they are used in this report, and any limitations are mentioned in the text.

For our third objective, we reviewed documentation on the Office of the Secretary of Defense’s (OSD) and services’ efforts to collect and analyze data on diversity in the department, as well as servicemember retention. We reviewed the department’s plans for developing and promoting diversity and inclusion in the force, including the department’s 2012-2017 Diversity and Inclusion Strategic Plan. We also reviewed a draft version of the department’s forthcoming 2019-2024 Diversity and Inclusion Strategic Plan. We evaluated their efforts to determine whether they met federal internal control standards, including that management should design appropriate types of control activities, such as defining objectives clearly and helping ensure that terms are understood at all levels.
We reviewed other publications on female recruitment and retention in the military, including reports and briefings developed by the Defense Advisory Committee on Women in the Services (DACOWITS) and the 2011 final report of the Military Leadership Diversity Commission to determine what others had found and recommended with regard to female retention and participation in the military. We also analyzed our past reports and recommendations, for example, on military personnel management and DOD’s Career Intermission Pilot Program, among others.

For all three objectives, we also interviewed officials from the Military Personnel Policy Office and the Office for Diversity, Equity, and Inclusion (ODEI), both under the Office of the Under Secretary of Defense for Personnel and Readiness, as well as officials from the four military services. We also interviewed representatives from DACOWITS and the Service Women’s Action Network. Further, we reviewed recommendations previously made by DACOWITS and the Military Leadership Diversity Commission aimed at improving the promotion and retention of female servicemembers specifically, and interviewed OSD officials about any progress made by the department and the services to address these recommendations.

We conducted this performance audit from September 2018 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Descriptive Statistics Data Tables

Tables 2 through 14 present snapshots of active-duty data from the Defense Manpower Data Center, spanning the years 2004 through 2018.
Appendix IV: Analysis and Comparison of Active-Duty Servicemember Separation with Characteristics

We developed a set of statistical models, all discrete-time duration analyses, using data from fiscal years 2004 through 2018 that accounted for active-duty servicemembers’ time in service—that is, the period of time from when they joined the military until their separation. We controlled for specific servicemember characteristics such as gender, branch of military service, pay grade, race or ethnicity, marital status, and the existence of dependents to estimate the association of these characteristics with the likelihood of active-duty servicemembers separating from the service. Table 15 depicts the results of our analysis. Odds ratios greater than 1.0 indicate the comparison group (e.g., married female servicemembers without dependents) is more likely to separate than the baseline group (e.g., unmarried female servicemembers without dependents). Odds ratios lower than 1.0 indicate the comparison group (e.g., female officers) is less likely to separate than the baseline group (e.g., female enlisted). Odds ratios from the duration analysis allow us to compare the relative relationships between various characteristics and separation from the military. For categorical variables, the increase or decrease in the likelihood of separation is in comparison to an omitted category, or reference baseline group. Odds ratios that are statistically significant and greater than 1.00 indicate that servicemembers with those characteristics are more likely to separate than the baseline group. Odds ratios that are less than 1.00 indicate that servicemembers with those characteristics are less likely to separate. For example, the odds ratio for married female servicemembers with dependents in the Air Force is 1.203.
This implies that the odds of separation for married female servicemembers with dependents in the Air Force are 1.203 times the odds of separation for unmarried female servicemembers without dependents in the Air Force, holding other factors constant. Put another way, the odds of separation for married female servicemembers with dependents in the Air Force are about 20 percent higher than those for unmarried female servicemembers without dependents in the Air Force, if other conditions remain constant.

Appendix V: Comments from the Department of Defense

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kimberly Mayo (Assistant Director), Jennifer Weber (Analyst in Charge), Adriana Aldgate, Emily Biskup, Charles Culverwell, Edda Emmanuelli-Perez, Cynthia Grant, Chad Hinsch, Yvonne Jones, Zina Merritt, Amie Lesser, Samuel Moore, Moon Parks, Steven Putansu, Leigh Ann Sheffield, Michael Silver, Pamela Snedden, Carter Stevens, Elaine Vaurio, and Lillian M. Yob made key contributions to this report.

Related GAO Products

Military Personnel: Observations on the Department of Defense’s Career Intermission Pilot Program. GAO-17-623R. Washington, D.C.: May 31, 2017.

Military Personnel: Oversight Framework and Evaluations Needed for DOD and the Coast Guard to Help Increase the Number of Female Officer Applicants. GAO-16-55. Washington, D.C.: November 13, 2015.

Military Personnel: DOD Should Develop a Plan to Evaluate the Effectiveness of Its Career Intermission Pilot Program. GAO-16-35. Washington, D.C.: October 27, 2015.

Military Personnel: DOD Is Expanding Combat Service Opportunities for Women, but Should Monitor Long-Term Integration Progress. GAO-15-589. Washington, D.C.: July 20, 2015.

Military Child Care: DOD Is Taking Actions to Address Awareness and Availability Barriers. GAO-12-21. Washington, D.C.: February 3, 2012.

Women in the Military: Attrition and Retention. GAO/NSIAD-90-87BR. Washington, D.C.: July 26, 1990.
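The odds-ratio interpretation in appendix IV can be made concrete with a short computation. The 10 percent baseline separation probability used below is hypothetical, chosen only for illustration; it is not a figure from table 15. The sketch also shows why an odds ratio is not a risk ratio: applying a 1.203 odds ratio to a 10 percent baseline probability raises the probability by less than 20 percent.

```python
def odds_ratio_to_pct_change(odds_ratio):
    """Percent change in the odds of separation relative to the baseline group."""
    return (odds_ratio - 1.0) * 100

def apply_odds_ratio(baseline_prob, odds_ratio):
    """Separation probability for the comparison group implied by applying
    an odds ratio to the baseline group's (hypothetical) probability."""
    odds = baseline_prob / (1 - baseline_prob) * odds_ratio
    return odds / (1 + odds)

# Odds ratio of 1.203: comparison-group odds are about 20 percent higher.
print(round(odds_ratio_to_pct_change(1.203), 1))

# With a hypothetical 10 percent baseline probability of separation,
# the implied comparison-group probability is about 11.8 percent.
print(round(apply_odds_ratio(0.10, 1.203), 3))
```

The second function illustrates the report's caveat that odds ratios describe relative relationships, not absolute separation rates.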
Why GAO Did This Study

The role of female servicemembers in the military has expanded in the last half century as restrictions on female servicemembers serving on active duty, including in combat, have been eliminated. DOD has also stated that recruiting and retaining women is important in order to reflect the nation's population and ensure strong military leadership. House Report 115-676 includes a provision that GAO review female retention and promotion in the military. This report examines (1) trends in the percentage of female active-duty servicemembers in the military and their attrition rates, including reported factors leading to attrition; (2) how female active-duty servicemember promotion rates compare with those of males and among females with differing characteristics, and what factors influence these rates; and (3) the extent to which DOD and the military services have plans to guide and monitor female active-duty servicemember recruitment and retention. GAO analyzed fiscal year 2004 through 2018 personnel data to identify attrition and promotion rates and conducted statistical modeling to determine the likelihood of separation and promotion, reviewed DOD reports and other literature on servicemember attrition, and interviewed officials from DOD and other military organizations.

What GAO Found

The Department of Defense (DOD) experienced slight increases in the overall percentage of female active-duty servicemembers from fiscal year 2004 through 2018 (15.1 percent in fiscal year 2004 to 16.5 percent in fiscal year 2018), with those percentages varying by pay grade category (see figure). During that period, female enlisted and commissioned officers had higher annual attrition rates than corresponding males. However, the gaps between male and female attrition rates have narrowed.
For example, in fiscal years 2004 and 2018, female enlisted servicemembers' annual attrition rates were 33.1 and 8.6 percent, respectively, and enlisted males' annual attrition rates were 22.7 and 6.1 percent, respectively. GAO's statistical model found that the likelihood of separation for female servicemembers is 28 percent higher than that of males. GAO's literature review of selected studies on reasons why females separate from the military identified six themes, including family planning, sexual assault, and dependent care, as influencing separations. GAO's analysis of fiscal year 2004 through 2018 data estimated that promotion rates were slightly lower for female enlisted in most years, but higher for officers as compared to their male counterparts. Specifically, female enlisted promotion rates ranged from 0.1 to 2.5 percentage points lower than male enlisted promotion rates during much of that period. However, from fiscal year 2004 through 2018, female commissioned officer promotion rates ranged from 3.3 to 5.3 percentage points higher than the rates of their male counterparts. GAO's statistical model also estimated that the likelihood of promotion outcomes varies by certain characteristics, such as gender and pay grade. For example, GAO estimated that the likelihood of promotion for female enlisted in the Navy may be lower than that for male enlisted, and the evidence is mixed for the other services. DOD has identified female recruitment and retention as important to diversity in the military, but the services do not have plans that include goals, performance measures, and timeframes to guide and monitor current or future efforts to recruit and retain females. According to officials, DOD is currently updating its diversity and inclusion strategic plan; however, neither its prior plan nor the updated plan includes goals, such as recruitment or retention goals, performance measures, and timelines for any one particular demographic group.
DOD officials stated that retention goals have, in the past, been misconstrued as quotas and, as such, the department does not set goals or targets for gender. However, goals are not quotas and can help guide continued improvement. Without DOD guidance and service plans with goals, performance measures, and timeframes to monitor female recruitment and retention efforts, DOD may continue to miss opportunities to recruit and retain a valuable segment of its active-duty force.

What GAO Recommends

GAO recommends that DOD provide the services with guidance to develop plans with goals, performance measures, and timelines to address female recruitment and retention efforts, and for the services to develop such plans. DOD concurred with the recommendations.
Background

Medication Synchronization

Medication synchronization is a process whereby a pharmacist aligns the refill dates of two or more of a patient’s medications to a single day each month—referred to as the synchronization date. Patients who are interested in medication synchronization must enroll or opt into the service, if offered at their pharmacy. To initiate medication synchronization, the pharmacist selects an anchor medication to which the other medications are synchronized, and dispenses short fills—that is, a quantity of less than a month’s supply—so that the patient has enough medication until the next synchronization date. Figure 1 illustrates the process by which a pharmacist may synchronize three medications for a patient. Before each synchronization date, the pharmacist generally contacts the patient to determine if the patient has had any changes in his or her medications or medical history. The pharmacist then makes any needed adjustments so that the patient can continue to pick up his or her medications on the synchronization date and avoid disruptions in his or her medication regimen.

Prescription Drug Expenditure and Coverage

In 2017, national spending on prescription drugs dispensed by pharmacies totaled over $330 billion. Medicare accounted for over $100 billion and private health plans accounted for over $140 billion of total spending on prescription drugs that year. Medicare provides prescription drug coverage under Part D, a voluntary program in which beneficiaries can elect to enroll. In February 2019, about 45 million or three-fourths of Medicare beneficiaries were enrolled in Part D plans—including stand-alone prescription drug plans and Medicare Advantage prescription drug plans, which combine medical and prescription drug benefits. In comparison, in 2017, about 200 million patients were enrolled in a private health plan that provides prescription drug coverage, among other benefits, according to CMS.
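The alignment step in the medication synchronization process can be sketched as a small date calculation: for each medication, the pharmacist dispenses just enough days' supply to reach the synchronization date. The medication names and dates below are hypothetical, not taken from the report's figure 1.

```python
from datetime import date

def short_fill_days(run_out_dates, sync_date):
    """For each medication whose supply runs out before the synchronization
    date, return the days of short fill needed so that it, too, runs out
    on the synchronization date."""
    return {med: (sync_date - d).days
            for med, d in run_out_dates.items()
            if d < sync_date}

# Hypothetical example: three medications running out on different days,
# synchronized to May 15. The anchor medication (med_A) is already aligned.
run_out = {
    "med_A": date(2024, 5, 15),
    "med_B": date(2024, 5, 7),
    "med_C": date(2024, 5, 12),
}
print(short_fill_days(run_out, date(2024, 5, 15)))
# med_B needs an 8-day short fill; med_C needs a 3-day short fill
```

After these one-time short fills, all three medications run out on the same day and can thereafter be refilled together each month.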
Health plans that provide prescription drug coverage interact with both patients and pharmacies. For patients, health plans may vary their benefits with regard to cost-sharing arrangements—such as copayments for medications—and quantity limits for medications covered—such as restricting the dosage or number of refills of a medication provided within a given period of time. For pharmacies, health plans pay pharmacies a share of the medication costs and a dispensing fee for the pharmacies’ administrative costs in preparing and dispensing the medication.

Limited Information Suggests the Use of Medication Synchronization Has Increased; Some Studies and Stakeholders Reported Several Potential Benefits and Limitations

One Study and Several Stakeholders Suggest the Use of Medication Synchronization Has Increased in Recent Years

Limited information available indicates that the use of medication synchronization has increased, but comprehensive data on its use by pharmacies and patients do not exist. Among the 22 peer-reviewed studies we identified, one study reported that the use of medication synchronization increased among pharmacies and patients. Specifically, the study examined survey data on the use of medication synchronization in retail pharmacies and reported that the number of retail pharmacies using medication synchronization increased from 3,324 in 2013 to 5,534 in 2014, a 66 percent increase. In addition, the study found that the number of patients using medication synchronization at these retail pharmacies increased from 124,608 in 2013 to 438,100 patients in 2014. We did not identify other studies that examined the use of medication synchronization. Officials from all five selected pharmacies that reported using medication synchronization told us that their pharmacies have increased their use of medication synchronization, but they generally could not provide us with data on their patients’ use of medication synchronization over time.
The pharmacies included three retail pharmacy chains—two large national chains and one mid-size regional chain—one independent pharmacy, and one mail order pharmacy. Officials from four of these pharmacies told us that they started using medication synchronization within the last 5 years; officials from the fifth pharmacy told us that they started using medication synchronization in 2011. For example, officials from the two large retail pharmacy chains, each with about 10,000 pharmacies nationwide, told us that they first piloted medication synchronization at a small number of pharmacies in either 2015 or 2016. One of these chains now uses medication synchronization at all of its pharmacies, and the other is in the process of doing so. Officials from the mid-size retail pharmacy chain stated that they piloted medication synchronization in 2011 with about 2,500 patients enrolled across 50 pharmacies. They have since expanded it to about 83,000 patients across all of their more than 90 pharmacies. Seven other stakeholders, including those representing patients and pharmacies, also told us that the use of medication synchronization has increased in recent years, but generally did not provide data on the increase. In addition, officials from an organization representing pharmacies told us that as of 2018, approximately 80 percent of independent pharmacies offered medication synchronization; however, they could not provide data from prior years.

Studies and Stakeholders Suggest that Medication Synchronization Can Be Beneficial to Patients, Pharmacies, and Health Plans

Limited information exists on the effects of medication synchronization, but available studies and stakeholders indicate several potential benefits, primarily for patients. According to CMS officials, CMS does not have data on the effects of medication synchronization, such as patient medication adherence; other stakeholders we interviewed indicated that such national data do not exist.
Seventeen of the 22 peer-reviewed studies we identified evaluated the effects of medication synchronization—14 of these studies evaluated effects for patients and the rest for pharmacies and health plans. However, the data reported by these studies are limited in scope and are not generalizable to broader populations. Twelve of 14 peer-reviewed studies evaluating the potential effects of medication synchronization for patients reported two potential benefits—improved medication adherence or improved medical outcomes. Improved medication adherence. Twelve peer-reviewed studies that evaluated the potential effects of medication synchronization on patients’ adherence reported that medication synchronization improved adherence. For example, nine of the 12 studies compared medication adherence among patients using and not using medication synchronization and found that medication adherence was greater among patients using medication synchronization—one of the most recent studies showed adherence was 3 percent higher for those using medication synchronization. Two studies compared medication adherence among patients before and after using medication synchronization and found that adherence improved after synchronization was started—the most recent of these studies showed an improvement of 2 percent in average adherence after a year of enrollment. The last study reported that 56 percent of patients surveyed stated that they would be more adherent to their medications if their refills were synchronized. In addition, eight of these studies evaluated the effects of medication synchronization for patients with different chronic conditions and found differences by type of chronic condition. According to the studies, medication adherence improves as a result of medication synchronization because it simplifies the refilling process. Improved medical outcomes.
One peer-reviewed study reported that medication synchronization may also lead to improved medical outcomes for patients. The study found that rates of hospitalization and emergency department visits and rates of outpatient visits were 9 percent and 3 percent lower, respectively, among patients using medication synchronization compared with those who were not. Stakeholders also cited improved medication adherence and medical outcomes as potential benefits for patients, and identified additional benefits that may result from medication synchronization. Specifically, 14 of the 15 stakeholders representing patient and pharmacy organizations and selected pharmacies we interviewed said that medication synchronization may help improve patients’ medication adherence, and 12 of these stakeholders said that it may improve patients’ medical outcomes. These stakeholders also indicated other potential benefits for patients: Improved convenience. Medication synchronization improves convenience for patients—for example, by reducing the number of trips patients need to make to the pharmacy or making it easier to manage their medications, according to 10 of the 15 stakeholders representing patient and pharmacy organizations and selected pharmacies. Fewer trips to the pharmacy help to minimize the need for transportation arrangements, which is particularly important for older patients, patients who live in rural areas, and patients who lack reliable transportation. Five of the 10 stakeholders added that medication synchronization simplifies patients’ experience with managing their medications—patients no longer need to keep track of multiple refill dates for all their medications. Under medication synchronization, the pharmacists proactively perform this work and send reminders to the patients. Increased interaction between pharmacists and patients.
Seven of 15 stakeholders representing patient and pharmacy organizations and selected pharmacies told us that medication synchronization increases the interaction patients have with their pharmacists, which may help patients better manage their medication regimens and improve their overall health. For example, a stakeholder representing pharmacies said that prior to the medication synchronization date, pharmacies generally contact patients to confirm that their medications should be filled; as part of this outreach, they also inquire about any changes in the patients’ medical history or therapy. If such changes are identified, pharmacists follow up with patients, and their physicians if necessary, to ensure that patients receive refills reflecting any necessary medication changes. In addition, according to some stakeholders, if a consultation is also provided on the medication synchronization date, pharmacists have more opportunities to answer patients’ questions about medication use, provide counseling, and offer patients other auxiliary services. For example, some stakeholders told us that pharmacists may provide screenings for blood pressure and diabetes, or recommend immunizations to patients when they pick up their medications. Because pharmacists regularly assess patients’ medical history in preparing for medication synchronization, they can target patients who may be at high risk for medical problems or immunization-preventable diseases. Regarding pharmacies, some studies and stakeholders identified the following potential benefits of medication synchronization. Operational efficiencies. Three peer-reviewed studies reported that medication synchronization can lead to operational efficiencies. For example, one study reported that medication synchronization can help pharmacists better manage inventory and personnel costs and improve workflow. 
Nine out of 12 stakeholders representing pharmacy organizations and selected pharmacies also said that medication synchronization can lead to improvements in operational efficiencies. For example, officials from one organization representing pharmacies said that pharmacists save time when they can dispense all of a patient’s medications at one time instead of several times throughout the month. Increased marketability of pharmacies to health plans. According to one peer-reviewed study and five of the 12 stakeholders representing pharmacy organizations and selected pharmacies, the extent to which a pharmacy’s medication synchronization program improves patient care, such as by improving medication adherence, may make the pharmacy more desirable to health plans. For example, according to the study and officials from one organization representing pharmacies, health plans may include pharmacies with highly adherent patients in the plans’ preferred pharmacy networks. Pharmacies in preferred pharmacy networks can offer lower medication prices, attracting more customers. Increased revenue. Three peer-reviewed studies and five of the 12 stakeholders representing pharmacy organizations and selected pharmacies reported that, to the extent that medication synchronization can improve patients’ medication adherence, it can also lead to increased pharmacy revenues generally because of an increase in filled prescriptions. For example, one of the three peer-reviewed studies reported that medication synchronization resulted in an average increase in medication adherence of almost 5 percent over the first 6 months of its use. Similarly, an industry study found that medication synchronization leads to an additional 20 fills per patient per year, and may lead to an average of $1,120 of additional revenue per enrolled patient annually.
The three peer-reviewed studies and the industry study did not examine the causes of the increase in prescription fills, but their authors generally attributed the increase to the improved adherence of patients using medication synchronization. Two of the three studies reported that medication synchronization can increase pharmacy revenues generally because of an increase in filled prescriptions. In addition, officials from four stakeholders representing pharmacy organizations and selected pharmacies told us that pharmacies that use medication synchronization can leverage these opportunities to speak with patients and offer additional services, such as immunizations; these services can further help increase pharmacy revenue. Regarding health plans, some studies and stakeholders identified the following potential benefits of medication synchronization. Higher Medicare quality performance scores. Three peer-reviewed studies reported that medication synchronization can potentially improve health plans’ Medicare quality performance scores. CMS assesses the quality performance of Part D plans using information on various measures, such as adherence to medications for diabetes, high cholesterol, or hypertension. Specifically, CMS rates the plans’ performance using a star rating system, which gives each plan a score of between one and five stars, with five stars being the highest rating. Medication adherence measures are triple weighted in the calculation of a plan’s overall rating. Plans with the highest star ratings are rewarded with member enrollment incentives, while plans with lower star ratings are penalized. In addition, Medicare Advantage plans with high ratings may also receive financial bonuses from Medicare. To the extent that there are improvements in beneficiaries’ medication adherence as a result of medication synchronization, health plans may experience improved performance ratings and the commensurate financial benefits. 
However, only one of the four stakeholders representing health plan organizations and selected health plans indicated improved Medicare quality performance scores as a potential benefit of medication synchronization. Reduced medical costs. Medication synchronization may also benefit health plans by reducing their overall medical costs, according to one peer-reviewed study. The study found that medication synchronization can result in significant savings in medical costs for health plans, despite the increase in medication costs to the health plan. Specifically, the study reported that medical savings per additional dollar spent on medications under medication synchronization ranged from approximately $1 to $37, depending on the medication. According to the study, health plans could potentially experience such reduced medical costs as a result of medication synchronization because when patients are adherent to their medications, they may decrease their utilization of healthcare services. However, only one of the four stakeholders representing health plan organizations and selected health plans cited this as a potential benefit.

Studies and Stakeholders also Identified Potential Limitations of Medication Synchronization for Patients, Pharmacies, and Health Plans

A small number of studies and several stakeholders indicated that there are some potential limitations associated with medication synchronization. For example: Patients. One peer-reviewed study indicated that medication synchronization may not be beneficial for all patients. Similarly, 14 of the 15 stakeholders representing patient and pharmacy organizations and selected pharmacies said that not every patient may want to use medication synchronization. For example, 12 of the 14 stakeholders said that some patients may not be able to afford paying all copayments for their medications at one time each month, which deters them from using medication synchronization.
In addition, one stakeholder said that some patients prefer going to the pharmacy regularly or consider trips to the pharmacy as opportunities for social interaction and may not be interested in medication synchronization. Pharmacies. One peer-reviewed study reported that using medication synchronization is time- and labor-intensive for pharmacies. Specifically, the study reported that almost 60 percent of pharmacists surveyed indicated that implementing medication synchronization involves a significant change in a pharmacy’s workflow. Seven of 12 stakeholders representing pharmacy organizations and selected pharmacies said this was because of several challenges. For example, it may be complicated to set up the initial synchronization, determine the best anchor medication and synchronization date, or adjust patients’ medication synchronization because of changes in their medical needs or therapy. In addition, pharmacists may have to conduct extensive follow-up with health plans because health plans may not be consistent in how they process pharmacies’ claims that involve short fills. For example, private health plans may initially deny coverage of short fills; such denials may require the pharmacist to expend additional resources to follow up with the health plan to obtain approval for the short fill. Health plans. Officials from the two selected health plans told us that they do not require their Part D network pharmacies to use medication synchronization, nor do they compensate pharmacies for providing these services. While all stakeholders representing health plan organizations and selected health plans said that they view medication synchronization as having the potential to improve patients’ medication adherence and health outcomes, two of these stakeholders noted the lack of data explicitly tying medication synchronization to improved patient medication adherence, medical outcomes, and overall medical costs.
A CMS Regulation and Laws in Selected States May Support the Use of Medication Synchronization, Such as by Reducing Patient Cost Sharing

Our review shows that a CMS regulation and laws related to prescription drug coverage in five selected states may support the use of medication synchronization. For example, CMS and the five selected states allow for reduced patient cost sharing for the short fills needed to synchronize a patient’s medications.

CMS does not have a formal medication synchronization program or policy for Medicare; however, according to CMS officials, a CMS regulation related to prescription drug benefits may support medication synchronization by reducing beneficiary cost sharing for certain amounts dispensed. Specifically, CMS issued a regulation that, starting in 2014, required Medicare Part D plans to establish a daily cost-sharing rate (for example, a prorated copayment) when a beneficiary receives less than a month’s supply of a prescription medication—generally referred to as a short fill. According to CMS, the primary goal of the regulation was to reduce medication cost and waste—such as by allowing beneficiaries to initially receive a short fill of a new medication so that they can assess, in consultation with their providers, the efficacy of the medication and any associated adverse side effects. Because short fills may be needed to initially synchronize multiple medications to the same refill date, the prorated copayment may reduce the financial burden on beneficiaries who require these fills, according to CMS officials. In addition, officials from a selected pharmacy and from a technology vendor added that, from a value perspective, beneficiaries may be reluctant to enroll in medication synchronization if they had to pay a full copayment for less than a month’s supply of medication.
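The alignment and proration arithmetic behind short fills can be sketched in a short example. The medication names, copayment amounts, and refill dates below are hypothetical, and choosing the latest next-refill date as the anchor is an assumption for illustration; each short-fill copayment is prorated at a daily rate of the monthly copayment divided by a 30-day supply.

```python
from datetime import date

def synchronization_short_fills(refill_dates, monthly_copays, days_per_fill=30):
    """Align each medication to the latest next-refill date (the anchor) and
    compute the short-fill days and prorated copay needed to get there.
    Copays are prorated at monthly_copay / days_per_fill per day.
    Hypothetical model for illustration, not CMS's actual formula."""
    anchor = max(refill_dates.values())  # assumed anchor: latest refill date
    plan = {}
    for med, next_refill in refill_dates.items():
        gap_days = (anchor - next_refill).days  # short-fill days to reach anchor
        daily_rate = monthly_copays[med] / days_per_fill
        plan[med] = {
            "short_fill_days": gap_days,
            "prorated_copay": round(gap_days * daily_rate, 2),
        }
    return anchor, plan

# Hypothetical beneficiary taking three medications with staggered refill dates.
refills = {
    "med_a": date(2019, 6, 1),  # needs an 8-day short fill to reach the anchor
    "med_b": date(2019, 6, 6),  # needs a 3-day short fill
    "med_c": date(2019, 6, 9),  # anchor medication: no short fill needed
}
copays = {"med_a": 22.50, "med_b": 22.50, "med_c": 22.50}
anchor, plan = synchronization_short_fills(refills, copays)
```

With these assumed $22.50 monthly copayments, the 8-day and 3-day short fills carry prorated copayments of $6.00 and $2.25 rather than two full $22.50 copayments—the kind of reduced beneficiary cost sharing the regulation allows, though the specific dollar amounts here are illustrative only.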
For example, as illustrated in figure 2, to initiate medication synchronization for a beneficiary taking three medications, each with a different refill date, the pharmacist may dispense short fills for two of the three medications. In this case, the pharmacist may dispense 8 days’ supply of one medication and 3 days’ supply of another medication. Prior to the regulation, the beneficiary would have paid $45 in copayments for these two short fills, as compared to $7 with prorated copayments—a difference of $38.

Selected States

The five selected states—Georgia, Illinois, Maine, Texas, and Washington—enacted laws within the last 4 years that may support medication synchronization. Specifically, these laws:

Require insurance coverage of short fills. Laws in all five selected states require health plans in their state to provide coverage for medication short fills. These laws may also support medication synchronization by allowing health plans and pharmacies to work around certain plan policies that impose limits on medication refills. Specifically, officials from a technology vendor told us that some health plans may impose limits on the number of refills that can be dispensed in a month. For example, if a patient is taking five medications and is limited to five refills a month, a short fill would count toward that limit, and the patient may not then be able to get all of his or her medications covered by the health plan that month. Such laws allow the health plan and pharmacy to work around these quantity limits so that the patient can receive the short fills needed to synchronize all of his or her medications. Additionally, two states—Maine and Texas—specifically require their health plans to allow pharmacies to override denials related to refilling a prescription too soon. A pharmacy may receive such a denial when refilling a prescription shortly after having just filled it—for example, when dispensing a short fill and a full refill of a medication too close together.
Officials from two selected pharmacies and an organization representing pharmacies told us that such laws also help reduce the time and resources that pharmacies otherwise would have expended addressing issues with these drug claims.

Require prorated cost sharing for short fills. Like CMS’s regulation, laws in all five selected states require health plans in their state to prorate a patient’s cost sharing, such as a copayment, when the patient receives a short fill of a medication. Officials from four selected pharmacies told us such laws help reduce the financial burden on patients when they first have their medications synchronized. Without such a law, patients would have paid a full copayment for these medications, which may have discouraged or prevented some patients from enrolling in medication synchronization.

Prohibit prorated dispensing fees for short fills. Laws in four of the five selected states prohibit health plans in their state from paying pharmacies a prorated dispensing fee for medication short fills. Pharmacies receive a dispensing fee from health plans for each prescription they fill to cover the pharmacies’ administrative costs of preparing and dispensing a fill. The dispensing fee is in addition to the reimbursement pharmacies receive from health plans for the costs of the medications. In states without this law, health plans may prorate the dispensing fee for short fills—that is, pay a lower fee because a smaller quantity of medication (for example, 10 pills rather than 30 pills) is dispensed. Officials from a technology vendor and an organization representing pharmacies told us that ensuring that a health plan pays a full dispensing fee provides an incentive for pharmacies to use medication synchronization. They explained that a pharmacy’s administrative costs of dispensing a medication remain the same, regardless of the quantity dispensed.

Require medication synchronization process or policy.
Laws in two of the five selected states—Texas and Washington—require health plans in their state to establish a process or policy for providing medication synchronization services. Both states require that, as part of this process or policy, the pharmacist or prescribing physician ensure that medication synchronization is appropriate or in the best interest of the patient before the process is used. In addition to approval from both the pharmacist and physician, Texas also requires that the health plan and patient approve the medication synchronization plan. Officials from an organization representing pharmacies said that involving all these entities further helps ensure the appropriateness of medication synchronization for a particular patient.

While stakeholders generally told us that these laws have helped support medication synchronization, they also said that the absence of such laws has not prevented pharmacies from using it in other states. For example, the five selected pharmacies that reported using medication synchronization—including three pharmacy chains—offered these services in at least some states without such laws. Additionally, officials from a selected pharmacy told us that they continue to offer medication synchronization despite receiving a prorated dispensing fee for short fills.

Agency Comments

We provided a draft of this report to the Department of Health and Human Services (HHS). HHS provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of HHS, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or CosgroveJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Information about Stakeholders Interviewed

We interviewed the following stakeholders:

- Twelve organizations reflecting a range of interests: six representing pharmacies or pharmacists; three representing patients; two representing health plans; and one representing the pharmaceutical industry.
- One organization that specializes in Medicare issues and conducts analysis related to access to and quality of care, among other things.
- Five selected pharmacies that reported using medication synchronization: two large national retail chains; one mid-size regional retail chain; one small independent, single-store retail pharmacy; and one mail order pharmacy.
- One selected large national retail chain pharmacy that reported not using medication synchronization.
- Two selected Medicare health plans that offer prescription drug coverage (Part D) and are among the top five Part D plans covering the largest Medicare populations—the combined Medicare Part D enrollment in these two plans totaled almost 14 million, or 31 percent of all Part D beneficiaries, as of August 2018.
- Two selected medication synchronization vendors that contract with pharmacies to provide technological support in performing tasks such as identifying patients who would benefit from medication synchronization; determining the anchor medication; and setting up automated reminders to patients in advance of their prescription refills. We identified these vendors in peer-reviewed studies on medication synchronization or through interviews with stakeholders.
- Five experts in medication synchronization, identified in peer-reviewed studies on medication synchronization or through interviews with stakeholders.
Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tim Bushfield, Assistant Director; Pauline Adams, Analyst-in-Charge; George Bogart; Nina Daoud; Krister Friday; Melissa Trinh-Duong Ostergard; and Vikki Porter made key contributions to this report.

Bibliography

Andrews, S. B., T. R. Marcy, B. Osborn, and L. G. Planas. “The Impact of an Appointment‐Based Medication Synchronization Programme on Chronic Medication Adherence in an Adult Community Pharmacy Population.” Journal of Clinical Pharmacy and Therapeutics, vol. 42, no. 4 (2017): pp. 461-466.

Barnes, Brenda, Ana L. Hincapie, Heidi Luder, James Kirby, Stacey Frede, and Pamela C. Heaton. “Appointment-Based Models: A Comparison of Three Model Designs in a Large Chain Community Pharmacy Setting.” Journal of the American Pharmacists Association, vol. 58, no. 2 (2018): pp. 156-162.

Butler, Kendra T., Janelle F. Ruisinger, Jessica Bates, Emily S. Prohaska, and Brittany L. Melton. “Participant Satisfaction with a Community-Based Medication Synchronization Program.” Journal of the American Pharmacists Association, vol. 55, no. 5 (2015): pp. 534-539.

Dao, Nancy, Sun Lee, Micah Hata, and Lord Sarino. “Impact of Appointment-Based Medication Synchronization on Proportion of Days Covered for Chronic Medications.” Pharmacy, vol. 6, no. 2 (2018): pp. 1-9.

DiDonato, Kristen L., Kristin R. Vetter, Yifei Liu, Justin R. May, and D. Matthew Hartwig. “Examining the Effect of a Medication Synchronization or an Education Program on Health Outcomes of Hypertensive Patients in a Community Pharmacy Setting.” INNOVATIONS in Pharmacy, vol. 5, no. 3 (2014): pp. 1-9.

Doshi, Jalpa A., Raymond Lim, Pengxiang Li, Peinie P. Young, Victor F. Lawnicki, Joseph J. State, Andrea B. Troxel, and Kevin G. Volpp. “A Synchronized Prescription Refill Program Improved Medication Adherence.” Health Affairs, vol. 35, no. 8 (2016): pp. 1504-1512.
Doshi, Jalpa A., Raymond Lim, Pengxiang Li, Peinie P. Young, Victor F. Lawnicki, Andrea B. Troxel, and Kevin G. Volpp. “Synchronized Prescription Refills and Medication Adherence: A Retrospective Claims Analysis.” American Journal of Managed Care, vol. 23, no. 2 (2017): pp. 98-104.

Ghassemi, Emily, Jennifer Smith, Laura Owens, Charles Herring, and Melissa Holland. “Relationship Between Medication Synchronization and Antiretroviral Adherence.” Journal of the American Pharmacists Association, vol. 58, no. 4 (2018): pp. S78-S82.

Girdish, Charmaine, William Shrank, Sarah Freytag, David Chen, Doug Gebhard, Andrew Bunton, Niteesh Choudhry, and Jennifer Polinski. “The Impact of a Retail Prescription Synchronization Program on Medication Adherence.” Journal of the American Pharmacists Association, vol. 57, no. 5 (2017): pp. 579-584.

Hinson, Jessica L., Gretchen K. Garofoli, and Betsy M. Elswick. “The Impact of Medication Synchronization on Quality Care Criteria in an Independent Community Pharmacy.” Journal of the American Pharmacists Association, vol. 57, no. 2 (2017): pp. 236-240.

Holdford, David A., and Timothy J. Inocencio. “Adherence and Persistence Associated with an Appointment-Based Medication Synchronization Program.” Journal of the American Pharmacists Association, vol. 53, no. 6 (2013): pp. 576-583.

Holdford, David, and Kunal Saxena. “Impact of Appointment-Based Medication Synchronization on Existing Users of Chronic Medications.” Journal of Managed Care & Specialty Pharmacy, vol. 21, no. 8 (2015): pp. 662-669.

Krumme, Alexis A., Robert J. Glynn, Sebastian Schneeweiss, Joshua J. Gagne, J. Samantha Dougherty, Gregory Brill, and Niteesh K. Choudhry. “Medication Synchronization Programs Improve Adherence to Cardiovascular Medications and Health Care Use.” Health Affairs, vol. 37, no. 1 (2018): pp. 125-133.

Krumme, Alexis A., Danielle L. Isaman, Samuel F. Stolpe, J. Samantha Dougherty, and Niteesh K. Choudhry.
“Prevalence, Effectiveness, and Characteristics of Pharmacy-Based Medication Synchronization Programs.” American Journal of Managed Care, vol. 22, no. 3 (2016): pp. 179-186.

Luder, Heidi R., Natalie Kunze, Pamela C. Heaton, and Stacey M. Frede. “An Appointment-Based Model to Systematically Assess and Administer Vaccinations.” Journal of the American Pharmacists Association, vol. 58, no. 3 (2018): pp. 290-295.

Nguyen, E., and D. M. Sobieraj. “The Impact of Appointment‐Based Medication Synchronization on Medication Taking Behaviour and Health Outcomes: A Systematic Review.” Journal of Clinical Pharmacy and Therapeutics, vol. 42, no. 4 (2017): pp. 404-413.

Patterson, Julie, and David Holdford. “Understanding the Dissemination of Appointment-Based Synchronization Models Using the CFIR Framework.” Research in Social and Administrative Pharmacy, vol. 13, no. 5 (2017): pp. 914-921.

Patterson, Julie A., David A. Holdford, and Kunal Saxena. “Cost-Benefit of Appointment-Based Medication Synchronization in Community Pharmacies.” American Journal of Managed Care, vol. 22, no. 9 (2016): pp. 587-593.

Renfro, Chelsea P., Michael Patti, Jordan M. Ballou, and Stefanie P. Ferreri. “Development of a Medication Synchronization Common Language for Community Pharmacies.” Journal of the American Pharmacists Association, vol. 58, no. 5 (2018): pp. 515-521.

Ross, Alexander, Humaira Jami, Heather A. Young, and Richard Katz. “Sync and Swim: The Impact of Medication Consolidation on Adherence in Medicaid Patients.” Journal of Primary Care & Community Health, vol. 4, no. 4 (2013): pp. 240-244.

White, Nicole D. “Pharmacy Medication Synchronization Service Works to Improve Medication Adherence.” American Journal of Lifestyle Medicine, vol. 10, no. 6 (2016): pp. 385-387.

Witry, Matthew, and Thao Hoang. “Community Pharmacist Attitudes on Medication Synchronization Programs.” INNOVATIONS in Pharmacy, vol. 8, no. 2 (2017): pp. 1-7.
Why GAO Did This Study

Medication adherence—that is, taking medications as prescribed—is important because not doing so increases the risk of hospitalization and can result in avoidable medical costs. According to some pharmacy industry groups, medication synchronization may help improve medication adherence, particularly for patients with multiple chronic conditions. More than 40 percent of Medicare beneficiaries had two or more chronic conditions in 2015.

Congress included a provision in the Bipartisan Budget Act of 2018 for GAO to review and report on the use of medication synchronization. In this report, GAO examines (1) what is known about the use and potential effects of medication synchronization and (2) steps CMS and selected states have taken to support its use. GAO identified and reviewed 22 peer-reviewed studies on medication synchronization. In addition, GAO interviewed CMS officials and 30 stakeholders to gather a wide range of perspectives on medication synchronization. Among others, GAO interviewed six selected pharmacies and two selected Medicare health plans. GAO also reviewed CMS regulations as well as medication synchronization laws from five selected states that vary by geographic region. GAO provided a draft of this report to the Department of Health and Human Services, which provided technical comments. GAO incorporated these comments, as appropriate.

What GAO Found

Medication synchronization is a process whereby a pharmacist aligns the refill dates of two or more of a patient's medications to a single day (see figure below). GAO found that no comprehensive national data exist on the extent to which medication synchronization has been used or on its potential effects. However, limited information suggests that the use of medication synchronization has increased in recent years and that it may have benefits.
According to a study published in the American Journal of Managed Care that examined survey data on retail pharmacies, the number of pharmacies using medication synchronization increased from 3,324 in 2013 to 5,534 in 2014. Most of the studies that GAO identified found positive effects from medication synchronization, primarily on patients. For example, a 2018 study reported a 3 percent improvement in medication adherence among patients using medication synchronization compared with those who were not. Several stakeholders also identified potential limitations of using medication synchronization. For example, some patients may not be able to afford paying all the copayments for their medications at one time each month, and some patients prefer the social interaction of multiple trips to the pharmacy each month.

The Centers for Medicare & Medicaid Services (CMS) issued a regulation and some states enacted laws that may help support the use of medication synchronization. While CMS does not have a formal medication synchronization policy for Medicare, a CMS regulation allows for reduced beneficiary cost sharing (for example, a lower copayment) when the beneficiary receives less than a month's supply of a medication. Similar laws pertain to private health plans that provide prescription drug coverage for patients in the five states GAO selected—Georgia, Illinois, Maine, Texas, and Washington. Such measures support medication synchronization because initially aligning the refill dates of multiple medications may require one or more of these medications to be refilled with a quantity that is less than a month's supply. Officials from CMS and four of the selected pharmacies said that lowering the copayments for these refills reduces the financial burden on patients when they first have their medications synchronized. They noted that requiring full copayments for a shorter supply may have discouraged or prevented patients from using medication synchronization.
GAO-20-85
Background

When a disaster overwhelms the ability of state, local, or voluntary agencies to adequately provide essential services on their own, the federal government, when requested, supports disaster response and recovery, providing selected resources where they are needed. The federal government has provided significant funds for transit services following past catastrophic disasters. For example, Congress provided roughly $232 million in response to the 2005 Gulf Coast hurricanes and over $10 billion in response to Hurricane Sandy. FEMA is the federal government’s primary agency for disaster response. In addition to coordinating disaster response and recovery operations, FEMA’s Public Assistance Program provides funding to state and local governments and some nonprofit organizations for recovery efforts after a disaster, including removing debris, implementing emergency protective measures, and repairing or replacing damaged public equipment or facilities. Once the President has declared a disaster, FEMA; the state or territorial government (the recipient); and the local or territorial entities (the subrecipient) work together to develop damage assessments and formulate project worksheets for eligible projects. Project worksheets detail the scope of work and estimated cost for repairing or replacing disaster-damaged infrastructure. After a project has completed FEMA’s review process and is approved, funding is available to FEMA for obligation from the Disaster Relief Fund. The recipient draws down—or withdraws—funding to pay the subrecipient for eligible work upon completion.
Because FTA’s Public Transportation Emergency Relief Program is focused on public transportation specifically—unlike FEMA’s more general program—FTA has primary responsibility for reimbursing emergency response and recovery costs after an emergency or major disaster affects a public transportation system if FTA receives funds for the program in an annual or supplemental appropriation or continuing resolution. The Public Transportation Emergency Relief program is a reimbursable grant program and allows FTA to make grants for capital projects to protect, repair, reconstruct, or replace equipment and facilities of a public transportation system as well as for eligible operating costs. Such costs include reestablishing, expanding, or relocating public transportation route service in the event of a natural disaster that affects a wide area or a catastrophic failure from any external cause. Congress has not provided an annual appropriation for FTA’s Public Transportation Emergency Relief Program but has provided supplemental appropriations following a specific event. Eligible recipients (referred to in this report as “FTA grantees”) of FTA’s Public Transportation Emergency Relief funding are entities that receive funds directly from FTA. Following the appropriation for the 2017 hurricanes, FTA staff and contractors visited sites to develop damage assessments—these assessments provide information on, among other things, the specific location, type of facility or equipment, nature and extent of damage, and a preliminary cost estimate to restore, replace, or reconstruct the damaged system. FTA then uses the information in these damage assessments to determine how to allocate funding among the affected FTA grantees. After FTA announces the allocations, FTA grantees can submit an application for funding to FTA. After FTA has approved the application and obligated funds, recipients must execute the grant agreement to draw down funding for reimbursement of eligible expenses.
As required by MAP-21, FTA and FEMA have entered into a memorandum of agreement (MOA) to delineate the roles and responsibilities of the two agencies and establish procedures to coordinate assistance for public transportation following a disaster. We reported in 2014 that because FTA’s Public Transportation Emergency Relief Program is inherently limited by its inability to fund any activities without specific congressional action (in contrast to the other emergency program we examined), FTA and FEMA face challenges clearly delineating the responsibilities and costs each agency will assume during future disasters. We recommended that FTA and FEMA establish specific guidelines to monitor, evaluate, and report the results of collaborative efforts for future disasters. FEMA concurred with this recommendation and FTA took no position. The agencies addressed the recommendation by: (1) implementing a communications protocol to coordinate the two agencies in providing funding to transit agencies and (2) committing to jointly monitoring, evaluating, and reporting on the effectiveness of agency collaboration following events in which both agencies provided funding. In August and September 2017, Hurricanes Harvey, Irma, and Maria made landfall in Texas, Florida, the U.S. Virgin Islands, and Puerto Rico, affecting over 28 million people and causing significant damage to public transit infrastructure (see fig. 1). FEMA funding was made available through presidential disaster declarations. In February 2018, 6 months after the first hurricane made landfall, Congress appropriated funds to FTA’s Public Transportation Emergency Relief Program for the 2017 hurricanes. 
FTA Allocated Over $230 Million to Repair and Replace Transit Infrastructure, with Most of the Funds Allocated to Puerto Rico

FTA announced on May 31, 2018, that it would allocate about $233 million of appropriated emergency relief funds to 52 transit agencies for response, recovery, and rebuilding projects, with approximately 85 percent of the funds ($198 million) going to Puerto Rico. Most of Puerto Rico’s funds, and around half the funds FTA allocated for response, recovery, and rebuilding ($116 million), will be distributed to San Juan’s rail transit service provider, Tren Urbano (see fig. 2). FTA allocated emergency relief funding to transit agencies based on preliminary cost estimates that the agencies submitted to FTA in damage assessment reports. Transit agencies developed these preliminary cost estimates through field surveys, which are meant to determine the general type and extent of damages. As shown in table 1, FTA allocated funds for various purposes including repairs to rail stations and bus terminals, repair and replacement of vehicles, and repairs to transit buildings and facilities. As previously noted, after FTA allocates funds, transit agencies must submit grant applications with detailed information about each eligible project activity and expense. As of October 2019, 19 transit agencies had submitted grant applications to FTA, and FTA approved and obligated funding for each of the 19 applicants. FTA officials told us they are working with the remaining transit agencies on submitting and finalizing their grant applications.
Many FTA Grantees Applied to FEMA for Funding, and FEMA and FTA Faced Challenges in Coordinating to Avoid Duplicate Funding

More Than Half of FTA’s Grantees Responding to Our Survey Reported Some Interaction with FEMA

Uncertainty regarding whether FTA will receive an appropriation can lead FTA grantees to apply to FEMA for funding, since FEMA is the federal government’s primary agency for disaster response and recovery and can fund transit. This situation increases the importance of FEMA and FTA coordination. FTA did not receive an appropriation until roughly 6 months after the first hurricane’s landfall. FTA grantees, unaware of when or whether FTA would receive an appropriation, could apply during this period to FEMA’s Public Assistance Program for funding. Indeed, more than half of FTA grantees that responded to our survey (25 of 44) reported some interaction with FEMA’s Public Assistance Program by the time of our survey (see fig. 3). Fourteen reported reaching the quality assurance step on a grant application—the final step before receiving funds from FEMA. Six transit agencies received FEMA funds. Once FTA received an appropriation, FTA and FEMA instructed transit agencies to work with FTA, rather than FEMA, on funding requests. As a result, some transit agencies that initially worked with FEMA had to begin a new application with FTA. Fourteen FTA grantees in our survey reported spending more than 3 months working on their FEMA application; however, 10 stated that they could use the work from the FEMA application toward their FTA emergency relief application. In addition, most of the transit agencies we interviewed anticipated this issue, noting that FTA or FEMA officials explained the situation to them before FTA received an allocation.
FTA and FEMA Shared Information with One Another but Faced Challenges Coordinating on Screening Applications to Avoid Duplicate Funding

FTA and FEMA Took Steps to Coordinate and Share Information

After Congress appropriated funds to FTA for the 2017 hurricanes, FTA and FEMA initiated their communication and coordination agreements, including the MOA and the communications protocol, which define coordination activities between the two agencies. Federal agencies that administer programs as a result of a major disaster or emergency, such as FTA and FEMA, cannot provide funding for losses already covered by insurance or other programs. They are not, however, prohibited from awarding funds to an entity that could receive funding from another agency, so long as that entity has not yet received those funds and promises to repay any duplicate assistance. FTA’s and FEMA’s communications protocol also states that it may be appropriate for an agency to receive funding from both FTA and FEMA in a situation where the grantee provides both public transportation services and services other than public transportation. Thus, FTA’s and FEMA’s MOA states that the agencies will coordinate to avoid duplicate funding and to ensure a streamlined reimbursement process. When implementing coordination activities such as FTA’s and FEMA’s MOA and communications protocol, federal internal control standards state that management should design control activities to achieve objectives and respond to risks, such as the risk of providing duplicate funding. FTA and FEMA officials informed us of, and provided documentation of, their coordination efforts, such as biweekly conference calls and email correspondence among staff. For example, when Congress appropriated funds to FTA, FEMA provided FTA a list of agencies that had applied to FEMA for funding. In addition, when FTA reviewed grant applications, FTA staff emailed FEMA staff to inquire whether applicants had already requested funding from FEMA.
To avoid delays in processing applications, FEMA and FTA established an agreement that if FEMA did not respond to such requests within 5 days, then FTA could proceed with processing the application. Based on our document reviews, we found that FTA staff also emailed FEMA staff a copy of the final award. Finally, transit agencies applying to FTA for funding were required to certify whether they had received any transit funding from FEMA and that they would reimburse FTA for any federal funds that duplicated funding provided by FEMA.

FTA and FEMA Faced Challenges Coordinating on Screening Applications to Avoid Duplicate Funding

While FTA and FEMA took steps to coordinate, both agencies approved about $35,000 in funding to one applicant for the same expenses. In June 2019, we found a case in which FEMA and FTA both approved roughly $6,000 to repair a light pole at a bus stop in Collier County, Florida. Specifically, although FEMA had obligated funds to Collier County for the light pole in January 2019, FTA awarded funds for the same light pole in April 2019. One month prior to FTA’s award to Collier County, we notified FTA that Collier County had indicated in our survey that it had been in contact with FEMA. Subsequently, FTA staff twice emailed FEMA staff to inquire as to whether Collier County had requested funds from FEMA, but FEMA staff did not respond. Per their agreement, FTA moved the application forward after receiving no response from FEMA within the 5-day time frame and awarded the funding to Collier County in April 2019. After we notified FTA and FEMA that they both appeared to have awarded funds for the same expense, FEMA de-obligated the funds for Collier County. In addition, FEMA conducted an additional review and found that both agencies had also approved $29,000 in funding for repairs to a transit facility in Collier County. FEMA officials stated they were in the process of de-obligating those funds as well.
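The kind of cross-agency screen that could catch an expense approved by both agencies—such as the Collier County light pole—can be illustrated with a minimal sketch. The record layout (dictionaries with applicant, description, and amount fields) and the matching rule (same applicant and same normalized description) are assumptions for illustration, not the agencies' actual data model or screening procedure.

```python
def find_duplicate_expenses(fta_awards, fema_obligations):
    """Flag expense records that appear in both agencies' lists for the same
    applicant, matching on a case-insensitive (applicant, description) key.
    Illustrative only; real records would need richer matching logic."""
    def key(rec):
        return (rec["applicant"].strip().lower(),
                rec["description"].strip().lower())

    fema_index = {key(rec): rec for rec in fema_obligations}
    duplicates = []
    for rec in fta_awards:
        match = fema_index.get(key(rec))
        if match is not None:
            duplicates.append((rec, match))  # (FTA record, FEMA record) pair
    return duplicates

# Hypothetical records loosely modeled on the Collier County example.
fta = [{"applicant": "Collier County",
        "description": "Bus stop light pole repair", "amount": 6000}]
fema = [{"applicant": "Collier County",
         "description": "Bus Stop Light Pole Repair", "amount": 6000},
        {"applicant": "Collier County",
         "description": "Roadway debris removal", "amount": 12000}]
dups = find_duplicate_expenses(fta, fema)
```

In this sketch the light-pole expense is flagged while the non-transit debris-removal obligation is not, which is the separation the agencies' coordination protocol aims for.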
Although both agencies awarded funds to Collier County, the County had not yet executed the FTA grant or drawn down any of the funds. FTA and FEMA officials noted that both agencies can face challenges in identifying transit expenses submitted to both agencies. For example, FTA may be unaware of transit agencies receiving FEMA funds if these agencies are not direct recipients of such funds, but rather receive funds through a larger entity such as a city, county, or state government. Thus, although FEMA provides FTA with a list of entities that applied for FEMA funds, the list may only show a county’s name, rather than the name of a transit agency. In addition, while FTA also asks applicants whether they have received FEMA funds, applicants may be unaware of the status of their FEMA reimbursement. For example, officials from Collier County’s public transit department told us they were unaware that FEMA had obligated funding for their transit expenses until May 2019 (one month after the FTA award), because it took several months for the funding from FEMA to be processed at the state and county level. While FTA officials shared proposed and final awards with FEMA, we identified 10 cases, including Collier County, in which FEMA officials did not respond within the established 5-day time frame. When we asked why FEMA did not respond within the 5-day time frame, FEMA regional staff stated that the responsible person had since left that office. However, officials noted challenges they face identifying transit expenses contained within applications sent to FEMA by larger entities that may contain hundreds of pages, while at the same time processing a large number of applications related to the hurricanes. Specifically, in order to identify transit expenses within an application, FEMA staff may need to search these hundreds of pages using various transit-related word searches. 
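The search FEMA staff describe—scanning a county-level application of hundreds of pages for transit-related items—can be illustrated with a simple keyword screen. The term list and the page-per-string structure below are assumptions for illustration, not FEMA's actual tooling.

```python
# Illustrative transit-related search terms; not an official FEMA list.
TRANSIT_TERMS = ("transit", "bus", "rail", "paratransit", "ferry")

def flag_transit_pages(pages):
    """Return the 1-based page numbers whose text mentions any transit-related
    term, so reviewers can focus on only those pages of a large application."""
    flagged = []
    for page_num, text in enumerate(pages, start=1):
        lowered = text.lower()
        if any(term in lowered for term in TRANSIT_TERMS):
            flagged.append(page_num)
    return flagged

# Hypothetical excerpt from a county-wide application.
application = [
    "Project 12: roadway debris removal along county roads.",
    "Project 13: repair of bus shelters and a transit facility roof.",
    "Project 14: emergency protective measures at county buildings.",
]
hits = flag_transit_pages(application)
```

Only the page mentioning bus shelters and a transit facility is flagged here; a real screen would need a fuller term list and handling of scanned or non-text pages, which is part of why this task is labor-intensive in practice.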
To illustrate the scale of this task, Collier County alone, according to FEMA officials, currently has a total of 126 active ongoing and obligated projects and 86 inactive projects that were either withdrawn or determined ineligible. After we notified FTA and FEMA that they had approved funding to Collier County for the same expense, both agencies took steps to limit the potential for duplicate funding in future awards. As noted above, FEMA conducted an additional review of applications for which it had not responded to FTA's inquiries within the 5-day time frame and identified the $29,000 for transit facility repairs that FTA and FEMA both approved for Collier County. In addition, FTA officials updated their internal grants guidance to indicate that FTA staff should not process an application if FEMA has not responded and FTA has reason to believe there may be a potential for duplicate funding (for example, if the recipient notifies FTA that it had previously worked with FEMA to reimburse transit expenses). In such cases, FTA may proceed only after FEMA has replied in writing that it has not identified any expenses in the FTA grant that are also in a FEMA grant or, if FEMA does identify duplicate funding, after one agency removes such expenses from its grant to the recipient. In 2014, we noted that evaluating and reporting the results of collaborative efforts can identify areas for improvement and recommended that FTA and FEMA establish specific guidelines to monitor, evaluate, and report the results of collaborative efforts. FTA and FEMA implemented this recommendation and committed to jointly monitoring, evaluating, and reporting on the effectiveness of the agencies' collaboration following future events in which both agencies provide funding. In addition, FTA and FEMA took action to address the duplicate award of funding we identified in our review.
Nonetheless, FEMA staff continue to face challenges identifying transit expenses within applications submitted by larger entities, and FTA may be unaware of whether transit entities are included in such applications. Without identifying and implementing systematic measures to detect duplicate expenses, FTA and FEMA are at risk of awarding funds for the same expenses.

Conclusions

Given that FTA may not receive an appropriation until months after a disaster, transit agencies will continue to submit applications to FEMA when it is unclear whether Congress will provide funding to FTA. This underscores the importance of FTA's and FEMA's coordination to avoid providing duplicate funding. FTA and FEMA have taken important steps to coordinate, including establishing an MOA and communications protocol that outline how FTA and FEMA staff should share information. Although FEMA and FTA both approved a relatively small amount of funding for the same expenses in Collier County, the issues that contributed to this outcome pose a risk of duplicate funding in the future. FTA took steps to strengthen its processes after we identified this duplicate funding, and FEMA conducted additional retroactive reviews to identify any other duplicate funding. However, FEMA will continue to face challenges in identifying transit expenses when they are included in the application of a larger entity such as a city, county, or state government. Moreover, FTA may continue to be unaware when transit entities are included in FEMA applications. FEMA and FTA have committed to monitor, evaluate, and report the results of collaborative efforts on an ongoing basis. Without identifying and addressing the factors that contributed to duplicate funding in the federal response to the 2017 hurricanes, FTA and FEMA will continue to face the risk that both agencies will approve funding for the same expense in the future.
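One systematic measure of the kind described above could take the form of a routine cross-match of each agency's award records against the other's. The sketch below is a hypothetical illustration only; the record fields, the matching key, and the sample data are assumptions, not an actual FTA or FEMA data model:

```python
def find_potential_duplicates(fta_awards, fema_obligations):
    """Flag expense records that appear in both agencies' data.

    Each record is a dict with hypothetical fields: 'recipient' (the county
    or transit agency) and 'description' (the expense). Matching on a
    normalized (recipient, description) key is a deliberately simple
    stand-in for the fuzzier matching a real system would need.
    """
    def key(record):
        return (record["recipient"].strip().lower(), record["description"].strip().lower())

    fema_keys = {key(r) for r in fema_obligations}
    return [r for r in fta_awards if key(r) in fema_keys]

# Illustrative records loosely modeled on the Collier County case.
fta = [
    {"recipient": "Collier County", "description": "Light pole repair, bus stop"},
    {"recipient": "Collier County", "description": "Transit facility roof repair"},
]
fema = [
    {"recipient": "Collier County", "description": "Light pole repair, bus stop"},
]
print([r["description"] for r in find_potential_duplicates(fta, fema)])
# ['Light pole repair, bus stop']
```

Because, as the report notes, FEMA applications may list only a county rather than a transit agency, a real cross-check would also need to resolve recipients to their parent entities before matching.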
Recommendations for Executive Action

We are making two recommendations: one to DOT and one to DHS.

The Secretary of Transportation should direct the Administrator of FTA to identify and develop controls, such as methods to more easily identify transit expenses within applications submitted by larger entities such as a city, county, or state government, to address the risk of duplicate funding. (Recommendation 1)

The Secretary of Homeland Security should direct the Administrator of FEMA to identify and develop controls, such as methods to more easily identify transit expenses within applications submitted by larger entities such as a city, county, or state government, to address the risk of duplicate funding. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report to DOT and DHS for review and comment; their written comments are reproduced in appendixes I and II. In its comments, reproduced in appendix I, DOT concurred with our recommendation. DOT described some of the steps that FTA has taken to coordinate with FEMA, which we note in our report, such as updating its procedures to ensure that an FTA grant does not contain any expenses for which the applicant may have previously requested reimbursement. We continue to believe FTA would benefit from identifying additional internal controls to address the risk of duplicate funding, particularly since FTA and FEMA may still face challenges identifying entities that have applied to both agencies for funding. In its comments, reproduced in appendix II, DHS concurred with our recommendation. DHS stated that FEMA is enhancing its Public Assistance Grants Manager System to address the risk of duplicate funding we identified in our report. This includes implementing new functionality for data exporting, sorting, and filtering to better identify transit-related damages and improved tracking to identify projects that have received FTA funding.
DHS estimates these improvements will be completed by September 30, 2020. DOT and DHS both provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, the Secretary of the Department of Homeland Security, the Administrator of FTA, the Administrator of FEMA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Comments from the Department of Transportation

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Steve Cohen (Assistant Director); Crystal Huggins (Analyst in Charge); Matt Cook; Christopher Currie; Danielle Ellingston; Susan Irving; Kathryn Godfrey; Janet McKelvey; Cheryl Peterson; Brenda Rabinowitz; Malika Rice; Amy Rosewarne; Rebecca Shea; Joe Thompson; Matthew Valenta; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study

In August and September 2017, Hurricanes Harvey, Irma, and Maria made landfall in Texas, Florida, the U.S. Virgin Islands, and Puerto Rico, causing hundreds of millions of dollars in damage to public transit facilities. Access to transit plays an important role in a community's post-disaster recovery. FTA has primary responsibility for providing disaster assistance funding to transit agencies if it receives an appropriation from Congress. If FTA does not receive an appropriation, transit agencies can apply to FEMA for funding. GAO was asked to evaluate the federal government's response and recovery efforts related to the 2017 hurricanes. This report provides information on FTA's emergency relief allocations and examines FTA's and FEMA's coordination. GAO reviewed FTA's allocation of emergency relief funds; conducted site visits to Texas, Florida, and Puerto Rico; obtained survey responses from 44 of 52 transit agencies; and interviewed and reviewed documentation from FTA and FEMA officials.

What GAO Found

In response to hurricanes in 2017, the Federal Transit Administration (FTA) announced in May 2018 that it would allocate about $233 million of appropriated emergency relief funds to 52 transit agencies for response, recovery, and rebuilding projects, with most of that funding going to Puerto Rico ($198 million). Most of Puerto Rico's funds, and around half of all the funds FTA allocated ($116 million), will be distributed to one transit system, Tren Urbano, San Juan's rail-transit service provider (see figure below). While FTA and the Federal Emergency Management Agency (FEMA) shared information and coordinated efforts, both agencies still approved about $35,000 to one applicant for the same expenses. GAO found that FTA awarded a grant in April 2019 that included expenses for which FEMA had already obligated funds in January 2019.
Although FTA contacted FEMA prior to the award to inquire whether the applicant had received FEMA funding, FEMA did not respond within 5 days, and per an agreement between FTA and FEMA, FTA processed the application. After GAO identified the duplicate funding, FTA and FEMA took steps to limit the potential for duplicate funding; FTA, for example, changed its policy of moving applications forward after 5 days if FEMA does not respond. FTA and FEMA officials noted challenges they face in identifying transit expenses in the applications they receive. For example, they may be unaware that a transit agency received FEMA funds if it received those funds through a larger entity such as a city, county, or state government. Although the amount of funding FEMA and FTA approved for the same expenses was relatively small, without addressing these challenges, FTA and FEMA will continue to face the risk that both agencies will approve funding for the same expense in the future.

What GAO Recommends

GAO recommends that FTA and FEMA identify and develop controls, such as methods to more easily identify transit expenses within larger applications, to address the risk of duplicate funding. The Department of Transportation agreed with the recommendation and noted steps FTA has taken to address it. However, GAO believes FTA would benefit from identifying additional internal controls to address the risk of duplicate funding. The Department of Homeland Security agreed with the recommendation and outlined steps FEMA plans to complete in 2020.
Background OECA has a range of compliance monitoring, compliance assistance, and enforcement tools available to elicit compliance with laws and regulations from regulated entities, as shown in table 1. Enforcement actions can result in, among other things, the imposition of penalties, requirements to remedy the violation of law or regulation, or both. OECA has developed policies and guidance for EPA staff that describe the agency’s recommended responses to noncompliance based on a number of factors and the escalation of enforcement responses to continuing noncompliance. EPA guidance on informal and formal enforcement actions provides an example related to the Resource Conservation and Recovery Act. In that example, if a regulated entity does not return to compliance or notify the state or EPA that it cannot return to compliance within a certain number of days after an informal enforcement action, the state or EPA may take a formal enforcement action. Generally, according to this same 2010 EPA guidance, informal enforcement actions address small or isolated problems, and formal enforcement actions can address bigger problems. OECA stores and manages a range of compliance monitoring and enforcement data in ICIS. For example, ICIS includes descriptive information about regulated entities, violations, and the outcome of enforcement actions. ECHO, the public access website that integrates data from multiple agency databases, has an internal component for staff and other federal agencies and publicly available components. Staff in EPA’s 10 regional offices, OECA headquarters staff, and states input data into ICIS, which feeds data into ECHO. Regional office staff and OECA headquarters staff also use statute-specific databases to maintain data on compliance with a particular law or office-specific databases built to maintain data, according to the preferences of a particular regional or headquarters office. 
EPA Collects a Range of Compliance Monitoring and Enforcement Data, but Does Not Maintain Data on Informal Enforcement Actions and Compliance Assistance

EPA requires regional offices to collect and enter a range of information on its compliance monitoring and enforcement activities, such as permit, inspection, and violations data, into the agency's national databases. The agency uses these data to manage its oversight efforts and assess how well the efforts are meeting the agency's strategic objectives. In addition, EPA is piloting an effort to collect data on coordination with states. However, EPA regional offices do not consistently collect or maintain data on informal enforcement actions. In addition, EPA does not require regional offices to collect and maintain data on their compliance assistance activities and therefore has no requirements for regional offices to enter such data into the agency's national databases.

EPA Requires Regional Offices to Collect Some Data to Manage and Assess Its Oversight Efforts

EPA requires regional offices to collect information from various data sources and enter it into national databases to monitor regulated entities' compliance with environmental laws and track the agency's enforcement actions. The information generally includes permit data on limits on emissions or discharges of pollutants into waters; inspection or other evaluation data; violations data (e.g., failure to take or submit results for drinking water samples); informal enforcement actions; and formal enforcement actions, as shown in figure 1. OECA uses the data in its databases to manage the overall enforcement and compliance program and assess how well its efforts are meeting the objectives outlined in the agency's strategic plan, according to EPA officials.
For example, officials in one regional office told us that regional managers typically review ICIS data (for example, the number of inspections conducted) to monitor their progress toward meeting strategic objectives at the regional level. These regional officials said that staff in their office conduct monthly reviews of ICIS data to understand how their current efforts on certain indicators compare to prior years. OECA headquarters officials told us that the agency had begun to pilot a mechanism to collect data that can help measure agency progress in coordinating with states, one of the agency’s strategic objectives. Specifically, OECA officials told us that in 2018 the agency began a pilot effort to track instances in which regional office staff provide assistance with state enforcement actions, also characterized as “state assists.” According to agency guidance issued in June 2019, a state assist is defined as any instance in which the state could not or would not take the action without OECA’s help or any instance in which a state explicitly requests that OECA take over a case after OECA has identified a violation. During the pilot effort, state assists are documented as such when a regional office has expended substantial resources to identify a violation, develop the injunctive relief, or help the state take an action to obtain a remedy for the violation. According to OECA guidance, the pilot effort, which OECA officials expect to continue through 2021, will help the agency better track its efforts in this area. As of June 2019, according to our analysis of written responses, officials in eight of the 10 regional offices described having documented a state assist as defined by OECA. For example, officials in one regional office stated in their written response to our questions that one specific case against a company located in three different states would have been handled by the regional office. 
Instead, the regional office agreed to let two of the states take the lead for the cases in those states, and the regional office handled the case in the third state and documented this as two state assists.

EPA's Regional Offices Do Not Consistently Collect or Maintain Data on Informal Enforcement Actions

OECA collects data on some informal enforcement actions, such as the number of warning letters sent to regulated entities, but EPA regions do not always collect data about these actions, according to EPA headquarters officials. As a result, the data do not tell the full story of OECA's enforcement efforts, according to OECA's Assistant Administrator in testimony during a February 26, 2019, congressional hearing. Furthermore, OECA headquarters officials we interviewed said that data on EPA and state informal enforcement actions are incomplete in EPA's ECHO website, in part because EPA policy and related guidance for each of the various programs define informal enforcement differently, and these definitions can differ from the definitions in ECHO. In a 2010 document, EPA explained how the various agency policy guidance and ECHO define formal and informal enforcement actions differently. For example, the document states that policy guidance for the Clean Air Act defines notices of violation as formal enforcement actions, but that policy guidance for the Clean Water Act and the Resource Conservation and Recovery Act defines notices of violation as informal enforcement actions. Similarly, this same 2010 document states that administrative penalty orders or field citations are considered informal enforcement actions in the policy guidance for the Clean Water Act, but formal enforcement actions in the policy guidance for the Clean Air Act and Resource Conservation and Recovery Act.
In addition, the document states that ECHO characterizes notices of violation under the Clean Air Act as informal enforcement actions even though the policy guidance defines them as formal enforcement actions. OECA headquarters officials highlighted two issues that affect the agency's ability to consistently maintain data on informal enforcement actions: (1) using different definitions of informal enforcement actions across programs and (2) maintaining data on such actions inconsistently. OECA headquarters officials said that they were addressing the first issue, the lack of one clear definition of informal enforcement actions that applies across all of the air, water, and hazardous waste programs. In September 2019, OECA headquarters officials said EPA was finalizing a single definition of informal enforcement actions for the purpose of collecting more consistent information. In January 2020, EPA provided us with a September 30, 2019, memorandum that defines enforcement response tools, including a definition of informal enforcement action that applies across all programs. Regarding the second issue, while most of the regional offices collect data on some informal enforcement actions, they use different mechanisms to maintain these data. According to our analysis of written responses, officials in nine of the 10 regional offices stated that their offices collect data on some informal enforcement actions, such as warning letters, notices of noncompliance, notices of violation, and notices of determination. However, the officials described using different mechanisms for maintaining the data they collect on informal enforcement. For example, officials in five of the nine regional offices that collect data on some informal enforcement actions stated that they maintain the data in ICIS. As we described, ICIS data feeds into ECHO, which has components available to the public.
In three of the nine regional offices that collect data on some informal enforcement actions, staff collect data on such actions in a database other than ICIS, such as a statute- or office-specific database, according to our analysis of written responses. Finally, one of the nine regional offices that collect data on some informal enforcement actions maintains those data in paper records, according to an official in that office. In our October 2017 report on key considerations for agency enforcement decisions, we reported that transparency and availability of data are important to promoting compliance and achieving regulatory objectives. As described earlier, EPA changed the focus of its national priorities from enforcement to compliance and increased its use of informal enforcement actions to achieve its regulatory objectives. Having complete information about informal enforcement actions is essential because EPA has elevated the role of such activities in its overall enforcement efforts. EPA often works informally with regulated entities to help them comply with environmental laws and regulations, according to its 2018 EPA Enforcement Annual Results report. However, the agency does not have complete information on those actions for evaluating its compliance monitoring and enforcement performance. Moreover, more complete and consistent information about OECA’s informal enforcement actions would provide a fuller picture of EPA’s overall enforcement efforts. This, in turn, would better enable EPA and OECA to assess whether they are achieving the agency’s regulatory objectives and improve the transparency of OECA’s informal enforcement actions for Congress and the public. Guidance can help agencies communicate expectations and ensure consistency with a standard. 
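The definitional differences described in the 2010 EPA document can be made concrete with a simple mapping. The sketch below is a hypothetical illustration built only from the examples the document gives; it is not an EPA data structure, and the statute labels are shorthand:

```python
# Classification of the same enforcement tool under different statutes'
# policy guidance, per the examples in EPA's 2010 document.
CLASSIFICATION = {
    ("Clean Air Act", "notice of violation"): "formal",
    ("Clean Water Act", "notice of violation"): "informal",
    ("RCRA", "notice of violation"): "informal",
    ("Clean Air Act", "administrative penalty order"): "formal",
    ("Clean Water Act", "administrative penalty order"): "informal",
    ("RCRA", "administrative penalty order"): "formal",
}

def classify(statute, action):
    """Classify an enforcement action under a statute's policy guidance."""
    return CLASSIFICATION[(statute, action)]

# The same action type yields different classifications across programs,
# which is one reason aggregate counts of "informal actions" are hard to compare.
print(classify("Clean Air Act", "notice of violation"))    # formal
print(classify("Clean Water Act", "notice of violation"))  # informal
```

A single cross-program definition, like the one EPA finalized in its September 30, 2019, memorandum, would collapse this table to one classification per action type.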
While EPA has issued guidance on how various agency policies and ECHO define formal and informal enforcement actions, the agency has not provided guidance to regional offices on how they should collect or maintain data on informal enforcement actions. According to federal standards for internal control, management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals. On September 30, 2019, EPA issued a memorandum that provides definitions for enforcement response tools, including informal enforcement actions, and instructions on how to report such actions. Now that the agency has finalized its definition of informal enforcement actions and specified which mechanisms to use to maintain data on such actions, clearly documenting in guidance to the regional offices how they should use the definition to collect data on these actions would give EPA better assurance that the regional offices consistently collect and maintain these data.

EPA Does Not Require Its Regional Offices to Collect and Maintain Data on Compliance Assistance Activities

According to EPA headquarters officials, OECA stopped requiring regional offices to collect data and report on their compliance assistance activities around 2012. Prior to that time, each regional office had a full-time staff member dedicated to coordinating compliance assistance activities, according to these officials. However, the staff member's activities were the only compliance assistance data that regional offices collected and maintained. EPA officials stated that the regional offices stopped collecting data on the compliance assistance activities associated with this position when the agency redirected the funding for the full-time staff position to compliance monitoring and other enforcement efforts.
As a result, EPA officials told us that the agency does not have consistent data about its compliance assistance activities. EPA officials told us that the agency made a policy decision to stop dedicating funding to compliance assistance but encouraged staff to continue conducting compliance assistance activities as part of the agency's outreach for other programs. EPA headquarters officials said that as of September 2019, the agency had no plans to require regional offices to collect and report data on compliance assistance. However, according to these officials, although the agency stopped funding the compliance assistance coordinator position, regional staff continue to conduct a range of compliance assistance activities as part of their regular enforcement duties. Figure 2 shows the types of compliance and enforcement data that EPA collects and notes that the agency does not require regional offices to collect information about compliance assistance. According to our analysis of written responses, officials in nine of the 10 regional offices reported that they collect some data on the compliance assistance activities their offices conduct. Officials in one office said that they do not collect data on compliance assistance activities because it is not required. The types of data on compliance assistance that the nine regional offices collect, and the methods those offices use for maintaining the data, differ, according to our analysis of written responses. For example, some regional officials described collecting data on compliance assistance provided over the telephone, and other officials described collecting data on on-site compliance assistance provided during inspections. According to our analysis of written responses, officials in two regional offices described providing on-site compliance assistance for minor issues during inspections and tracking the number of times such assistance was provided.
Officials in the nine regional offices that still collect data on some compliance assistance activities described storing the data differently, either in region-specific databases or in paper files. Officials in two of these regional offices said that regional staff decide how to document telephone calls from regulated entities seeking assistance. Officials in one region stated that they no longer conduct large-scale compliance assistance activities, such as workshops or the development of informational materials, because EPA eliminated the reporting requirement. Having complete information about its compliance assistance activities is essential because EPA has elevated the role of such activities in its overall enforcement efforts. However, EPA does not have complete information on its compliance monitoring and enforcement activities, partly because the agency does not require the collection of data on compliance assistance activities. EPA's lack of complete information on its compliance assistance activities is inconsistent with its change in policy. In addition, in our October 2017 report on key considerations for agency enforcement decisions, we reported that transparency and availability of data are important to promoting compliance and achieving regulatory objectives. More complete information about its compliance assistance activities would better position EPA to evaluate its compliance monitoring and enforcement performance. As discussed earlier, most of the regional offices continue to collect some information on compliance assistance even though they are not required to do so and use varying mechanisms to maintain the information. Because EPA does not direct the regional offices to collect data on compliance assistance activities, the agency has not issued guidance instructing regional offices to collect such data and specifying which mechanism to use to maintain them.
However, according to federal standards for internal control, management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals. Without clearly documenting in guidance to the regional offices that they should collect data on compliance assistance activities and specifying which mechanism to use to maintain the data, such as ICIS, EPA will not have the information it needs to track progress toward its strategic objective of increasing the agency's use of compliance assistance activities to help regulated entities comply with laws and regulations.

EPA Communicates the Results of Its Compliance Monitoring Activities and Enforcement Actions to the Public and Congress through a Website and Annual Reports

EPA communicates the results of its compliance monitoring activities and enforcement actions by making data available to the public and Congress through its website and annual reports. EPA's ECHO website allows the public to view data over time, such as the number of facilities inspected by an authorized state or EPA from fiscal years 2011 to 2019. To help the public understand the data presented on its ECHO and other websites, EPA websites list a number of national and state-specific known data limitations concerning the data collected for its environmental programs. For example, the ECHO website identifies whether certain years of data are not appropriate for analyzing trends, such as its data on penalties under the Clean Water Act prior to 2015. EPA issues annual performance reports that include data on compliance monitoring and enforcement to fulfill requirements under the Government Performance and Results Act and other requirements. These reports describe progress toward the three strategic goals and related objectives in EPA's Fiscal Year 2018-2022 Strategic Plan.
In addition, since 2017, EPA has published a Year in Review report that outlines the agency's accomplishments, including in the area of enforcement, using data on its compliance and enforcement actions to present the results of its efforts. At the end of each fiscal year, OECA also publishes a Fiscal Year EPA Enforcement and Compliance Annual Results report and companion data graphs that provide enforcement data over a selected time period on such topics as the number of EPA inspections conducted, cases initiated, and the value of fines and penalties collected. Environmental groups and media outlets have used EPA's data to develop analyses, conclusions, and inferences about changes in EPA's enforcement results. In December 2018, we reported that providing information about a dataset (for example, its known limitations) allows users to determine whether the dataset is suitable for their intended purpose and to make informed decisions about whether and how to use it. For example, EPA's 2000 Quality Manual for Environmental Programs states that published reports with environmental data shall be accompanied by a readily identifiable section or appendix that discusses the quality of the data and any limitations on the use of the data with respect to their original intended application. It also states that the agency's reports should include applicable statements about possible misuse of the data for other purposes. EPA's Fiscal Year 2018 Annual Performance Report includes a link to companion reports on its website that describe, among other things, the sources of the data used in the report and the known limitations of those data. Specifically, the companion reports include information such as the definitions of terms used, units of measurement, data sources, the methods for analyzing the data, and the known limitations of the data.
However, neither of EPA’s other 2018 annual reports we reviewed fully disclosed known limitations to the data the agency included in each report: Year in Review 2018. OECA’s Year in Review 2018 report, the most recent report available at the time of our review, includes a range of data—such as number of actions taken, monetary results, the reduction of emissions in tons, and data over selected time periods— to accompany its statements about the agency’s accomplishments. However, the report does not include any information about data sources or known limitations of the data. Fiscal Year 2018 EPA Enforcement and Compliance Annual Results. EPA’s Fiscal Year 2018 EPA Enforcement and Compliance Annual Results report, also the most recent at the time of our review, includes data sources and some known limitations of the data. For example, the report states that the data on results do not include state and local inspections or enforcement actions. Additionally, the report includes statements about changes in how the agency stores data that may prevent the data from being comparable across years. The report lists the various sources of the data used to create the report’s charts and graphs. EPA has published known limitations of these data on its ECHO website and indicated that broad data issues may affect the completeness, timeliness, or accuracy of the data in its various systems. However, based on our review of the report, it does not include information about known limitations of all of the data in the report. In addition, neither the Year in Review 2018 report nor the Fiscal Year 2018 EPA Enforcement and Compliance Annual Results report includes a readily identifiable section or appendix that discusses the known limitations of the data, as called for by leading practices for transparently reporting government data and as exemplified in EPA’s manual governing environmental data quality. 
In commenting on our assessment of the annual reports, EPA officials did not provide a reason why the reports do not discuss known data limitations but told us in a prior meeting that the documentation on the ECHO website includes the currently known data limitations. Furthermore, EPA’s Fiscal Year 2018 EPA Enforcement and Compliance Annual Results report does not fully describe how the data in the report should be interpreted given the known data limitations the report contains. For example, the 2018 annual results report provides a partial picture of overall enforcement of environmental laws because the data exclude state enforcement actions. In addition, for the yearly data across years (2008 through 2018 or 2012 through 2018), EPA does not fully provide information on limitations in how the data should be analyzed—for example, whether the data are appropriate for the purpose of identifying trends or for providing a snapshot of an activity for a single year. EPA does, however, include information on the impact of one or two large cases on some of the data presented in the report, such as the volume of contaminated soil and water to be cleaned up or the treatment and disposal of hazardous and nonhazardous waste. In our November 2019 report on data transparency, we concluded that without the transparent disclosure of known data limitations, users may view or analyze data without full knowledge of the extent to which the data are timely, complete, accurate, or comparable over time, and that this could lead users to inadvertently draw inaccurate inferences or conclusions from the data. OECA’s Assistant Administrator has discussed the known limitations of EPA’s data in the annual reports. In a February 26, 2019, testimony before Congress, OECA’s Assistant Administrator stated that the averages for some of the metrics used in EPA’s annual results report cannot be interpreted to represent a statistical trend.
OECA’s Assistant Administrator also stated that changes in the number of enforcement actions may be a function of changes in programmatic decisions and may not be reflective of changes in the underlying compliance of regulated entities with environmental statutes. By including the known limitations of data in its annual reports and providing information on the intended use of EPA’s data, as called for by leading practices for transparently reporting government data and as exemplified in existing EPA guidance for environmental data, EPA would have better assurance that Congress and the public are informed about the data presented and how the data should be interpreted.

Conclusions

EPA collects a range of information and uses the information to manage its enforcement and compliance program and assess how well its efforts are meeting the objectives outlined in the agency’s strategic plan and other documents. However, while most of the regional offices collect data on some informal enforcement actions, they use different mechanisms to maintain these data, and the agency has not provided guidance to regional offices on how they should collect or maintain the data. Without documenting in guidance to the regional offices how they should collect data on informal enforcement actions and specifying which mechanism to use to maintain the data, EPA lacks assurance that the regional offices will consistently collect and maintain these data. On September 30, 2019, EPA issued a memorandum that provides definitions for enforcement response tools, including informal enforcement actions and instructions on how to report such actions. We view this as a step in the right direction.
Now that the agency has finalized its definition of informal enforcement actions and provided instructions on how regional offices should report such actions, clearly documenting in guidance how regional offices should use the definition to collect data on these actions would give EPA better assurance that the regional offices consistently collect and maintain these data. Similarly, EPA does not have complete information on its compliance monitoring and enforcement activities because the agency does not require the collection of data on compliance assistance activities. Consequently, the agency has not issued guidance instructing regional offices to collect such data and specifying which mechanism to use to maintain them. Without clearly documenting in guidance to the regional offices that they should collect data on compliance assistance activities and specifying which mechanism to use to maintain the data, such as ICIS, EPA will lack key information. Such information is needed to track progress toward its strategic objective of increasing the agency’s use of compliance assistance activities to help regulated entities comply with laws and regulations. While EPA communicates the results of its compliance monitoring activities and enforcement actions through its website and annual reports, neither of its 2018 annual reports includes a readily identifiable section or appendix that discusses the known limitations of the data. The 2018 annual results report also does not fully describe how the data in the report should be interpreted, given the known data limitations the report contains. By including the known limitations of the data in its annual reports and providing information on the intended use of EPA’s data, EPA would have better assurance that Congress and the public are informed about the data presented and how the data should be interpreted.
Recommendations for Executive Action

We are making the following three recommendations to EPA:

The Assistant Administrator for EPA’s Office of Enforcement and Compliance Assurance should clearly document in guidance to the regional offices how they should use the definition of informal enforcement actions to collect data on these actions. (Recommendation 1)

The Assistant Administrator for EPA’s Office of Enforcement and Compliance Assurance should clearly document in guidance to the regional offices that they should collect data on compliance assistance activities and specify which mechanism to use to maintain the data, such as ICIS. (Recommendation 2)

The Assistant Administrator for EPA’s Office of Enforcement and Compliance Assurance should include the known limitations of data in its annual reports and provide information on the intended use of EPA’s data. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to EPA for review and comment. In its written comments, reproduced in appendix I, EPA stated that it agreed with all three of our recommendations and many of our findings and conclusions. EPA also provided technical comments, which we incorporated into the report, as appropriate. In response to our first recommendation to clearly document in guidance how regional offices should use the definition of informal enforcement to collect data on these actions and specify a mechanism to maintain the data, EPA said that the agency issued a September 30, 2019, memorandum for headquarters and regional enforcement offices to implement. This memorandum provides guidance on EPA definitions for enforcement response tools, to promote consistency and clarity in the use of enforcement terms, according to EPA. EPA also said that the guidance defines “informal enforcement action.” The guidance includes instructions on how to report such actions.
The guidance states that, with two exceptions, headquarters and regional offices are expected to report, in ICIS, all informal enforcement actions across all programs that meet the new definition. In addition, the guidance states that because it is only a definitional document and does not include guidance on appropriate use of the enforcement response policy tools, the agency will work to identify the specific changes in practice needed (i.e., changes in use and reporting). The guidance states that EPA anticipates that informal enforcement actions meeting the new definition will be included in the agency’s certified annual enforcement results beginning in fiscal year 2020. We view EPA’s guidance as a step in the right direction, and the guidance states that EPA will provide training and additional guidance for enforcement staff to ensure consistent implementation across regional offices and headquarters. Additional guidance will provide EPA with an opportunity to specify how regional offices are to use the definition of informal enforcement to collect data on these actions. We modified our recommendation because EPA’s recent guidance specifies mechanisms for EPA employees to maintain data on informal enforcement actions. In response to our second recommendation to clearly document in guidance that regional offices should collect data on compliance assistance activities and specify a mechanism to maintain the data, EPA said that it would collect data on compliance assistance for each of the National Compliance Initiatives and maintain those data in ICIS. In response to our third recommendation to include known data limitations in annual reports and provide information on intended use of its data, EPA stated that it acknowledges the importance of providing information about a dataset to facilitate proper interpretation. 
For that reason, EPA said that, in time for its fiscal year 2020 report, the agency will create a webpage to describe how best to interpret the data presented in the agency’s Fiscal Year EPA Enforcement and Compliance Annual Results report and include a reference to that webpage in the report itself as well as the Year in Review report. In technical comments related to our third recommendation, EPA stated that several of the limitations we identified in the report do not affect the data included in its Fiscal Year EPA Enforcement and Compliance Annual Results report. In considering EPA’s technical comments, we modified the text of the report concerning examples of the annual report’s data limitations, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

Appendix I: Comments from the Environmental Protection Agency

Appendix II: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Chad M. Gorman (Assistant Director); Tahra Nichols (Analyst in Charge); Mark Braza; Courtney Carroux; Tara Congdon; Jazzmin Cooper; Matthew Hunter; Caroline Prado; Dan Royer; Jeanette Soares; Kiki Theodoropoulos; Sonya Vartivarian; and Michelle R. Wong made key contributions to this report.
Why GAO Did This Study

Enforcing environmental laws and regulations, including those governing water, air, and hazardous waste, is a central part of EPA's mission. In partnership with states, EPA oversees compliance with these requirements for about 800,000 regulated entities, such as refineries and sewage treatment plants. OECA carries out much of EPA's compliance and enforcement responsibilities through the agency's 10 regional offices. OECA has a range of compliance assistance, compliance monitoring, and enforcement tools available to elicit compliance with laws and regulations from regulated entities. These tools include conducting on-site inspections, training staff and providing technical assistance, developing cases, and issuing warning letters. GAO was asked to review EPA's enforcement efforts. This report examines, among other objectives, the types of information EPA collects on its compliance assistance, compliance monitoring, and enforcement actions. GAO analyzed written responses to its questions from all 10 regional offices, reviewed agency documents and databases, and interviewed EPA officials in headquarters and regional offices.

What GAO Found

The Environmental Protection Agency (EPA) collects a range of information on compliance and enforcement such as data on inspections, violations, and enforcement actions. The agency uses these data to manage its efforts and assess progress in meeting the agency's strategic objectives. In an August 2018 memorandum, EPA's Office of Enforcement and Compliance Assurance (OECA) reported a key strategic change to increase compliance assistance activities (e.g., training) and informal enforcement actions (e.g., warning letters). However, the agency does not consistently collect or maintain data on either type of action (see figure).
Specifically, OECA has not directed regional offices to collect or report data on compliance assistance activities since 2012 and, consequently, does not have guidance instructing regional offices to collect such data and specifying which mechanism offices should use to maintain these data. Also, the agency did not provide guidance to those offices defining informal enforcement actions or how to maintain data on them until September 30, 2019, but the guidance does not specify how to collect data on such actions. By clearly documenting in guidance how the offices should use the definition to collect data on such actions, EPA could more consistently collect these data. As the figure shows, OECA does not require regional offices to collect data on compliance assistance or complete data on informal enforcement actions. Having complete information about its compliance assistance activities and informal enforcement is essential because EPA has elevated the role of such activities in its overall enforcement efforts. However, because EPA is not consistently collecting these data, the agency cannot be sure it is achieving its strategic objectives. EPA would have better assurance it has the information it needs by clearly documenting in guidance to the regional offices that they should: collect data on compliance assistance activities and informal enforcement actions and specify which mechanism to use to maintain compliance assistance data. By doing so, EPA would have better assurance that the regional offices consistently collect and maintain these data in order to track progress toward the agency's strategic objective of increasing the use of such activities and actions. 
What GAO Recommends

GAO is making three recommendations to EPA, including that it should clearly document in guidance to its regional offices that they should collect data on compliance assistance activities and informal enforcement actions and specify which mechanism to use to maintain compliance assistance data. EPA agreed with GAO's recommendations and stated that the agency has either begun to or plans to implement them.
Background

VA’s mission is to promote the health, welfare, and dignity of all veterans by ensuring that they receive medical care, benefits, social support, and lasting memorials. In providing health care and other benefits to veterans and their dependents, VA relies extensively on IT systems and networks to receive, process, and maintain sensitive data, including veterans’ medical records and other personally identifiable information. Accordingly, effective information security controls based on federal guidance and requirements are essential to ensure that the department’s systems and information are adequately protected from loss, unauthorized disclosure, inadvertent or deliberate misuse, or improper modification, and are available when needed. Implementing an effective information security program and controls is particularly important for VA since it uses IT systems and electronic information to perform essential activities for veterans, such as providing primary and specialized health care services, medical research, disability compensation, educational opportunities, assistance with home ownership, and burial and memorial benefits. The corruption, denial, or delay of these services due to compromised IT systems and electronic information can create undue hardship for veterans and their dependents.

Federal Law and Policy Set Requirements for Securing Federal Systems and Information

The Federal Information Security Modernization Act of 2014 (FISMA) requires the head of each agency to provide information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the information and information systems used by or on behalf of the agency.
The act also requires federal agencies to develop, document, and implement an agency-wide information security program to provide security for the information and information systems supporting their operations and assets by implementing policies and procedures intended to cost-effectively reduce risks to an acceptable level. In May 2017, the president signed Executive Order 13800 on strengthening the cybersecurity of federal networks and critical infrastructure. The order sets policy for managing cybersecurity risk and directs each executive branch agency to use the National Institute of Standards and Technology’s (NIST) cybersecurity framework to manage those risks. The NIST cybersecurity framework identifies specific activities and controls for achieving five core security functions:

Identify: Develop an understanding of the organization’s ability to manage cybersecurity risk to systems, people, assets, data, and capabilities.

Protect: Develop and implement appropriate safeguards to ensure delivery of critical services.

Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event.

Respond: Develop and implement appropriate activities to take action regarding a detected cybersecurity incident.

Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore capabilities or services that were impaired due to a cybersecurity incident.

According to NIST, these five functions provide a high-level, strategic view of the life cycle of an organization’s management of cybersecurity risk.

The 23 Civilian CFO Act Agencies Have Spent Billions on Cybersecurity Activities

In fiscal year 2018, the 23 civilian agencies covered by the Chief Financial Officers Act of 1990 (CFO Act), including VA, reported spending over $6.5 billion on IT security- or cybersecurity-related activities.
The 23 civilian agencies individually reported spending between $9 million and almost $1.9 billion on these activities. Collectively, these 23 agencies spent on average about 14 percent of their total IT expenditures on cybersecurity-related activities. VA reported spending about $386 million on cybersecurity, which represented about 8 percent of its total IT expenditures.

Federal Agencies Continue to Report Large Numbers of Security Incidents, Although VA Has Reported Fewer Incidents In Recent Years

In fiscal year 2018, federal agencies continued to report large numbers of information security incidents. As we previously noted, federal agencies reported over 30,000 security incidents during each of the last three fiscal years. Specifically, agencies reported a total of 30,899, 35,277, and 31,107 information security incidents in fiscal years 2016, 2017, and 2018, respectively. During those same periods of time, VA reported an average of 2,415 incidents annually, although the number of reported incidents steadily decreased from 2,808 to 1,776, as shown in figure 1. In fiscal year 2018, VA reported 1,776 incidents involving several threat vectors. These threat vectors included web-based attacks, phishing attacks, and the loss or theft of computer equipment, among others. Figure 2 provides a breakdown of information security incidents, by threat vector, reported by VA in fiscal year 2018. Perhaps most concerning of the incidents reported by VA is the relatively large percentage of incidents (41 percent) for which VA identified “Other” as the threat vector. Government-wide, agencies identified approximately 27 percent of their incidents in the “Other” category in fiscal year 2018. A large percentage of these incidents may indicate a lack of agency awareness and ability to investigate and catalog incidents.
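The incident and spending figures reported above are internally consistent, and the arithmetic can be checked directly. The sketch below is our own illustration, not part of GAO's analysis: it derives the fiscal year 2017 VA incident count implied by the reported three-year average and endpoints, and the total IT spending implied by the reported 8 percent cybersecurity share.

```python
# Consistency check of the reported VA figures (inputs taken from the text above).
avg_incidents = 2415          # reported VA average, fiscal years 2016-2018
fy2016, fy2018 = 2808, 1776   # reported endpoint counts

# A three-year average plus two endpoints pins down the middle (FY2017) count.
fy2017 = 3 * avg_incidents - fy2016 - fy2018
print(fy2017)  # 2661, which lies between 2,808 and 1,776 as a steady decrease implies

# VA's roughly $386 million was about 8 percent of its total IT expenditures,
# implying total IT spending on the order of $4.8 billion.
total_it_millions = 386 / 0.08
print(round(total_it_millions))  # ~4825 (millions of dollars)
```

The derived FY2017 figure is an inference from the rounded numbers GAO reports, so it is approximate rather than an officially published count.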
Federal Agencies, Including VA, Continue to Have Deficient Information Security Programs

FISMA requires IGs to determine the effectiveness of their respective agency’s information security programs. To do so, OMB instructed IGs to provide a maturity rating for agency information security policies, procedures, and practices related to the five core security functions—identify, protect, detect, respond, and recover—established in the NIST cybersecurity framework, as well as for the agency-wide information security program. The ratings used to evaluate the effectiveness of agency information security programs are based on a five-level maturity model, as described in table 1. According to this maturity model, Level 4 (managed and measurable) represents an effective level of security. Therefore, if an IG rates the agency’s information security program at Level 4 or Level 5, then that agency is considered to have an effective information security program. VA was one of 18 CFO Act agencies where the IG determined that the agency-wide information security program was not effectively implemented during fiscal year 2018. The VA IG also determined the department’s maturity level for each of the five core security functions: Level 2 (defined) for the Detect function; Level 3 (consistently implemented) for the Identify, Protect, and Recover functions; and Level 4 (managed and measurable) for the Respond function. As shown in figure 3, VA’s ratings were generally consistent with the maturity level ratings of other CFO Act agencies.

Most CFO Act Agencies, Including VA, Had Significant Security Control Deficiencies over Their Financial Reporting

Agency IGs or independent auditors assess the effectiveness of information security controls as part of the annual audits of the agencies’ financial statements.
The reports resulting from these audits include a description of information security control deficiencies related to the five major general control categories defined by the Federal Information System Controls Audit Manual (FISCAM):

security management controls that provide a framework for ensuring that risks are understood and that effective controls are selected, implemented, and operating as intended;

access controls that limit or detect access to computer resources, thereby protecting them against unauthorized modification, loss, and disclosure;

configuration management controls that prevent unauthorized changes to information system resources and assure that software is current and known vulnerabilities are patched;

segregation of duties controls that prevent an individual from controlling all critical stages of a process by splitting responsibilities between two or more organizational groups; and

contingency planning controls that help avoid significant disruptions in computer-dependent operations.

For fiscal year 2018, most of the 24 CFO Act agencies had deficiencies in most of the control categories, as illustrated in figure 4. VA’s IG reported deficiencies in each of these categories for the department. As a result of these deficiencies, the IGs at 18 of the 24 CFO Act agencies designated information security as either a material weakness (six agencies, including VA) or significant deficiency (12 agencies) in internal control over financial reporting for their agency. For VA, fiscal year 2018 was the 17th year in a row that the department had reported a material weakness in information security. In addition, IGs at 21 of the 24 agencies, including VA, cited information security as a major management challenge for their agency for fiscal year 2018.
Most Civilian CFO Act Agencies, Including VA, Have Reported Meeting Many Cybersecurity Implementation Targets

The administration has developed key milestones and performance metrics for agency chief information officers (CIO) to use to assess their agency’s progress toward achieving outcomes that strengthen federal cybersecurity. The milestones and metrics have specific implementation targets, most of which are expected to be met by the end of fiscal year 2020. As of fiscal year 2018, most civilian CFO Act agencies, including VA, had reported meeting most of the implementation targets for that year. VA reported meeting six of 10 targets. Table 2 shows the number of agencies meeting their targets as of fiscal year 2018, as well as VA’s status in doing so.

VA Faces Key Security Challenges As It Modernizes and Secures Its Information Systems

In several reports issued since fiscal year 2016, we described deficiencies related to key challenges that VA has faced in safeguarding its information and information systems. The challenges we reported related to effectively implementing information security controls; mitigating known security deficiencies; establishing elements of its cybersecurity risk management program; and identifying critical cybersecurity staffing needs. Our work stresses the need for VA to address these challenges as well as manage IT supply chain risks as it modernizes and secures its information systems.

Effectively Implementing Information Security Controls

VA has been challenged to effectively implement security controls over its information and information systems. As previously mentioned in this statement, the VA IG reported that the department did not have an effective information security program and has had deficient information security controls over its financial systems. The weaknesses described by the IG are consistent with the control deficiencies we identified during an examination of VA’s high-impact systems that we reported on in 2016.
In those reports, we described deficiencies in VA’s implementation of access controls, patch management, and contingency planning. These deficiencies existed, in part, because the department had not effectively implemented key elements of its information security program. Until VA rectifies reported shortcomings in its agency-wide information security program, it will continue to have limited assurance that its sensitive information and information systems are sufficiently safeguarded.

Adequately Mitigating Known Security Deficiencies

VA has not consistently mitigated known security deficiencies in a timely manner. As mentioned earlier, VA has reported a material weakness in information security for financial reporting purposes for 17 consecutive years. In fiscal year 2016, we recommended 74 actions for the department to take to improve its cybersecurity program and remedy known control deficiencies with selected high-impact systems. However, as of October 2019, over 3 years later, VA had implemented only 32 (or 43 percent) of the 74 recommendations. One of the remaining unimplemented recommendations calls for the department to consistently and comprehensively perform security control assessments. This recommended activity is an important element of a cybersecurity program and helps to provide assurance that controls are operating as intended and to detect controls that are not functioning correctly. VA has also been challenged in assuring that its actions to mitigate vulnerabilities and implement recommended improvements are effective. The department has asserted that it had implemented 39 of the 42 remaining open recommendations from our fiscal year 2016 reports. However, the evidence VA provided was insufficient to demonstrate that it had fully implemented the recommendations.
The department subsequently provided additional evidence, which was also insufficient, indicating that its remedial action process was not validating the effectiveness of actions taken to resolve known deficiencies. Until VA adequately mitigates security control deficiencies, the sensitive data maintained on its systems will remain at increased risk of unauthorized modification and disclosure, and the systems will remain at risk of disruption.

Fully Establishing Elements of a Cybersecurity Risk Management Program

VA has been challenged in managing its cybersecurity risk. In July 2019, we reported that the department had fully met only one of the five foundational practices for establishing a cybersecurity risk management program. Although VA established the role of a cybersecurity risk executive, the department had not fully:

developed a cybersecurity risk management strategy that addressed key elements, such as risk tolerance and risk mitigation strategies;

documented risk-based policies that required the department to perform agency-wide risk assessments;

conducted an agency-wide cybersecurity risk assessment to identify, assess, and manage potential enterprise risks; or

established coordination between cybersecurity and enterprise risk management.

VA concurred with our four recommendations to address these deficiencies and asserted that it is acting to do so. Nevertheless, until VA fully establishes a cybersecurity risk management program, its ability to convey acceptable limits regarding the selection and implementation of controls within the established organizational risk tolerance will be diminished.

Identifying Critical Cybersecurity Staffing Needs

VA has been challenged to accurately identify the work roles of its workforce positions that perform IT, cybersecurity, or cyber-related functions—a key step in identifying its critical cybersecurity staffing needs.
In March 2019, we reported that the department had likely miscategorized the work roles of many of these positions in its personnel system. Specifically, VA had reported that 3,008 (or 45 percent) of its 6,636 positions in the 2210 IT management occupational series—positions that most likely performed IT, cybersecurity, and cyber-related functions—were not performing these functions. VA concurred with our recommendation to review the work roles for positions in the 2210 IT management occupational series and assign the appropriate work roles, and stated that it had begun to do so. Nevertheless, until VA completely and accurately categorizes the work roles of its workforce positions performing IT, cybersecurity, and cyber-related functions, the reliability of the information needed to improve workforce planning will be diminished and its ability to effectively identify critical staffing needs will be impaired.

Managing IT Supply Chain Risks as Part of IT Modernization Programs

Assessing and managing supply chain risks are important considerations for agencies, including VA, when operating and modernizing IT systems. In July 2018, we reported that reliance on a global IT supply chain introduces risks to federal information systems. We noted that supply chain threats are present during various phases of a system’s development life cycle and we identified the following threats:

Installation of malicious or intentionally harmful hardware or software;

Installation of counterfeit hardware or software;

Failure or disruption in the production or distribution of critical products;

Reliance on a malicious or unqualified service provider; and

Installation of hardware or software that contains unintentional vulnerabilities, such as defects in code that can be exploited.

These threats can have a range of impacts, including allowing adversaries to take control of systems or decreasing the availability of materials or services needed to develop systems.
Accordingly, agencies such as VA need to take appropriate measures to assess and manage IT supply chain risks as they operate and modernize their information systems. Failure to do so could result in data loss, modification, or exfiltration; loss of system availability; and a persistent negative impact on the agency’s mission. In summary, similar to other federal agencies, VA continues to be challenged in implementing an effective agency-wide program and controls for securing its information and information systems. As VA pursues efforts to modernize and secure its IT systems, it will need to successfully address multiple challenges in order to achieve effective outcomes. Chair Lee, Ranking Member Banks, and Members of the Subcommittee, this completes my written statement. I would be pleased to answer your questions.

GAO Contact and Staff Acknowledgments

If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Jeffrey Knott (Assistant Director), Di’Mond Spencer (Analyst-in-Charge), Chris Businsky, Nancy Glover, Franklin Jackson, and Daniel Swartz. Also contributing were Melina Asencio, Scott Pettis, and Zsaroq Powe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study In providing health care and other benefits to veterans and their dependents, VA relies extensively on IT systems and networks to receive, process, and maintain sensitive data, including veterans' medical records and other personally identifiable information. Accordingly, effective security controls based on federal guidance and requirements are essential to ensure that VA's systems and information are adequately protected from loss, unauthorized disclosure, inadvertent or deliberate misuse, or improper modification, and are available when needed. For this testimony, GAO summarizes the status of information security across the federal government and particularly at VA. It also discusses the security challenges that VA faces as it modernizes and secures its information systems. To develop this statement, GAO reviewed its prior reports and relevant Office of Management and Budget, IG, and agency reports. What GAO Found For fiscal year 2018, inspectors general (IGs) rated agencies' information security programs against the five core security functions—identify, protect, detect, respond, and recover—established by the National Institute of Standards and Technology's cybersecurity framework. VA's ratings were generally consistent with the ratings of other major agencies (see figure), and its information security program was one of 18 agency programs that IGs deemed ineffective. Most major agencies, including VA, had significant security control deficiencies over their financial reporting. For example, for fiscal year 2018, VA's IG reported deficiencies in control areas, such as security management, access control, configuration management, segregation of duties, and contingency planning. Additionally, as of fiscal year 2018, VA reported meeting six of the 10 cybersecurity performance targets set by the administration. VA faces several security challenges as it secures and modernizes its information systems.
These challenges pertain to effectively implementing information security controls; mitigating known vulnerabilities; establishing elements of its cybersecurity risk management program; and identifying critical cybersecurity staffing needs. VA also faces the additional challenge of managing IT supply chain risks as the department takes steps to modernize its information systems. What GAO Recommends In 2016, GAO recommended 74 actions for VA to take to address deficiencies and improve its cybersecurity program. However, as of October 2019, VA had not demonstrated that it had addressed 42 of these recommendations. In 2019, GAO made four additional recommendations to improve the department's cybersecurity risk management program and one recommendation to accurately identify work roles of IT and cybersecurity workforce positions. VA concurred with these recommendations and planned to implement them.
Background IDEA was enacted to ensure that all children with disabilities have access to a free appropriate public education (FAPE); to protect the rights of those children and their parents; and to assist states, localities, educational service agencies, and federal agencies in educating those children. Part C of IDEA provides grants to states for Early Intervention services for infants and toddlers (birth through 2 years) with developmental delays or diagnosed conditions that have a high likelihood of developmental delay. Part B of IDEA provides grants to states to assist them in providing special education and related services to eligible children with disabilities beginning at age 3 and possibly lasting to the student’s 22nd birthday, depending on state law or practice. Special Education Administration and Funding In fiscal year 2019, the total appropriation for IDEA Parts B and C was approximately $13.2 billion ($12.8 billion for Part B and $470 million for Part C). These funds are awarded through formula grants to state agencies which, in turn, provide these funds to eligible entities (school districts under Part B and early intervention service providers under Part C) to carry out applicable IDEA requirements. (See table 1.) Part C (Early Intervention for Infants and Toddlers, Birth to 2 Years) Each state has a designated lead agency—called a Part C Lead Agency—that is responsible for administering, supervising, and monitoring Part C. Part C requires each state to have a continuous process of public awareness activities and evaluations designed to identify and refer as early as possible all young children with disabilities and their families who are in need of Early Intervention services. By law, public awareness efforts should include disseminating information to parents and those likely to make referrals, especially hospitals and physicians. 
States have disseminated this information in different ways, including through television ads, pamphlets, and posters describing Part C and how parents can obtain services for their child. Under Part C of IDEA, states must also provide services to any child under 3 years of age who is developmentally delayed. These delays must be measured by appropriate diagnostic instruments and procedures in one or more areas of cognitive development, physical development, communication development, social or emotional development, and adaptive development, or the child must have a diagnosed physical or mental condition that has a high probability of resulting in developmental delay. Once a child who is suspected of having a disability is referred, states must evaluate the child in accordance with applicable IDEA requirements. Figure 1 illustrates the typical process in Early Intervention programs. Infants and toddlers who are still receiving services by about age 2 and a half are evaluated again to determine if they are eligible for services under Part B. Part B (Special Education Services for Children and Youth ages 3 through 21) Under Part B, states and school districts must make FAPE available to all eligible children with disabilities in mandatory age ranges. FAPE includes special education (specially designed instruction) and related services (support services)—such as speech therapy, psychological services, and physical therapy—tailored to their needs based on an individualized education program (IEP). Figure 2 illustrates the typical process for identifying students for special education under Part B. Figure 3 shows the percentage of children served under IDEA by age and state as of fall 2016. Nationally, for each age group, the percentage of children receiving special education services remained relatively stable from 2012 through 2016, changing by less than 1 percentage point. 
Varied State Eligibility Criteria and Challenges Identifying and Evaluating Children May Help Explain Differences in Percentages Served Eligibility Criteria and Identification Processes Vary Across States IDEA requires states to have policies and procedures to ensure that school districts identify, locate, and evaluate all children suspected of having a disability who need special education and related services, regardless of the severity of their disability, but also gives states some latitude in establishing eligibility criteria and defining disability categories. In addition, states have some flexibility to determine their own processes for identifying and evaluating children, provided the state’s procedures are consistent with IDEA requirements. As a result, a child eligible for IDEA services in one state might be ineligible in another. Early Intervention (IDEA Part C) Eligibility criteria. IDEA allows states some flexibility to establish their own definitions of developmental delay (when a child does not reach developmental milestones for certain skills, such as motor or language skills, at the expected times), including the level or severity of the delay. For example, in Maryland, a child must have at least a 25 percent delay in one or more developmental areas to be eligible for Early Intervention services, while in Arizona, a child must demonstrate a 50 percent delay in one or more developmental areas to be eligible. In Massachusetts, Part C lead agency officials we interviewed said that the state had, as IDEA allows, tightened eligibility criteria in 2009 to reduce the number of children eligible for Early Intervention services by narrowing the definition of developmental delay. Officials said that there were no current plans to change the eligibility criteria, but that they would consider tightening eligibility criteria again if the number of eligible children outpaces state fiscal resources for these services. 
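The state-to-state variation in developmental-delay thresholds described above can be sketched as a simple eligibility check. This is a hypothetical illustration, not any state's actual determination process: the 25 and 50 percent thresholds come from the Maryland and Arizona examples in the text, but the function, data shapes, and developmental-area names are assumptions for illustration.

```python
# Hypothetical sketch of applying a state's developmental-delay threshold.
# Thresholds for Maryland (25%) and Arizona (50%) are from the report;
# everything else here is illustrative.
STATE_DELAY_THRESHOLDS = {
    "Maryland": 0.25,  # at least a 25 percent delay in one or more areas
    "Arizona": 0.50,   # at least a 50 percent delay in one or more areas
}

def meets_delay_criterion(state, area_delays):
    """Return True if any developmental area meets the state's threshold.

    area_delays maps a developmental area (e.g. "communication") to the
    measured delay as a fraction of expected development for the child's age.
    """
    threshold = STATE_DELAY_THRESHOLDS[state]
    return any(delay >= threshold for delay in area_delays.values())

# The same child, with a 30 percent communication delay, would be eligible
# under the lower threshold but not the higher one:
delays = {"communication": 0.30, "motor": 0.10}
print(meets_delay_criterion("Maryland", delays))  # True
print(meets_delay_criterion("Arizona", delays))   # False
```

The point of the sketch is the one illustrated in the report: because each state sets its own threshold, an identical delay profile can qualify a child in one state and not in another.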
Part C of IDEA also allows but does not require states to provide Part C services to at-risk infants and toddlers. States that choose to provide services to at-risk children may use IDEA risk factors to determine eligibility, such as low birth weight or history of abuse and neglect, or they may develop their own list of risk factors. For example, Massachusetts developed its own at-risk criterion for eligibility, which requires the presence of four or more defined child and family factors, including biological, medical, and trauma-related factors. As of 2018, seven states or territories were serving at-risk infants and toddlers, according to an Education official. Early Intervention process. The processes states use to deliver Part C Early Intervention programs can vary in a number of ways. First, the types of agencies designated as the Part C Lead Agency vary from state to state; these lead agencies are responsible for administering and monitoring Early Intervention programs in their states. For example, Iowa’s State Educational Agency (SEA) administers both its Parts C and B programs; Massachusetts and New York administer their Parts C and B programs through separate agencies; and, Colorado shares these responsibilities between two agencies. Second, the extent to which lead agencies directly provide Early Intervention programs, including locating and evaluating children, or do so through contractors varies. For example, both Colorado and Iowa administer their Early Intervention programs directly, while Massachusetts and New York contract with private entities to do so. In Massachusetts, early childhood officials said that they contract with 31 different vendors that operate 60 Early Intervention programs throughout the state. In addition to providing Early Intervention services, these programs are responsible for locating and evaluating children, according to the early intervention officials. 
Those officials also said that each of these individual programs have unique relationships with referral sources, which can affect the likelihood that the sources will make referrals to a given program. Regardless of the type of entity responsible for Early Intervention programs, having strong relationships with referral sources is important, according to early childhood officials in all four of the states we visited. Otherwise, according to these officials, some children who are likely to be eligible for Early Intervention services may not be identified or evaluated for needed services. In Colorado, where Early Intervention responsibilities are shared between the Part C lead agency and the SEA, state officials said that this arrangement can make it difficult to ensure a seamless process and can cause delays between evaluation and services. They said that this can result in incorrect identification or services because they do not have control over the evaluations—responsibility for evaluations is assigned to the Part B agency. Part C officials also said this can cause confusion for families as they are moved between agencies. Relatedly, some infants and toddlers may not be identified for Early Intervention services because of the challenges of sharing data between state agencies when more than one agency is responsible for providing special education to children. In three of the four selected states we visited, responsibility for special education services for children was shared by more than one agency and officials in all three states told us that difficulties in sharing Early Intervention program data could hamper efforts to identify potentially eligible children for special education services. Officials in one of the states said that sharing data could allow them to identify children being provided school-aged special education services that had not received Early Intervention services. 
The officials said that if commonalities were found among these children, it could help them find similar children and ensure they receive Early Intervention services in the future. School-Age (IDEA Part B) Eligibility criteria. In practice, IDEA Part B's disability definitions provide minimum standards that all states must meet. According to Education officials, IDEA allows states the flexibility to adopt more expansive definitions of disabilities than those provided in the IDEA statute and regulation, provided that the state definition would not exclude children who would be covered by the IDEA definition. For example, in New York an intellectual disability is defined as "significantly subaverage general intellectual functioning … that adversely affects a student's educational performance," while in Massachusetts an intellectual impairment is defined as occurring when "the permanent capacity for performing cognitive tasks, functions, or problem solving is significantly limited or impaired and is exhibited by … a slower rate of learning." Also, states must establish their own eligibility criteria for determining the presence of a Specific Learning Disability (SLD)—a broad category of disorders related to understanding and using language. IDEA also requires that states allow the use of research-based procedures in establishing the presence of an SLD, but does not define the specific procedures to be used. Identification process. IDEA requires all states to have Child Find policies and procedures in place, and requires a practical method for determining which children with disabilities are currently receiving needed special education and related services, but does not specify the exact method to be used.
In all four of the states we visited, school district officials we interviewed said that the schools in their respective districts were using the same type of approach as part of the Child Find identification process, but that some school districts were in different stages of implementation or that the approach was being used differently by schools within the same districts. Officials in one school district in New York said that, as part of their approach, there was a concerted effort to use student data to make decisions about intervention levels and special education evaluation decisions, while a school district official in Massachusetts said that the district had placed a greater emphasis on improving classroom instruction as a means to reduce the need for special education services rather than on intervention systems used for identifying and making decisions about potentially eligible children. Officials of school districts in two of the states we visited told us that they are in the midst of revising their identification processes to increase accuracy and consistency across the schools in their districts. Officials in one of those districts said that differences in the processes schools used resulted in variations in how the special education identification process worked in each of the schools. State and Local Officials Said Challenges Identifying and Evaluating Children Who May Be Eligible for Special Education Services May Lead to Differences in Who Is Served Appropriately identifying and evaluating children who may be eligible for special education services can be difficult, according to advocates, subject matter specialists, and state and local officials we interviewed. Representatives of two national special education advocacy organizations and special education subject matter specialists agreed that it may be difficult to identify disabilities and that differences in school district or in school special education processes can add to this challenge. 
Challenges to Early Childhood Identification and Evaluation (IDEA Part C) Early Intervention services are intended to enhance the development of infants and toddlers with disabilities, minimize developmental delay, and reduce the need for special education later in life. However, officials we interviewed at state agencies in the four states we visited— Massachusetts, Colorado, New York, and Iowa—said that because of challenges in identifying and evaluating children, some infants and toddlers who are eligible and would benefit from Early Intervention services do not receive them. These challenges include navigating referral processes, obtaining parental consent, and dealing with staffing limitations. State early childhood officials and subject matter specialists we interviewed said it can be difficult to secure a parental or physician referral, which can cause delays in evaluating children and may lead to some infants and toddlers not being provided Early Intervention services. In all four states we visited, officials noted that some parents or physicians did not make referrals because they did not understand the referral process. State officials in Iowa expressed concern that some doctors may take a “wait-and-see” approach instead of referring an infant or toddler for evaluation when indications first arise. Early childhood officials in Colorado as well as Early Intervention subject matter specialists we spoke to said that physicians may also choose not to refer patients because they (1) cannot guarantee families that their children will ultimately receive services, (2) find the referral process difficult, or (3) receive little feedback about whether their referrals ultimately lead to children getting Early Intervention services. Before an infant or toddler can be evaluated for Early Intervention services, the parent(s) must give consent. 
In Massachusetts and Colorado, state early childhood officials said that parents sometimes do not provide consent for an evaluation, which can delay or even prevent the delivery of needed services. Officials from these states cited various reasons parents might withhold consent, such as opting to wait and see if the child’s problems are resolved over time. State early childhood officials in Massachusetts also said that parents will sometimes refuse to provide consent for evaluation due to a lack of awareness of Early Intervention services or the Early Intervention process. To better address this, officials said that they are working collaboratively with state early education and care providers to inform parents about these issues. Massachusetts officials stated that parents may mistrust government agencies or associate Early Intervention services or providers with child protective services agencies and mistakenly think they are being investigated. Insufficient personnel with the right qualifications to conduct evaluations is another reason infants and toddlers may not be consistently identified and evaluated, particularly in certain types of locations. Officials from lead agencies in Massachusetts, Colorado, New York, and regional education officials in Iowa, noted that it was difficult to find enough Early Intervention personnel with appropriate expertise in low population density areas which can complicate the process of identifying and evaluating children. Officials in Massachusetts noted challenges hiring staff that reflect the communities they serve and in hiring for specific disciplines, such as occupational and physical therapists. In addition, officials in New York said that they sometimes face staffing difficulties when children are located in areas with high crime rates. 
Challenges to Preschool-Age, School-Age, and Young Adult Identification and Evaluation (IDEA Part B) State and local officials as well as special education advocacy organizations said identifying and evaluating students for Part B special education services can be complicated by many factors, which may result in some students inappropriately being determined eligible or ineligible for services. These factors include confusion over IDEA requirements, challenges implementing Response to Intervention (RTI), a child’s lack of English proficiency, the difficulty of detecting certain types of disabilities, or the Part C to Part B transition. School district officials in Massachusetts said that confusion about IDEA requirements is common. For example, a school district official from that state told us that general education staff do not always understand when special education services are appropriate, versus when other options may meet students’ needs, such as Response to Intervention (RTI) or other supports. (See sidebar for more information about RTI.) Officials in another school district in the same state said there was confusion over and little consistency in the eligibility decisions made for special education and other supports. Additionally, officials in that district said that the expertise level among the decision makers varies and can affect eligibility decisions. Response to Intervention For those students who may need additional academic and behavioral supports to succeed in a general education environment, schools may choose to implement a multi-tiered system of supports, such as Response to Intervention (RTI). Regulations implementing the 2004 amendments to the IDEA include a provision mandating that states allow, as part of their criteria for determining whether a child has a Specific Learning Disability (SLD), the use of a process based on the child’s response to scientific, research-based intervention. See 34 C.F.R. § 300.307(a)(2). 
RTI is a school-wide approach that attempts to address the needs of all students, including struggling learners and students with disabilities, and integrates assessments and interventions to maximize student achievement. Key characteristics of RTI are: (1) students receive high-quality research-based instruction in the general education setting; (2) schools continually monitor and document student performance; (3) schools screen all students for academic and behavioral problems; and (4) schools provide multiple levels (tiers) of instruction that are progressively more intense, based on the student's response to instruction. Children who do not respond to interventions are to be referred for evaluation to determine eligibility for special education and related services. School district officials in all of the states we visited and representatives from various advocacy organizations said that there were challenges related to implementing RTI. Representatives from advocacy organizations in all four states we visited cited concerns with school RTI practices that may delay student evaluations or contribute to incorrect eligibility determinations. Advocates in Massachusetts told us that some school districts are more likely than others to put students suspected of a disability through the RTI process for extended periods of time before evaluating them. Further, advocates said using RTI to delay or deny evaluations occurs more frequently at the elementary level and for students with specific types of disabilities, such as mental health and social or emotional disabilities. Advocates also said that challenges with RTI can depend on implementation, the type of disability a student has, the quality and quantity of data gathered on students, and the amount of support provided for the process. In all of the states we visited, school district officials cited efforts to address issues with RTI practices. For example, school district officials in all four states noted that training related to RTI was being provided to their schools.
In Massachusetts, New York, and Iowa, school district officials cited recent initiatives specifically aimed at strengthening and implementing the RTI process in schools, such as by integrating social- emotional and behavioral components in RTI and better using student- level data to improve eligibility determinations. In one district, officials specifically noted that efforts to improve their schools’ RTI processes and core curriculum had reduced the number of special education students in their district. According to Education’s 2016-17 school year data, 73 percent of public school districts in the nation had English Learner students; nationwide, English Learner students comprise about 10 percent of public school students, an increase of almost 3 percent since 2010. School district officials we interviewed in all four states we visited described inherent challenges in properly identifying and evaluating English Learner students for special education disabilities. In Massachusetts and New York, school district officials we interviewed explained that they do not always have staff with the necessary expertise to perform evaluations in a child’s first language, which makes it more difficult to determine if a child’s learning difficulties are caused by a disability or by language proficiency issues. State education officials in New York told us that they are concerned about identification issues related to English Learner students, noting that over 200 languages are spoken by their students and about 12 percent of their students with disabilities were also English Learners in 2017-18. In the same state, officials in one school district said that over 100 different languages are spoken by their students and that it was a challenge to properly identify and evaluate them. 
Representatives of special education advocacy organizations in two states we visited—Massachusetts and New York—made similar observations, noting that English Learner students were at risk of being both over identified and under identified. For example, advocates we interviewed in Massachusetts said that under identification can occur when school districts do not communicate with parents in their home language and, as a result, the parents do not understand how to engage with the special education process. Advocates in both states told us that over and under identification may also occur if the lack of language proficiency is mistaken for a disability or if a disability is mistaken for language learning issues. Education and the Department of Justice have issued guidance to assist schools in meeting their obligations under federal law to ensure that English Learner students who may be eligible for services under IDEA are located, identified, and evaluated for special education services in a timely manner. This guidance instructs schools to consider the English language proficiency of the students appropriately so that they are not identified as students with disabilities because of their limited English language proficiency. Local officials we interviewed in four states said that some disabilities, such as those related to mental health or behavioral disorders, can be difficult to identify and may go undiagnosed. These officials noted that behavioral disabilities can be particularly difficult to correctly identify because they sometimes affect academic performance or behavior in more subtle ways. Some school district officials said they may not have the right tools or staff to identify these students. For example, officials in one school district in Colorado stated that a commonly used disability identification process on its own was not effective for students with mental health and behavioral disabilities. 
School district officials we spoke to in Massachusetts and Iowa noted that they often struggle to employ staff with the appropriate expertise to address mental health or behavioral issues and that there are fewer resources for schools to use in these areas. Part C to Part B transition Another area of confusion may arise when children transition from Part C services to Part B services, at about age 3. School district officials in the four states we visited said that they identify a significant number of their districts’ school-aged special education students through referrals from the state’s Early Intervention programs during the transition process. State education officials in Massachusetts indicated that the majority of children referred from the early childhood programs for Part B services are not found eligible for school-aged services, which may indicate a lack of a common understanding of the Part B eligibility criteria as the early childhood programs are required to refer the children they think could be eligible for those services. Education and Selected States Reported Monitoring Child Find Implementation through Data Collection and Supporting It through Technical Assistance Education Reported Monitoring State Implementation through Data Reporting and Supporting States with Technical Assistance and Information Education’s Monitoring of State Implementation of Child Find Education’s monitoring of state efforts to implement Child Find requirements is part of a broad framework—known as Results Driven Accountability (RDA)—the department uses to monitor certain aspects of IDEA implementation. Education’s monitoring activities specific to Child Find are based on data and information that states submit annually, as required by IDEA and as part of the RDA process. 
Because IDEA gives states some discretion in how to meet Child Find requirements, according to Education officials, it focuses on ensuring states have policies, procedures, and systems in place for monitoring local school districts’ special education programs, including their Child Find activities. To monitor state Child Find activities, Education relies, in part, on four indicators specific to the Child Find requirements and requires states to report data on them annually in the State Performance Plan/Annual Performance Report. Three of the indicators pertain to Part C Early Intervention programs and one pertains to Part B. Two Part C Child Find indicators compare the numbers of children served to two data points—the national Part C average (as a percentage) as well as the percentage Education would expect a state to serve based on the state’s population. Education requires states to report these Part C data for two subsets of children—birth to 1 year and birth through 3 years. Education has encouraged states whose Part C enrollment is significantly lower than the national average or below expected levels based on the state’s population, to examine compliance with related Part C requirements. The third Part C Child Find indicator measures state compliance with the 45-day timeline. For this indicator states must report on the number and percentage of children referred to Part C whose evaluations, assessments, and initial individualized family service plan meetings were held within 45 days of referral. The Part B indicator measures the percent of children who were evaluated within 60 days of receiving parental consent for initial evaluation. This indicator is a compliance indicator for which states must establish a target of 100 percent. 
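The two timeline indicators described above reduce to a straightforward percentage calculation: the share of children whose milestone was met within the allowed window (45 days after referral for Part C; 60 days after parental consent for Part B). The sketch below illustrates only that arithmetic, assuming a simple list of start and completion dates; actual state reporting follows Education's prescribed formats and definitions.

```python
from datetime import date

def timeline_compliance(records, max_days):
    """Percent of children whose milestone was met within max_days of the
    start date (45 days for the Part C indicator, 60 for Part B)."""
    within = sum(1 for start, met in records if (met - start).days <= max_days)
    return 100.0 * within / len(records)

# Hypothetical Part C records: (referral date, initial IFSP meeting date)
part_c = [
    (date(2019, 1, 2), date(2019, 2, 10)),  # 39 days: within timeline
    (date(2019, 1, 5), date(2019, 3, 1)),   # 55 days: late
    (date(2019, 2, 1), date(2019, 3, 15)),  # 42 days: within timeline
]
print(f"{timeline_compliance(part_c, 45):.1f}%")  # 66.7%
```

For the Part B indicator, the same function would be called with `max_days=60` on (consent date, evaluation date) pairs; since the Part B indicator is a compliance indicator, the target against which the result is judged is 100 percent.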
According to Education officials, the department developed these Parts C and B indicators in response to requirements in the 2004 IDEA reauthorization, which directed the Secretary of Education to monitor the states, and require each state to monitor local educational agencies located in the state or, as applicable, the early intervention providers located in the state, using quantifiable indicators in specific priority areas (including Child Find), and using such qualitative indicators as are needed to adequately measure performance in those areas. In developing the indicators, Education officials told us that the department sought to strike a balance between the statutory requirement that they be quantifiable and the inherent challenges in knowing how many children should be identified, evaluated, and found eligible—at the state level or in individual school districts. Education officials said that states and school districts are in a much better position to estimate how many children have disabilities and could potentially be found eligible for special education and related services because of their disability. Education officials told us they consulted internal stakeholders, states, school districts, and other special education experts to develop possible quantifiable measures given the inherent challenges in doing so. In addition to the Child Find indicator data submitted annually, under Part B, states provide other information related to Child Find as part of their annual data reporting to Education and the public. These data include the number and percentage of children with disabilities, by race, ethnicity, English Learner status, gender, and disability category, who receive a free appropriate public education; participate in general education; are placed in separate classes, schools, or residential facilities; receive Early Intervention services; and are from birth through age 2 and no longer receiving Early Intervention services.
States are also required to report the number and percentage of infants and toddlers, by race and ethnicity, who are at risk of having substantial developmental delays and who are receiving Early Intervention services. Additionally, Education may receive information about states’ Child Find activities in states’ annual reports as part of the description of IDEA oversight policies and procedures; in explanations of any actions taken in response to Education’s finding of noncompliance with Child Find indicators in prior years; or in the comprehensive multi-year improvement plan Education requires as part of its RDA framework.

Education Supports States in Implementing Child Find Requirements

Education supports states’ implementation of Child Find in a variety of ways, including a network of technical assistance centers, written guidance, and direct assistance from Education staff. The Technical Assistance and Dissemination (TA&D) program is the primary way Education provides educators, administrators, service providers, and parents with information regarding IDEA. This program assists state and local administrators on a range of topics, including clarifying Child Find obligations, professional development for staff and administrators on various aspects of Child Find, and federal accountability requirements. Technical assistance offerings include training on data collection and Early Intervention issues for various audiences such as teachers, administrators, and special education service providers. Officials in each of the states we visited said they had used Education’s technical assistance. In addition to the TA&D program, Education has established six centers that specifically support states in the annual data collection process. Education provides written guidance to states through documents such as Dear Colleague Letters, Frequently Asked Questions, and Questions and Answers. 
These documents clarify provisions of Child Find and other IDEA requirements as well as respond to common inquiries from school administrators or the public. The written guidance may also address information gathered during oversight activities and changes in federal law or regulation. Topics Education has addressed in written guidance on Child Find include school districts’ use of RTI and requirements for subgroups of children who may be difficult to find. For example, Education issued a memorandum in 2016 reminding states and districts that (1) RTI processes cannot be used to delay or deny a timely evaluation of a child suspected of having a disability and (2) implementation or completion of RTI is not required prior to evaluating a student for special education services. Officials in Colorado said they found this guidance helpful and issued guidance to their school districts based on Education’s memorandum. Additionally, in 2007 and 2008, Education addressed issues regarding Child Find requirements for certain groups of children, such as those who are homeless or those who are residing in Immigration and Customs Enforcement (ICE) residential facilities. Homeless children, for example, are inherently difficult to identify and evaluate for special education services because they and their families are highly mobile. Education’s guidance reminded states and school districts that their Child Find obligations include these hard-to-find subgroups and directed states to coordinate with emergency shelters and homeless advocacy programs, among others, to help find children suspected of having a disability. Education’s website notes that each state is assigned a customer service representative, a Part B contact, a Part C contact, and a team leader. Education officials we spoke to told us that staff hold monthly check-in meetings with state officials to provide information and discuss issues of concern. 
They also said that issues needing clarification sometimes arise during these check-in meetings. For example, they said that in a meeting with state directors they identified a lack of clarity around some English Learner issues. As a result, Education developed guidance to explain Child Find obligations regarding English Learner students as well as other obligations under IDEA. Education also has a customer service unit available to assist states with questions about IDEA, special education, and related services. State officials in all four states we visited told us they had good relationships with Education IDEA monitoring staff and rely on them to learn about available technical assistance and other resources. Officials we interviewed in one state said their Education contacts were instrumental in helping them improve their programs.

In its 2007 and 2008 guidance, Education determined that states and school districts are not obligated to conduct Child Find activities in ICE residential facilities, although Education stated that an ICE facility and the state or local school district could enter into a voluntary agreement to provide Child Find or other educational services. U.S. Department of Education, Office of Special Education and Rehabilitative Services, Letter to David Anderson, General Counsel, Texas Education Agency (Dec. 21, 2007); U.S. Department of Education, Office of Special Education and Rehabilitative Services, Letter to David Anderson, General Counsel, Texas Education Agency (Apr. 22, 2008). Education officials told us that if these children are released from ICE facilities into the care of a sponsor to await their immigration hearings, they do have a right under federal law to enroll in public elementary and secondary schools and to receive educational services, including special education services, if found eligible. 
Selected States Reported Monitoring Local Implementation through Audits and Data Reporting and Support Local Districts with Technical Assistance and Professional Development

Data Collection and Regularly Timed Audits

States must monitor their local school districts’ implementation of IDEA requirements. As part of the State Performance Plan/Annual Performance Report, each state must establish measurable and rigorous targets for the indicators, including Child Find, and must analyze the performance of each local school district in the state in implementing the requirements of Part B or, as applicable, each Early Intervention provider located in the state in implementing the requirements of Part C. Data analysis and regular audits are the primary means states use to monitor local school districts, according to officials we interviewed in each of the four states we visited. The Part C lead agencies in the four states we visited reported monitoring local implementation of Early Intervention programs through indicator data or on-site visits. In their State Performance Plan/Annual Performance Reports for federal fiscal year 2016, the states we visited reported various monitoring activities. For example:

Colorado gathers data from an online system to monitor local programs and analyze performance. In addition to desk audits of local service providers, Colorado’s lead agency does on-site monitoring, selecting local agencies for monitoring visits based on its annual priority areas, or focusing on a cross-section of programs based on size, region, and program structure. Colorado’s annual priority areas have included topics such as increasing public awareness regarding Early Intervention services by providing developmental information to parents of newborns in the hospital and ensuring that the transdisciplinary team members who are responsible for evaluating infants and toddlers are effectively communicating. 
Massachusetts’ local Early Intervention programs complete and submit to the state lead agency annual reports and self-assessments based on federal indicators. Additionally, the Part C lead agency conducts on-site monitoring of selected sites on a cyclical basis, and focused monitoring to examine specific aspects of local Early Intervention programs.

New York conducts comprehensive on-site monitoring of municipalities that administer local Early Intervention programs and approved providers who perform Early Intervention services, including reviewing written policies and procedures regarding Early Intervention processes as well as examining a sample of client records at each service location.

Iowa monitors all regional grantees on an annual basis. The process includes reviews of parent surveys and family outcome data, among other things. When performance or compliance issues are identified, the lead agency conducts desk audits and data verification checks.

Although Part B monitoring activities in the four selected states are similar, they reflect the structure, policies, and procedures of individual states. For example, Iowa officials said they monitor both Area Education Agencies and local school districts through desk audits and site visits. Officials told us that the SEA has developed (1) a process to evaluate the performance of the regional agencies regarding the provision of special education services and their oversight responsibilities for the local school districts, and (2) a separate process that examines the performance of school districts with regard to IDEA implementation. The State Performance Plan/Annual Performance Reports for federal fiscal year 2016 for the remaining three states we visited note the following monitoring activities:

Colorado collects data and reviews the results of school district self-audits from each of its districts. 
Massachusetts reported reviewing indicator data and instituting a new monitoring process called Tiered Focus Monitoring. In the first year of the monitoring cycle, all local school districts are to conduct self-assessments on specific criteria related to the special education identification processes and other topics. The self-assessments inform the SEA’s on-site monitoring in the second year. In the third year, school districts are to continue internal monitoring, and in the fourth year, they complete a self-assessment regarding special education and legal requirements.

New York reported reviewing data and using school district self-assessments, desk audits, and on-site monitoring. According to the annual report, the selection of sites for on-site monitoring depends on a variety of information, including performance on indicator targets.

Professional Development and Technical Assistance for Local School Districts

IDEA requires states and lead agencies to provide professional development and technical assistance to local school districts. The State Performance Plan/Annual Performance Reports for federal fiscal year 2016 for each of the four states we visited described professional development activities provided on topics related to Part C Early Intervention and Part B programs. For Part C, states reported that they provided the following professional development activities, among others:

Colorado provided training on data management to ensure valid and reliable data for monitoring purposes.

Iowa provided service coordination training, which provides knowledge and skills to understand Early Intervention eligibility, the IDEA, and Early Intervention services.

Massachusetts held training sessions for Early Intervention service providers regarding Early Intervention transitions to support children who are exiting Early Intervention services or are referred for Part B services. 
Early Intervention service providers were also able to receive training concerning functional assessments.

New York employed contractors to provide training on best practices for delivering Early Intervention services and training about providing those services in a child’s natural environments. Additionally, they provided training to primary referral sources.

For Part B, the states reported that they provided the following professional development activities, among others:

Colorado provided professional development on topics that were identified by teachers. The SEA surveys teachers, providers, and Special Education Directors annually to determine professional development topics. Officials we interviewed in selected school districts told us that they had received training on Child Find obligations and classroom interventions.

Iowa requires each district to develop professional development plans that support the needs of district staff responsible for instruction. District officials said they have provided training concerning intervention strategies and Child Find responsibilities.

Massachusetts has provided training in social emotional learning and behavioral interventions.

New York provides ongoing statewide training regarding classroom and behavioral interventions, as well as a program for school principals covering special education law and regulations and the principal’s responsibilities for implementing IDEA.

Officials we interviewed in each of the four states we visited told us that they offer a range of technical assistance, including written guidance, webinars, meetings/conferences, telephone assistance, and one-on-one training to support local school districts and schools in implementing Child Find requirements. For example, New York instituted a Blueprint for Improved Results for Students with Disabilities. 
This Blueprint establishes expectations to improve instruction and results for students with disabilities, which in turn informs the state’s technical assistance networks. In each of the four states, officials reported (1) offering targeted assistance where there were concerns related to performance or results of Part B programs and (2) examining results and compliance data to identify areas of concern and potential recipients for targeted assistance. For example, Massachusetts reported in its annual report that it had provided one-on-one technical assistance to local school districts where there were performance concerns, while New York reported that its technical assistance improvement specialists review low-performing schools and help to develop tools for improvement. Similarly, the Part C lead agency officials in all of the states we visited told us they provided training and technical assistance to Early Intervention programs. These states offered assistance in a variety of ways including written guidance, information provided via phone or email, and formal training sessions. Officials from Colorado and Iowa reported holding monthly technical assistance calls, while officials from Massachusetts reported holding monthly webinars for local Early Intervention providers. In its annual report, Iowa reported providing training on using technology to provide Early Intervention services, while New York reported offering training on best practices in identifying and evaluating infants and toddlers. Each of the four states we visited reported offering targeted assistance to schools where monitoring efforts identified concerns or compliance issues. The targeted assistance is intended to improve performance in the areas identified. We provided a draft of this report to Education for review and comment. Education provided technical comments, which we incorporated as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

The objectives of this study were to examine (1) factors that may account for differences in the percentage of children receiving special education services, and (2) how the U.S. Department of Education (Education) and selected states monitor and support Child Find requirements. To conduct this work we (1) reviewed federal special education data from school years 2011 through 2016 (the most recent available at the time of our review); (2) reviewed relevant Department information, such as Dear Colleague Letters, Frequently Asked Questions, and Questions and Answers; federal laws; regulations and policies; and selected state laws; (3) interviewed Education officials; (4) interviewed officials from state agencies responsible for administering Parts C and B of the Individuals with Disabilities Education Act (IDEA) special education programs in four states (Colorado, Iowa, Massachusetts, and New York) and fifteen school districts within those states; and (5) interviewed representatives from special education advocacy organizations that represent parents and families of individuals with disabilities and subject matter specialists to discuss issues related to Child Find. 
The following sections contain detailed information about the scope and methodology for this report.

Review of Federal Special Education Data

To determine the differences in the percentage of children receiving special education services across states, we used Education’s Annual Reports to Congress on the Implementation of the Individuals with Disabilities Education Act (IDEA) to review national and state level special education data. We used the most recent five reports, 2014 through 2018, which reported on data for school years 2012 through 2016, to review the percentages of children that were receiving special education services under IDEA Part C and Part B during school years 2012 through 2016 nationally and by state. These data, known as Section 618 data, are self-reported by school districts. We focused our review primarily on data regarding the percentage of children served under IDEA Part C (ages 0-2), Part B (ages 3-5), and Part B (ages 6-21), nationally and by state during school years 2012 through 2016. We determined that the data we used from the Annual Reports to Congress on the Implementation of IDEA were sufficiently reliable for the purposes of the report by reviewing technical documentation and interviewing Education officials to determine what mechanisms are in place to ensure data quality.

Review of Agency Documentation, Federal Laws, Regulations, Policies, and Selected State Laws and Regulations and Interviews of Education Officials

To obtain information on the factors that may account for variation in the percentage of children receiving special education services and to examine how Education and selected states support and monitor Child Find requirements, we reviewed Education documents, such as Dear Colleague Letters, Frequently Asked Questions, and Questions and Answers. We also reviewed Education’s recent annual reports to Congress and documents containing guidance to states on required annual data submissions. 
Additionally, we reviewed relevant federal laws, regulations, and policies, and selected state laws and regulations. With both Education and state agencies responsible for supporting and monitoring Child Find requirements, we interviewed officials about the agencies’ responsibilities with respect to IDEA, as well as the processes the agencies put in place to monitor implementation of those requirements. We also discussed each agency’s guidance and support to school districts on these issues. In addition, we collected and reviewed relevant agency procedures and guidance documents.

Site Visits and Associated Interviews with Officials at State Agencies and School Districts

To obtain information on the factors that may account for differences among selected states and school districts in the percentage of children receiving special education services and how selected states support and monitor Child Find requirements, we conducted site visits in a non-generalizable sample of four states and 15 school districts. We selected states primarily for diversity in (1) the percentage of special education students; (2) changes in the percentage of special education students over a 5-year period; (3) geography; and (4) the agency responsible for state Early Intervention programs (i.e., the state educational agency or another state agency). We used data from the National Center for Education Statistics (NCES), Common Core of Data (CCD) for the 5-year period, 2011-2015 (the most recent available data at the time of our selection) to identify the percentage of special education students in each state as well as the change in the percentage of special education students in each state over the 5-year period. We determined that the data used were sufficiently reliable for the purposes of the report by reviewing technical documentation and interviewing Education officials to determine what mechanisms are in place to ensure data quality. 
In each state, we interviewed officials from the state educational agency, the agency responsible for Part B special education, as well as officials from the state agency responsible for Part C special education. We also interviewed officials from special education advocacy organizations that represent parents and families of individuals with disabilities. We selected school districts primarily for diversity of size. We used state department of education enrollment data for 2017-2018 to sort school districts based on the size of the student population. We selected three school districts in Colorado, five in Iowa, three in Massachusetts, and four in New York. In each district, we interviewed district-level officials involved in special education and school Child Find processes. These officials included assistant superintendents, administrators, and directors of special education. While not generalizable, our interviews provided illustrative examples of a range of state and district Child Find processes, and the differences and challenges states and school districts face.

Interviews with Special Education Advocates and Special Education Subject Matter Specialists

To obtain information on the factors that may account for differences among states and school districts in the percentage of children receiving special education services and processes that states and school districts may use in implementing their Child Find requirements, we interviewed representatives from eight special education advocacy organizations that represent parents and families of individuals with disabilities and four special education subject matter specialists to discuss issues related to Child Find. Some of the issues we discussed included Early Intervention eligibility, assessment processes of students including Response to Intervention, and other topics to get a better sense of Child Find processes and issues. 
We conducted this performance audit from August 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Bill MacBlane (Assistant Director), Mindy Bowman (Analyst-in-Charge), Aaron Karty, Deborah Signer, Phillip Steinberg, and Shelia Thorpe made key contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Shilpa Grover, Serena Lo, Art Merriam, Sheila R. McCoy, Corinna Nicolaou, James Rebbe, Brian Schwartz, Daren Sweeney, and Kathleen van Gelder.
Why GAO Did This Study

About 13 percent of children aged 3 through 21 enrolled in public schools received special education services in school year 2015-16, and about 3 percent of children from birth through age 2 received special education services. The percentage of the population served under IDEA varies across states. For example, in fall 2016, the percentages of the population aged 6 through 21 served in individual states ranged from 6.4 percent to 15.1 percent. Concerns about the difficulties identifying and evaluating children for special education have been raised by the media, experts, and special education advocates. GAO was asked to examine how states implement Child Find and how Education monitors it. This report examines (1) factors that may account for differences in the percentage of children receiving special education services across states, and (2) how Education and selected states monitor and support Child Find efforts. GAO reviewed federal special education data, agency documentation, federal laws and regulations, and selected state laws; and interviewed Education officials, officials from four state agencies and 15 school districts in those states (Colorado, Iowa, Massachusetts, and New York), and representatives of organizations that advocate for families of individuals with disabilities as well as special education subject matter specialists. GAO selected the four states based on a variety of factors, including the percentage of special education students.

What GAO Found

Differences in states' eligibility criteria and the difficulty of identifying and evaluating some children suspected of having disabilities may contribute to differences in the percentages of children receiving special education services across states. 
The Individuals with Disabilities Education Act (IDEA), the primary federal special education law, requires states to have policies and procedures in place to ensure that all children with disabilities residing in the state who need special education services are identified, located, and evaluated. These policies and procedures—known as “Child Find”—are generally implemented by local school districts (see fig.). IDEA gives states some latitude in setting eligibility criteria and defining disability categories. In addition, states may determine their own processes for identifying and evaluating children. As a result, a child eligible for services in one state might be ineligible in another. According to advocates, special education subject matter specialists, and state and local officials GAO interviewed, a number of challenges related to correctly identifying and evaluating children suspected of having a disability can affect eligibility decisions. For example, school district officials in all four states GAO visited cited challenges in properly identifying and evaluating English Learner students, as districts do not always have staff who are conversant in a child's first language and skilled in distinguishing language proficiency from disabilities. The Department of Education (Education) monitors and supports Child Find efforts primarily by reviewing states' annual performance data and providing professional development and technical assistance. The four states GAO visited reported monitoring and supporting school districts' efforts in a similar manner to Education's.
Background

Roles and Responsibilities Related to Collecting and Expending Transportation Funds

State reviews and approves FMS purchases, while DOD is responsible for program implementation. DSCA administers the FMS program for DOD, including exercising financial management responsibilities for the FMS trust fund, and DFAS provides DSCA’s accounting services for FMS. Additionally, various other DOD components have responsibilities related to collecting and expending transportation funds, as shown in figure 1.

Regulations and Guidance for the FMS Program

Several DOD publications provide regulations and guidance for the FMS program, including:

DOD Financial Management Regulation (FMR). Managed by the Under Secretary of Defense (Comptroller), the FMR defines financial management requirements for all DOD components, and states that DSCA administers the FMS program and is responsible for monitoring the use of the FMS trust fund. The FMR also states that DOD components should maintain documentation that constitutes a complete audit trail.

Defense Transportation Regulation. Managed by the U.S. Transportation Command (TRANSCOM), the Defense Transportation Regulation defines requirements for the transportation of items within the Defense Transportation System, such as the use of unique identifiers for all shipments and how to use and pay commercial carriers, when applicable.

Security Assistance Management Manual (SAMM). Managed by DSCA, SAMM provides guidance to the DOD components that manage or implement the FMS program.

Life Cycle of FMS Purchases

Foreign partners that purchase items and services through the FMS program may use their own funds or, if provided, U.S. funds, such as grants or loans provided through Foreign Military Financing. In addition, some FMS purchases are made using funds appropriated to DOD, State, or other U.S. government agencies for Building Partner Capacity (BPC) programs. These programs purchase items or services for foreign partners through FMS. 
The FMS process begins when an eligible entity requests information on defense articles or services for purchase. The responsible DOD component then prepares a Letter of Offer and Acceptance (LOA), which is the legal instrument used by the U.S. government to sell defense articles to a foreign country or international organization under authorities provided in the Arms Export Control Act. The LOA itemizes the defense articles or services offered and, when implemented, becomes an official tender by the U.S. government. Signed LOAs are referred to as “FMS cases,” and the individual items or services included for purchase in the FMS case are referred to as “case lines.” Once the LOA is signed, the DOD component responsible for the FMS case then manages the contracting or requisition of the equipment or services specified in the agreement, which are then delivered to the foreign partner. Foreign partners have different options available to them for transporting items they purchase through FMS. Other than when purchasing certain hazardous or sensitive items that must be transported via the Defense Transportation System, foreign partners have the option to arrange for their own transportation of FMS items they purchase—such as using a freight forwarder—for all or part of the transportation needed to reach the final destination. On the other hand, BPC programs use the Defense Transportation System to move all their FMS purchases. When all items have been delivered, all ordered services have been performed, and no new orders exist or are forthcoming, the DOD component responsible for managing the FMS case may mark it as closed.

FMS Transportation and Fee Calculation

DOD most commonly calculates the FMS transportation fee using a percentage rate applied to the price of the item. The percentage rate varies depending on the extent of the U.S. government’s responsibility for transporting the items purchased, as agreed to between DOD and the foreign partner in the LOA. 
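As a rough illustration of the percentage-based calculation described above, the sketch below applies a rate keyed to the agreed delivery terms; the rate table and term codes are invented for illustration and do not reflect DOD's actual rates:

```python
# Hypothetical sketch of the FMS transportation fee calculation.
# DOD applies a percentage rate to the item's price; the rate depends
# on how much of the transportation the U.S. government handles, as
# agreed in the Letter of Offer and Acceptance (LOA). The rates and
# term codes below are invented, not DOD's actual figures.

HYPOTHETICAL_RATES = {
    "full_dts": 0.0375,  # U.S. government transports to final destination
    "partial": 0.0200,   # U.S. government handles part of the movement
    "none": 0.0,         # purchaser arranges its own transportation
}

def transportation_fee(item_price: float, delivery_term: str) -> float:
    """Estimated transportation fee: percentage rate applied to item price."""
    rate = HYPOTHETICAL_RATES[delivery_term]
    return round(item_price * rate, 2)

# Fee is estimated at LOA signing and collected when the item ships.
fee = transportation_fee(1_000_000.00, "full_dts")
```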
DOD first determines the estimated transportation fees for shipping FMS purchases based on the terms agreed upon in the LOA. When an item is shipped, the transportation fee is collected from the FMS purchaser’s account into the FMS transportation accounts.

Payment and Tracking of FMS Shipments

Eight transportation accounts within the FMS trust fund are used to hold transportation fees collected from FMS purchasers’ accounts, and to pay FMS transportation bills. Seven separate accounts hold transportation funds for certain larger BPC programs. These seven BPC accounts allow BPC program transportation fee collections and expenditures to be tracked. In addition, one main account holds transportation funds for all foreign partner purchasers and smaller BPC programs. Individual shipments trigger collections into and expenditures from the FMS transportation accounts. As shown in figure 2, after DOD ships an item and DFAS is notified of that shipment—through a process known as delivery reporting—DFAS moves the amount of the related transportation fee from the FMS country or BPC program account into the main transportation account or corresponding BPC program transportation account and records the amount as a collection. DFAS receives monthly bills that include the costs for FMS transportation, which DFAS pays out of the main transportation account, recording the amount paid as an expenditure. Subsequently, DFAS reviews the transportation bills and associated expenditure transaction data to identify any expenditures associated with the seven BPC programs with dedicated transportation accounts. For any BPC transactions identified, DFAS reimburses the main transportation account for the cost of the expenditure by moving funds from the relevant BPC transportation account into the main transportation account through a process DOD refers to as realignment. 
DOD and DSCA Guidance Identifies Controls, but DSCA Lacks Routine Oversight Over DOD Components’ Activities

DOD and DSCA Guidance Identifies Controls for Fees Collected into the FMS Transportation Accounts

DOD and DSCA guidance to DOD components identifies controls over the information used to calculate the fees collected into the FMS transportation accounts. For example, DOD has various codes that identify the percentage rate used to calculate the transportation fee charged to FMS purchasers. DSCA provides guidance to components on how to use those codes, and components are responsible for managing the use of those codes. Both DSCA’s guidance and the FMR require that components maintain documentation, such as documentation of significant events related to delivery transactions and authorized exceptions to normal billing procedures. Additionally, both the FMR and DSCA’s guidance to components identify that components are responsible for submitting delivery reporting within 30 days of completion, which triggers collection of the transportation fee. DSCA’s guidance to components requires components to perform various case reviews and reconciliations, including annual case reviews to verify the accuracy of information, such as the accuracy of the codes applied to case lines, as well as the timeliness of delivery reporting.

DSCA Lacks Routine Oversight of DOD Components’ Annual Case Reviews

DSCA’s guidance to DOD components requires components to review FMS cases at least annually to verify the accuracy of data—including the accuracy of the transportation fee collected and the timeliness of delivery reporting—but DSCA does not have a routine process to oversee those reviews. DSCA’s guidance to components also states that DSCA may request copies of components’ annual case reviews for oversight purposes, and DSCA officials told us that they request copies on an ad hoc basis.
Although DSCA has oversight responsibility over collections into the FMS transportation accounts, DSCA officials said they do not have a standard process for selecting and examining components’ annual case reviews, and do not document their reviews. Federal internal control standards state that management should establish and implement activities to monitor internal control systems and evaluate results, and ensure that activities are performed routinely and consistently. Management may use ongoing monitoring, separate evaluations, or a combination of the two to obtain reasonable assurance of the operating effectiveness of the controls in place. Without routine oversight of components’ annual case reviews—which could include a process to select annual case reviews for examination, and guidance on how to perform and document examinations—DSCA increases the risk that components may not complete such reviews consistent with DSCA’s guidance, thereby increasing the likelihood that fees collected may be inaccurate. Additionally, according to DOD, the FMS program is intended to operate on a “no profit, no loss” basis, and inaccuracies in the collection of the FMS transportation fee could lead to over- or under- collecting fees from an FMS purchaser. DSCA officials told us that they have begun to work on an initiative to analyze a sample of annual case reviews on a routine basis. According to DSCA officials, the process will include reviewing and documenting cases based on certain events and is expected to be implemented in April 2020. The successful implementation of this initiative may help DSCA ensure that components’ annual case reviews comply with DSCA guidance. However, until DSCA fully implements this initiative, the risk remains that components may not complete annual case reviews consistent with DSCA’s guidance. 
DSCA Lacks Oversight of DOD Components’ Delivery Reporting

DSCA does not have a process to monitor the timeliness of DOD components’ delivery reporting of shipments of items, which triggers collections into the FMS transportation accounts. According to DOD regulations, components are required to submit delivery reporting in their systems within 30 days of shipment. Although DSCA has financial responsibility over collections into the FMS transportation accounts, DSCA officials told us that they do not monitor components’ compliance with this regulation. Federal internal control standards state that management should design internal control activities to achieve control objectives and respond to risks, ensure the accurate and timely recording of transactions, and evaluate and document the results of ongoing monitoring activities. Further, the FMR incorporates the federal accounting standards into DOD accounting and financial reporting policy. The federal accounting standards state that revenue transactions—such as the FMS transportation fee—should be recorded when services are provided. DSCA officials told us that they rely on DFAS to monitor components’ delivery reporting. During the course of our review, DFAS officials told us that they began providing a report to DSCA and other components that detailed information on each component’s delivery reporting, which was based on a prior FMR requirement. Both DSCA and DFAS officials told us that they are working on an agreement that would formalize DFAS’s reporting, but had not finalized this agreement as of February 2020. However, DFAS officials told us that they have not followed up with components to verify the accuracy of the delivery reporting, and are not required to do so. While DSCA officials told us that DFAS’s reporting may help provide transparency, without a process to oversee that reporting, DSCA’s lack of monitoring of components’ delivery reporting raises the risk that such reporting may not be timely.
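The 30-day timeliness rule could be monitored with a check like the following sketch. The record field names (`shipped_on`, `reported_on`) are illustrative assumptions, not fields from any DOD system.

```python
# Sketch of a timeliness check for delivery reporting: DOD regulations require
# components to report deliveries within 30 days of shipment. Field names are
# hypothetical.
from datetime import date

REPORTING_DEADLINE_DAYS = 30

def late_delivery_reports(records: list[dict]) -> list[dict]:
    """Flag records whose delivery report lagged the shipment by more than 30 days."""
    return [
        r for r in records
        if (r["reported_on"] - r["shipped_on"]).days > REPORTING_DEADLINE_DAYS
    ]
```

A recurring report built on a check like this would let an oversight body quantify how often, and by how much, components miss the reporting deadline that triggers fee collection.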
As timeliness is an element of accuracy, untimely component delivery reporting may result in the inaccurate collection of related transportation fees into the FMS transportation accounts. A documented process to review reporting and monitor the timeliness of components’ delivery reporting—which could include DSCA’s review of DFAS’s reporting to identify and follow up on discrepancies—could help reduce the risk that transportation fees may not be collected into the FMS transportation accounts in a timely manner. Further, such oversight could assist components during other required reviews, such as annual case reviews.

DSCA’s Financial Oversight of Expenditures Does Not Provide Reasonable Assurance That Expenditures Are Allowable and Paid from the Correct FMS Transportation Account

DSCA has limited financial oversight of expenditures from the FMS transportation accounts. While DSCA established internal guidance related to monthly reviews of expenditures from the accounts, that guidance lacks procedures to review expenditures and is not fully documented. In addition, DSCA has not provided guidance to DFAS on preparing the reports DSCA uses for its monthly review. Also, DFAS’s internal guidance on reviewing and realigning expenditures is inconsistent and lacks key controls and details, such as procedures to provide reasonable assurance that all transportation expenditures are reviewed. As a result, DSCA’s financial oversight of the FMS transportation accounts is insufficient to provide reasonable assurance that expenditures paid from the FMS transportation accounts are allowable and paid from the correct account, which limits DSCA’s ability to help ensure that relevant BPC program expenditures are paid from the related BPC accounts.
During the course of our audits of the FMS program, DSCA officials told us that they began developing new internal guidance to address financial oversight of expenditures from the FMS transportation accounts, and expect it to be implemented by May 2020. However, until DSCA finalizes and implements that guidance, the risk remains that DSCA may use the FMS transportation accounts to pay for unallowable costs, or pay transportation costs from the incorrect account.

DSCA Established Some Guidance for the Monthly Review of FMS Transportation Expenditures

In fiscal year 2016, DSCA established a Managers’ Internal Control Program to oversee the FMS transportation accounts, according to DSCA officials. This internal guidance identified the risk that DSCA may use the FMS transportation accounts to pay for unallowable costs—such as those not related to FMS transportation and that may be a result of misuse—or that DSCA may pay transportation costs from the incorrect account. To address these risks, the guidance identified procedures for DSCA to review expenditures. As shown in figure 3, the procedures state that DSCA will review expenditures from the FMS transportation accounts on a monthly basis to ensure costs are valid and applied to the proper account, and to identify and correct discrepancies. DSCA officials told us that to perform their monthly review they use reports provided by DFAS, and DFAS officials told us they provide those reports to DSCA on a monthly basis based on internal guidance.
These reports include information on the FMS transportation account balances, and, in addition, DFAS provides supporting documentation that includes:

copies of bills paid from the accounts and detailed analysis of individual transportation expenditures;

an analysis of discrepancies DFAS identified for each transaction; and

financial transactions DFAS performed to reimburse the main transportation account for specific BPC transportation expenditures, through a process DOD refers to as realignment.

Both DSCA and DFAS officials said they use two pieces of information from the reports and analyses:

The transportation account code, which identifies the DOD component responsible for a particular expenditure, and may provide information on the country or program associated with the transportation expenditure.

The transportation control number, which is a unique 17-character code that is associated with a shipment and used throughout the Defense Transportation System for shipment tracking and payment processing. For FMS shipments, the transportation control number includes information that identifies the DOD component and foreign partner, and may be used to tie a particular transportation expenditure to an FMS case.

The transportation account code and the transportation control number are entered by DOD components directly involved in ordering and processing shipments into their individual systems. Figure 4 provides additional details regarding the composition of the transportation control number.

DSCA Lacks Sufficient Internal Guidance for the Monthly Review of FMS Transportation Expenditures

DSCA’s Internal Guidance Lacks Procedures for Conducting the Monthly Review of Expenditures

DSCA’s internal guidance does not contain procedures explaining how DSCA staff should review transportation expenditures.
Federal internal control standards state that management should design control activities to respond to risks, implement activities that address those controls, and identify the information requirements needed to achieve objectives. DSCA’s monthly review of expenditures from the FMS transportation accounts is meant to provide financial oversight of the accounts, and DSCA’s internal guidance establishes that, as part of its monthly review, DSCA should review expenditures to ensure they are allowable and paid from the correct account, and follow up on any discrepancies. However, DSCA’s internal guidance does not explain how to review expenditures, and DSCA officials told us that they do not have internal guidance identifying the data needed to oversee expenditures or explaining how they should evaluate expenditure data, which could include steps such as identifying and correcting discrepancies, such as mismatched, missing, or incomplete entries. To assess transportation expenditure data reviewed by DFAS and DSCA, we analyzed a nongeneralizable sample of expenditure data for the FMS transportation accounts provided by DFAS for the period from May through July 2019. Over that 3-month period, DFAS reported about 6,200 transportation expenditures totaling approximately $21.6 million. Our review of those expenditures identified discrepancies or missing data such as transactions with:

Mismatched DOD component codes. Approximately 19 percent of expenditures we examined—representing around $4 million—had transportation account codes and transportation control numbers identifying different DOD components. According to DFAS officials, if the transportation account code and transportation control number for an expenditure do not identify the same component, the mismatch may be a discrepancy.
For example, a mismatch could indicate that staff at a component entered an incorrect transportation account code, or misapplied a transportation account code, which may result in the payment of a non-FMS expenditure from the FMS transportation account.

Missing or misformatted control numbers. Approximately 3 percent of the expenditure transactions in our sample—representing around $40,000—either lacked transportation control numbers or included transportation control numbers that did not contain 17 characters. Without a valid transportation control number, DSCA may not be able to determine whether an expenditure is allowable or paid from the correct account.

Because DSCA’s internal guidance lacks procedures—including those explaining what expenditure data is needed to perform oversight and how to evaluate that data for discrepancies and address them—DSCA cannot provide reasonable assurance that it appropriately reviews expenditures from the FMS transportation accounts. Without guidance that addresses the risk that unallowable costs may be paid from the transportation account, DSCA raises the risk of misuse of funds in the FMS transportation accounts. Additionally, without guidance that identifies and addresses discrepancies—such as missing transportation control numbers—DSCA raises the risk that transportation expenditures may not be paid from the correct account. DSCA officials told us that they were developing new internal guidance and collaborating with DFAS on an initiative to follow up on discrepancies, and expect both to be implemented by May 2020. However, until DSCA finalizes and implements that guidance, the risk remains that DSCA may use the FMS transportation accounts to pay for unallowable costs, or pay transportation costs from the incorrect account.
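Discrepancy checks of the kind described here can be sketched as follows. The 17-character length check follows the report's description of the transportation control number (TCN); the rule that the first character of the transportation account code (TAC) and the TCN must identify the same DOD component is an illustrative assumption, since the actual code layouts are defined in DOD transportation regulations.

```python
# Sketch of discrepancy checks over expenditure records. Record fields and the
# first-character component rule are illustrative assumptions.
def find_discrepancies(expenditures: list[dict]) -> dict:
    flagged = {"missing_or_misformatted_tcn": [], "mismatched_component": []}
    for exp in expenditures:
        tcn = exp.get("tcn") or ""
        tac = exp.get("tac") or ""
        if len(tcn) != 17:
            # Missing or misformatted: a valid TCN is a 17-character code.
            flagged["missing_or_misformatted_tcn"].append(exp)
        elif tac and tcn[0] != tac[0]:
            # Assumed layout: leading character identifies the DOD component.
            flagged["mismatched_component"].append(exp)
    return flagged
```

Summing the dollar amounts of the flagged records would reproduce the kind of percentage-and-dollar figures reported above for the May through July 2019 sample.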
DSCA’s Internal Guidance Lacks Procedures for Documenting the Monthly Review of Expenditures DSCA’s internal guidance does not identify how DSCA officials should document their monthly review of expenditures. Additionally, DSCA officials confirmed that they do not document their monthly review of expenditures. DSCA officials told us that while DSCA staff conducted regular monthly reviews, DSCA has not issued specific internal guidance explaining how staff should conduct and document these reviews. Federal internal control standards state that management should develop documentation of its internal control system, document internal control activities such as by documenting that activities occurred, and ensure that activities are performed routinely and consistently. Without internal guidance that identifies how staff should perform and document monthly reviews as well as a process to ensure reviews are documented, DSCA cannot provide reasonable assurance that staff perform monthly reviews consistently. DSCA officials told us that their planned internal guidance should address how the monthly review process is conducted, and should be implemented by July 2020. However, until DSCA finalizes and implements that guidance, DSCA will not have guidance on documenting its monthly review of expenditures consistent with federal internal control standards. DSCA Lacks Written Guidance to DFAS on Preparing Monthly Reports DSCA officials have not provided written guidance to DFAS on preparing the reports and analyses DSCA uses for its monthly review, and, as a result, those reports and analyses may be inconsistent and incomplete. Federal internal control standards state that management should design control activities that respond to risks, document internal controls, communicate required information to external parties, and obtain relevant data from external sources based on information requirements. 
DFAS officials confirmed that they do not have written guidance from DSCA regarding how to generate the reports for DSCA, such as what analyses to perform on expenditure data. We found that DFAS’s analyses vary and lack key procedures. For example, our review of the expenditure data DSCA received from DFAS for May through July 2019 showed that DFAS performed certain analyses—such as verifying the validity of the transportation control number—on some transactions, but not on others. Additionally, DSCA did not provide DFAS with a complete list of transportation account codes to use to identify transactions for review. As a result, DSCA’s review of expenditures based on DFAS’s reports—both for allowability, as well as to ensure those transactions are paid from the correct account—excludes some transactions. Because DSCA has not provided written guidance to DFAS on how to generate the reports needed for its monthly review process—including what analysis to perform on expenditure data—or provided DFAS with the necessary transportation account codes, DSCA cannot provide reasonable assurance that all expenditures from the FMS transportation accounts are allowable and paid from the correct account. Additionally, the lack of consistent identification and review of all transactions by DSCA raises the risk of misuse of funds in the FMS transportation accounts. DSCA officials told us that they are developing guidance in coordination with DFAS, and that it should be implemented by July 2020. However, until DSCA finalizes that guidance to DFAS, DSCA may review inconsistent analyses and may not review all transportation expenditures, and the risk remains that DSCA may use the FMS transportation accounts to pay for unallowable costs, or pay transportation costs from the incorrect account. 
DFAS’s Internal Guidance on Reviewing Transportation Expenditures Lacks Key Steps DFAS established procedures to review FMS transportation expenditures and to realign BPC expenditures to the correct FMS transportation accounts in part based on direction from DSCA. However, these procedures lack key steps to ensure that DFAS reviews all expenditures and identifies discrepancies, as well as to address discrepancies that may limit DFAS’s ability to identify transactions for realignment. Federal internal control standards state that management should design control activities to respond to risks, implement activities that address those controls, and ensure that activities are performed consistently. DFAS maintains a separate set of procedures for each of the three transportation service providers that submit FMS transportation bills. The results of DFAS’s procedures—such as how transportation expenditures were realigned—are included as supporting documentation for the monthly reports provided to DSCA. Our review of DFAS’s realignment procedures determined that the procedures are inconsistent or missing key steps that could help address the risk that expenditures paid from the FMS transportation accounts may be unallowable or paid from the incorrect account. We found that DFAS’s procedures do not ensure that DFAS reviews all transactions, including those that may require realignment. For example, DFAS’s procedures for DOD’s commercial transportation payment system—known as Syncada—do not include a step for reconciling the amount of the payment to the service provider against a list of detailed expenditure transactions, which may provide assurance that the list of transactions is complete. Specifically, for Syncada, DFAS queries the provider’s system using only nine transportation account codes provided by DSCA, which do not include any account codes associated with Navy, and only some associated with Air Force. 
Conversely, the realignment procedures for the Air Mobility Command and the Surface Deployment and Distribution Command include a step for reconciling the amount of the payment to the service provider against a list of detailed expenditure transactions, which helps to provide assurance that the list of transactions being reviewed is complete. Table 1 shows the results of our review of DFAS’s realignment procedures and analysis. Because DFAS’s procedures do not include steps to identify all Navy and Air Force transportation account codes, DFAS does not have reasonable assurance that all expenditures are reviewed by DFAS for realignment and provided to DSCA with the monthly report, which DSCA subsequently uses to review the validity of expenditures. As a result, any of these transactions that should be paid from a BPC transportation account are instead paid from the main FMS transportation account. As shown in figure 5, our review of Syncada expenditure data for May through July 2019 found that the use of DFAS’s procedures resulted in approximately 15 percent of expenditures not being reviewed. Those transactions represent 11 percent of the dollar value of transportation expenditures for that period, or approximately $392,000. In addition, we found that DFAS’s procedures do not address how to correct or follow up on discrepancies. Specifically, all three sets of realignment procedures state that analyzing the list of detailed transactions may identify transactions with discrepancies in their data, but none of the procedures fully address the types of discrepancies or their implications, such as if the expenditure does not include a transportation control number. Rather, according to DFAS officials, if DFAS identifies such a discrepancy with a specific expenditure, that cost remains as an expenditure from the main transportation account. 
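A minimal sketch of the two controls discussed here, under assumed record shapes: reconciling a service provider's bill total against its detailed transaction list, and identifying transportation account codes that fall outside the query list and therefore escape review.

```python
# Illustrative sketch: data shapes ({"tac": ..., "amount": ...}) are assumptions,
# not the formats used by DFAS or the service providers' systems.
def reconcile_bill(bill_total: float, transactions: list[dict]) -> bool:
    """The detailed list is complete only if its amounts sum to the bill total."""
    return abs(sum(t["amount"] for t in transactions) - bill_total) < 0.01

def unqueried_tacs(transactions: list[dict], queried_tacs: set[str]) -> set[str]:
    """TACs present in the detail but absent from the query list escape review."""
    return {t["tac"] for t in transactions} - queried_tacs
```

A reconciliation failure or a nonempty set of unqueried codes would be the signal to follow up, rather than leaving the cost on the main transportation account by default.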
Because DFAS’s procedures do not include steps to reconcile the amount of payments to all service providers against a list of detailed cost transactions or to identify all transportation expenditure transactions, DFAS may not review all FMS transportation expenditures and may not pay all expenditures from the correct transportation account. Additionally, because the list of transportation account codes provided to DFAS does not include all FMS account codes, neither DFAS nor DSCA review all expenditures from the FMS transportation accounts, which raises the risk of unallowable or unapproved expenditures. As a result, DSCA’s ability to provide reasonable assurance that all transportation expenditures are allowable and paid from the correct account is limited. DSCA officials told us that they are developing guidance in coordination with DFAS to identify and follow up on discrepancies and clarify how DFAS is to perform its analysis, and that the guidance should be implemented by July 2020. However, until DSCA finalizes that guidance to DFAS, DSCA may review inconsistent analyses and may not review all transportation expenditures, and the risk remains that DSCA may use the FMS transportation accounts to pay for unallowable costs, or pay transportation costs from the incorrect account. Conclusions DSCA has developed financial oversight procedures for overseeing the billions of dollars that are collected into and expended from the FMS transportation accounts, but we found weaknesses in oversight of both collections and expenditures. Regarding collections, gaps in DSCA’s oversight of DOD components’ annual case reviews and delivery reporting increase the risk that transportation fees collected may be inaccurate. Similarly, regarding expenditures, we identified gaps in DSCA’s oversight. Specifically, DSCA has not established procedures for conducting monthly reviews of expenditures and correcting discrepancies, or defined the information it needs from DFAS. 
Further, DFAS’s procedures to review and realign costs between FMS transportation accounts—which are based on guidance from DSCA—do not ensure that all transactions are included. By improving financial oversight of the FMS transportation accounts, DSCA could better ensure the accuracy of fees collected and help provide reasonable assurance that expenditures are allowable and paid from the correct account. DSCA officials told us that they are developing guidance to address these issues, and plan to implement that guidance in 2020. However, until DSCA finalizes and implements that guidance, the risks remain that DSCA may collect inaccurate transportation fees, use the FMS transportation accounts to pay for unallowable costs, or pay transportation costs from the incorrect account. Recommendations for Executive Action We are making the following five recommendations to DOD: The Secretary of Defense should ensure that the Director of DSCA implements the planned initiative to routinely examine annual case reviews performed by DOD components to help ensure that fees collected into the FMS transportation accounts are accurate. (Recommendation 1) The Secretary of Defense should ensure that the Director of DSCA works with DFAS and DOD components to establish a written process to monitor the timeliness of components’ delivery reporting to help ensure that fees collected into the FMS transportation accounts are accurate. (Recommendation 2) The Secretary of Defense should ensure that the Director of DSCA finalizes and implements internal guidance on how to conduct and document DSCA’s monthly review of expenditures from the FMS transportation accounts, including what information should be reviewed and how to identify and follow up on discrepancies. 
(Recommendation 3)

The Secretary of Defense should ensure that the Director of DSCA works with DFAS to finalize written guidance to DFAS on how to generate the reports needed for DSCA’s monthly review of expenditures from the FMS transportation accounts, including the type of analysis needed. (Recommendation 4)

The Secretary of Defense should ensure that the Director of DSCA works with DFAS and other DOD components to finalize the planned guidance to DFAS for the review and realignment of expenditures from the FMS transportation accounts to ensure reviews are consistent and include all expenditures. (Recommendation 5)

Agency Comments

We provided a draft of this report to DOD and State for review and comment. DSCA provided written comments on behalf of DOD, which are reprinted in appendix II. DSCA concurred with all of our recommendations, and indicated that it had developed plans to address them and had begun implementing some of those plans. DOD noted that annual case reviews and delivery reporting are not directly related to financial transactions tied to the FMS transportation account. However, both annual case reviews and delivery reporting provide an opportunity for oversight that can help verify the accuracy of data, which affects the accuracy of transportation fees collected from FMS purchasers’ accounts into the FMS transportation accounts. We also received technical comments from DOD, which we incorporated in our report as appropriate. State did not provide any written or technical comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of State, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or BairJ@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope & Methodology

This report examines (1) the Defense Security Cooperation Agency’s (DSCA) oversight of Department of Defense (DOD) components’ activities that affect fees collected into the Foreign Military Sales (FMS) transportation accounts, and (2) DSCA’s financial oversight of expenditures from the FMS transportation accounts. To obtain information on both of our objectives, we reviewed DSCA’s guidance related to the FMS program and FMS transportation, and analyzed fiscal years 2007 to 2018 summary collections and expenditures data for the FMS transportation accounts maintained by the Defense Finance and Accounting Service (DFAS) in the Defense Integrated Financial System. We chose to review data from these fiscal years based on data availability. Under a prior review of the management oversight of FMS transportation fees, we assessed the reliability of these data by reviewing for duplicate entries, gaps, and obvious errors, comparing the data to similar data obtained under prior reviews, and interviewing agency officials to clarify questions about how to interpret the data. On the basis of this assessment, we determined these data to be reliable for the purposes of summarizing the total collections into and expenditures from the FMS transportation accounts during fiscal years 2007 to 2018. To examine DSCA’s oversight of DOD components’ activities that affect fees collected into the FMS transportation accounts, we reviewed DOD’s current guidance related to the FMS transportation fee, as well as other documentation and internal guidance developed by DSCA. We interviewed DSCA and DFAS officials on their implementation of oversight procedures.
To determine and assess the controls DSCA should be using to manage and oversee the account, we reviewed DOD’s Financial Management Regulation, DSCA’s Security Assistance Management Manual, other internal DSCA guidance, federal accounting standards, federal internal control standards, and our prior report on DSCA’s management oversight of the FMS transportation account balances. To examine DSCA’s financial oversight of expenditures from the FMS transportation accounts, we reviewed DOD’s current regulations related to financial oversight and transportation, including DOD Financial Management Regulation and DOD Defense Transportation Regulation. Additionally, we reviewed DSCA’s Managers’ Internal Control Program procedures for monthly FMS transportation account reviews, and we interviewed DSCA officials responsible for these reviews. We also reviewed DSCA’s Security Assistance Management Manual, which provides guidance to DOD components related to the FMS program, and DFAS’s internal guidance on reviewing and realigning expenditures from the FMS transportation account. We analyzed a nongeneralizable, 3-month sample of expenditure data for the FMS transportation accounts provided by DFAS for the period from May through July 2019, including transportation service provider bills and detailed transaction-level expenditures. These data included the transportation account codes and transportation control numbers that DSCA and DFAS use to verify that individual expenses are allowable, and to realign transportation expenditures to the correct FMS transportation account. We initially obtained 1 month of transportation expenditure data, but decided to expand our analysis to 3 months of data to account for any variability between months. Additionally, we chose to review data from this period because they were the most current at the time of our request, and therefore the data were compiled using DFAS’s current process. 
We determined this period to be sufficient for our analysis of the data, which DSCA and DFAS use to provide assurance that expenditures from the FMS transportation accounts are allowable and paid from the correct account. To assess the reliability of these data, we reviewed the data for internal consistency by reviewing for duplicate entries, gaps, and obvious errors; compared them to DOD regulations on transportation account code and transportation control number construction; and interviewed DSCA and DFAS officials about their data collection and verification procedures. We found the data to be sufficiently reliable for our purpose of presenting the total number and dollar amount of transportation expenditures reviewed by DSCA and DFAS for each month, and to identify the number and dollar amount associated with expenditure records where we identified discrepancies. We found instances of blank or incorrectly formatted transportation account codes and transportation control numbers, and instances where the first characters of the transportation account code and transportation control number did not match, which DFAS officials identified as possible discrepancies in the data. As we discuss in the report, these instances raise questions about the reliability of the data for financial oversight, since DSCA and DFAS use this information to ensure that expenditures are allowable and paid from the correct FMS transportation accounts. We did not conduct any independent testing of the data to determine whether the amounts reflected correct payments made toward accurate billings. To review DFAS’s internal guidance for reviewing and realigning expenditures from the FMS transportation account, we reviewed copies of the procedures provided by DFAS for each of the three transportation service providers. We identified major steps in the procedures related to reviewing expenditure data, as well as various internal controls related to analyzing data. 
We reviewed each of the procedures against one another to determine the extent to which they addressed the same elements, and we compared relevant procedures against standards for internal control related to obtaining, evaluating, and correcting data, to determine whether they were sufficient to provide financial oversight. Additionally, we interviewed DFAS officials responsible for these procedures. To determine whether DFAS’s procedures included all expenditures, we requested and obtained information from Army, Navy, and Air Force on the transportation account codes that each component used in fiscal year 2019, and compared them to the account codes provided to DFAS by DSCA and used to query the third-party transportation service provider system for relevant FMS transportation expenses. We identified a list of transportation account codes not queried as part of DFAS’s procedures, and we requested that DFAS query the third-party transportation service provider’s system for the period of May through July 2019 using that list. We requested data from this period to be consistent with the expenditure data DFAS initially provided us for the same period. We reviewed the resulting data and compared them to the previously provided data to determine the relative size of each data set for this period. We did not independently test to determine whether the lists of transportation account codes provided to us were complete, and therefore the data reviewed may not include all relevant transportation expenditures. We conducted this performance audit from May 2019 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
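The account-code comparison described above is, in effect, a set-difference computation: codes reported by the components but absent from the query list identify expenditures that DFAS's review would miss. A minimal sketch of that comparison follows; the codes shown are made up for illustration and are not actual DOD transportation account codes.

```python
# Illustrative only: these are made-up transportation account codes,
# not actual DOD codes.
component_codes = {"AR1A", "AR2B", "NV1A", "NV2B", "AF1A", "AF2B"}  # reported as used by Army, Navy, Air Force
queried_codes = {"AR1A", "NV1A", "AF1A"}  # codes provided to DFAS for querying

# Codes in use but never queried: expenditures under these codes
# would escape the monthly review entirely.
not_queried = sorted(component_codes - queried_codes)
print(not_queried)
```

The same two-list comparison, run against the real code lists, is what produced the list of unqueried codes described in the methodology.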
Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Jason Bair, (202) 512-6881 or BairJ@gao.gov

Staff Acknowledgments

In addition to the contact named above, Cheryl Goodman (Assistant Director), Benjamin L. Sponholtz (Analyst-in-Charge), Adam Peterson, and Heather Rasmussen made key contributions to this report. Martin de Alteriis, John Hussey, Christopher Keblitis, Heather Latta, and Grace Lui also contributed to this report.
Why GAO Did This Study

From fiscal years 2007 to 2018, DOD collected about $2.3 billion in fees into the FMS transportation accounts and expended about $1.9 billion from the accounts. Foreign partners can pay DOD a fee to cover the costs of DOD transporting items. Fees are collected into transportation accounts in the FMS Trust Fund, and expenditures for related transportation are paid from those accounts. DSCA is responsible for financial oversight of the accounts, and DFAS—a service provider to DSCA—also has some accounting responsibilities related to the accounts. House Report 114-537 and Senate Report 114-255 included provisions that GAO review DSCA's management of FMS fees. This report examines (1) DSCA's oversight of DOD components' activities that affect fees collected into the FMS transportation accounts, and (2) DSCA's financial oversight of expenditures from the FMS transportation accounts. GAO reviewed DOD guidance, analyzed 3 months of DOD expenditure data, and interviewed DOD officials.

What GAO Found

The Foreign Military Sales (FMS) program is one of the primary ways the U.S. government supports its foreign partners, by annually selling them billions of dollars of military equipment and services. However, gaps in the Defense Security Cooperation Agency's (DSCA) oversight of Department of Defense (DOD) components' activities increase the risk that fees collected into the FMS transportation accounts may be inaccurate. While DSCA requires components to perform annual reviews of FMS cases to verify the accuracy of transportation fees collected, DSCA does not routinely oversee these reviews. Additionally, DSCA lacks oversight of the timeliness of DOD components' reporting of deliveries, which should occur within 30 days. DSCA officials indicated that they are developing guidance and processes to help address these challenges, but had not completed them as of February 2020. 
DSCA's financial oversight of expenditures from the FMS transportation accounts does not provide reasonable assurance that expenditures are allowable and paid from the correct account. In fiscal year 2016, DSCA established internal guidance for financial oversight of expenditures from the accounts. While that guidance includes a process to review expenditures on a monthly basis, DSCA has not established procedures for conducting that review, including how to analyze expenditure data, or identify and address discrepancies. As a result, DSCA may not review FMS transportation expenditures consistently or identify and address discrepancies. GAO found that approximately 19 percent of expenditures reported to DSCA over a 3-month period in fiscal year 2019 inconsistently identified the DOD component responsible for the transaction. For example, a transaction may indicate that both Navy and Air Force are responsible for the shipment. Further, DSCA has not documented how the Defense Finance and Accounting Service (DFAS) should generate the reports DSCA uses for its review, and DFAS's review of expenditures excludes some expenditures from two DOD components. Without a routine process to review expenditures and correct discrepancies, DSCA cannot provide reasonable assurance that all expenditures are allowable and paid from the correct account, raising the risk of misuse of funds. DSCA officials told GAO that they are developing guidance to help address these challenges, and expect to implement it in 2020.

What GAO Recommends

GAO is making five recommendations to DOD to strengthen financial oversight of the FMS transportation accounts, including two recommendations to strengthen DSCA's oversight of fees collected into the accounts, and three recommendations to strengthen DSCA's and DFAS's oversight of expenditures from the transportation accounts. DOD concurred with all of the recommendations and identified actions it plans to take to address them.
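The component-consistency problem described in this report (a transaction whose transportation account code and transportation control number point to different DOD components) lends itself to a simple automated check. The sketch below is illustrative only: the field lengths, one-character component prefixes, and record layout are assumptions made for the example, not DOD's actual data specification.

```python
# Illustrative sketch of flagging expenditure records whose transportation
# account code (TAC) and transportation control number (TCN) disagree on the
# responsible DOD component. The 4-character TAC, 17-character TCN, and
# single-letter component prefix are assumptions for this example, not
# DOD's actual coding scheme.

def flag_component_mismatches(records):
    """Return (record, reason) pairs for blank/malformed or mismatched codes."""
    flagged = []
    for rec in records:
        tac = (rec.get("tac") or "").strip()
        tcn = (rec.get("tcn") or "").strip()
        if len(tac) != 4 or len(tcn) != 17:   # assumed field lengths
            flagged.append((rec, "blank or malformed code"))
        elif tac[0] != tcn[0]:                # component prefixes should agree
            flagged.append((rec, "TAC/TCN component mismatch"))
    return flagged

def discrepancy_rate(records):
    """Share of records flagged, comparable to the ~19 percent GAO reported."""
    return len(flag_component_mismatches(records)) / len(records)
```

Run each month over the expenditure extract, a check like this would surface the kinds of discrepancies GAO identified in the May through July 2019 data.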
Background

Air ambulance providers use either helicopters or fixed-wing aircraft, as shown in Figure 1, depending on where and how far they are transporting patients. Helicopters are generally used for transports from the scene of the accident or injury to the hospital or for shorter-distance transports between hospitals. Helicopter bases may be at hospitals, airports, or other types of helipads, and a provider may need to fly from its base to the scene or a hospital to pick up the patient being transported. Air ambulance providers typically respond to calls for helicopter transports within a certain area around their bases in part to ensure appropriate response times. Fixed-wing aircraft are generally used for longer-distance transports between hospitals. Fixed-wing bases are at airports, and the patient is transported by ground ambulance to and from the airports. Air ambulance providers respond to emergencies without knowing patients’ health insurance coverage, such as whether the patient has private insurance, Medicare, Medicaid, or no insurance. According to our previous analysis of information from eight selected air ambulance providers, in 2016, Medicare patients received 35 percent of helicopter transports, privately-insured patients received 32 percent, Medicaid patients received 21 percent, uninsured patients received 9 percent, and patients with other types of coverage such as automobile and military-sponsored insurance received a small percentage. Relatively few patients receive air ambulance transports, but those patients who do generally have no control over the decision to be transported by air ambulance or the selection of the air ambulance provider, as shown in Figure 2. 
For privately-insured patients, this means they cannot necessarily choose to be transported by air ambulance providers in their insurers’ network and can potentially receive a balance bill from the providers for the difference between the price charged by the provider and the amount paid by the insurer. This amount is in addition to copayments, deductibles, or other types of cost-sharing that patients typically pay under their insurance. Air ambulance providers are prohibited from sending balance bills to Medicare and Medicaid patients, while uninsured patients might be held responsible by the air ambulance provider for the entire price charged. With many types of health care services, both health care providers and insurers have incentives to negotiate and enter into contracts that specify amounts that providers will accept as payment in full, thereby avoiding the potential for balance bills for those services. Insurers can offer—and health care providers may be willing to accept—payment rates that are much lower than the providers’ charged amounts because the providers may receive more patients as an in-network provider. Furthermore, when patients are choosing insurance plans, they may consider how many or which providers are in-network, particularly for providers such as hospitals or certain physicians. The emergency nature of most air ambulance transports, as well as their relative rarity and high prices charged, reduces the incentives of both air ambulance providers and insurers to enter into contracts with agreed- upon payment rates, which means air ambulance providers may be more often out-of-network when compared with other types of providers. Decisions by first responders and physicians on which air ambulance provider to call are typically not based on the patient’s insurance plan, meaning that being in-network may not increase air ambulance providers’ transport volume. 
As a result, according to stakeholders we spoke to, if insurers offer payment rates that are much lower than the air ambulance providers’ charged amounts, the air ambulance providers may be less willing than other health care providers to accept those payment rates. Furthermore, given the relative rarity of air ambulance transports, patients may not anticipate needing air ambulance transports and may not choose insurance plans based on which or how many air ambulance providers are in insurers’ networks. Approaches by states or the federal government to limit balance billing may target providers, insurers, or both. Examples of approaches described in research on balance billing include a cap on the amount that providers can charge or a requirement for insurers to pay the full amount charged by providers. However, according to the research, targeting just providers or insurers can result in undesired outcomes. Capping the amount providers can charge could result in insurers that underpay for services, which could lead some providers to reduce service or exit the market altogether. Conversely, requiring insurers to pay the full amount charged by providers could result in providers that overcharge for services, which could lead to higher premiums charged to patients. The authority of states to address issues related to air ambulance balance billing is affected by the following federal laws: Airline Deregulation Act of 1978 (ADA): A provision in this law preempts state-level economic regulation—i.e., regulating rates, routes, and services—of air carriers authorized by DOT to provide air transportation. In general, courts have held that air ambulances are considered to be air carriers under the ADA’s preemption provision, and courts, DOT, and state attorneys general have determined specific issues related to the air ambulance industry that can and cannot be regulated at the state level. 
McCarran-Ferguson Act of 1945: This act affirmed that states have the authority to regulate the business of insurance. For example, states may review insurers’ health insurance plans and premium rates. In instances of balance billing, states can determine whether the insurer paid a provider in accordance with its policy for paying for out-of-network services. Employee Retirement Income Security Act of 1974 (ERISA): ERISA provides a federal framework for regulating employer-based pension and welfare benefit plans, including health plans. Although states may regulate health insurers, ERISA preemption generally prevents states from directly regulating self-insured employer-based health plans. In 2017, as previously mentioned, we reported on the increase in prices charged by helicopter air ambulance providers and on the lack of data on the factors that may be affecting prices charged. We also found only limited information was available related to several key aspects of the industry, ranging from basic aspects—such as the composition of the industry by type of air ambulance provider, the prices charged by air ambulance providers, and the number of overall transports—to the more complex, such as the extent of contracting between air ambulance providers and insurers or the extent of balance billing to patients. 
Given DOT’s authority to oversee certain aspects of the industry, we made four recommendations to DOT in 2017 to increase transparency and obtain information to better inform its oversight of the air ambulance industry: (1) communicating a method to receive air ambulance complaints, including those regarding balance billing; (2) taking steps to make complaint information publicly available; (3) assessing available federal and industry data to determine what information could assist in the evaluation of future complaints; and (4) considering consumer disclosure requirements for air ambulance providers, such as established prices charged and the extent of contracting with insurers. DOT has taken steps to respond to the first two recommendations, including adding information to its website describing how air ambulance complaints can be registered and used by DOT. It has also listed the number of air ambulance complaints filed with DOT each month starting in January 2018—23 air ambulance complaints have been filed with DOT through November 2018. DOT has not yet acted on the remaining two recommendations.

Air Ambulance Providers Added Bases from 2012 through 2017

Air ambulance providers added helicopter bases from 2012 through 2017, according to our analysis of the ADAMS data. Specifically, there were 752 bases in the 2012 data and 868 bases in the 2017 data. When we compared the data for each year, there were 554 bases in both years of data (i.e., existing bases), 314 bases in the 2017 data only (i.e., new bases), and 198 bases in the 2012 data only (i.e., closed bases); the new and existing bases are shown in Figure 3. This addition of bases also increased the total area served by helicopter bases by 23 percent. Several air ambulance providers told us about their decisions to open new bases. 
For example, one air ambulance provider told us that one way it evaluates the need for a new base in an area is to ask hospitals in that area about the number of transports they typically require and the length of time it takes helicopters to arrive to pick up patients. Along with adding helicopter bases, air ambulance providers also added fixed-wing bases from 2012 through 2017, according to our analysis of the ADAMS data. Specifically, there were 146 bases in the 2012 data and 182 bases in the 2017 data. When we compared the data for each year, there were 114 bases in both years of data (i.e., existing bases), 68 bases in the 2017 data only (i.e., new bases), and 32 bases in the 2012 data only (i.e., closed bases); the new and existing bases are shown in Figure 4. Both the existing and new bases are more prevalent in the Western and Southern parts of the United States. Given that fixed-wing aircraft are used for longer-distance transports and that patients are brought to the base rather than picked up by fixed-wing aircraft, we did not measure the area or any changes in the area served by fixed-wing bases, which are usually airports. Based on our previous work, we further analyzed two trends related to where air ambulance providers have chosen to locate their new bases. New bases in rural areas: About 60 percent of the new helicopter bases and about half of the new fixed-wing bases in the ADAMS data were in rural areas. We previously reported that some helicopter air ambulance providers told us that the lower population density in rural areas leads to fewer transports per helicopter at rural bases. They also said that, despite the lower population density, rural areas may have greater need for air ambulance transports. This may be due to, for example, the closure of some rural hospitals and the establishment of regional medical facilities, such as cardiac and stroke centers that provide highly specialized care. 
New bases in areas with existing coverage: For just under half of the new helicopter bases in the ADAMS data, the area served overlapped with existing air ambulance coverage by more than 50 percent. On one hand, according to some stakeholders we spoke to, the new helicopters may help enhance available services by, for example, being able to respond to a call if the existing ambulance resources are in use or otherwise unavailable. On the other hand, as we have previously reported, some air ambulance providers told us that when helicopters are added to bases in areas with existing coverage, those helicopters are not serving additional demand. As a result, the same number of transports is spread out over more helicopters, reducing the average number of transports per helicopter. The FAA Reauthorization Act of 2018, which became law in October 2018, requires the FAA to assess the availability of information to the general public related to the location of heliports and helipads used by helicopters providing air ambulance services and to update current databases or, if appropriate, develop a new database containing such information. This could provide additional information about base locations going forward.

Available Data Indicate About Two-Thirds of Air Ambulance Transports for Privately-Insured Patients Were Out-of-Network but Not Extent of Balance Billing for These Services

In the FAIR Health data on air ambulance transports for privately-insured patients, about two-thirds of the approximately 13,100 and 20,700 transports with information on network status were out-of-network in 2012 and 2017, respectively. (See Table 1.) The proportions were similar for both helicopter and fixed-wing transports in each year. The proportion of out-of-network air ambulance transports in the FAIR Health data set is higher than what research shows for ground ambulance transports and other types of emergency services. 
For example, one study found that 51 percent of ground ambulance transports in 2014 were out-of-network, and the same study and another one found that 14 and 22 percent of emergency department visits in 2014 and 2015 involved out-of-network physicians, even at in-network hospitals. Air ambulance providers and insurers we spoke to confirmed that their proportion of out-of-network transports was high in 2017, but some also reported they have recently been entering into more network contracts. For example, one of the large independent air ambulance providers and a national insurer entered into a contract that covered patients in five states as of August 2018. These contracts could decrease the extent of out-of-network transports and balance billing in the future for these states. Increases in the prices charged for air ambulance transports may exacerbate the financial risks related to balance billing for those with private insurance. In 2017, the median price charged by air ambulance providers for a transport was approximately $36,400 for a helicopter transport and $40,600 for a fixed-wing transport, according to our analysis of FAIR Health data. The prices charged in 2017 were an increase of over 60 percent from 2012, when the median price charged was approximately $22,100 for a helicopter transport and $24,900 for a fixed-wing transport. There is limited information on what insurers pay for out-of-network services. While out-of-network transports may result in balance billing, the FAIR Health data we analyzed do not indicate the extent to which patients received balance bills and, if so, the size of the bills. In addition, as we previously reported, there is a lack of comprehensive national data about the extent and size of balance bills, and air ambulance providers are generally not required to report such data. However, some states have attempted to collect information from patients about balance billing for air ambulance services. 
Therefore, to provide insights into potential balance bill amounts, we reviewed data on consumer complaints that two of our selected states had received about specific incidents of balance billing for 2014 through 2018. Data for Maryland contained about two dozen complaints with information on the specific amount of balance bills, and those amounts ranged from $12,300 to $52,000. Data from North Dakota contained three dozen complaints with information on the specific amount of balance bills, and those amounts ranged from $600 to $66,600, though all but one amount was over $10,000. Given that providers may agree to reduce amounts that patients would otherwise owe or insurers may increase their payments to providers, along with limited national data, the extent to which patients actually pay the full amounts of balance bills received is also unclear. Generally, officials from air ambulance providers we spoke to said that they first encourage patients to appeal to their insurers for increased payment. If these appeals do not fully address the balance bill, the providers may offer various payment options. For example, officials from one air ambulance provider said that it offers a discount of up to 50 percent off the balance bill if the patient pays the remaining 50 percent immediately. Alternatively, the provider requests detailed financial information—such as income, obligations and debts, and medical bills—to determine whether to potentially offer other discounts or a payment plan. This process can take multiple months, and officials from another air ambulance provider said patients who do not respond to letters and calls may be more likely to be referred to a collections process. Air ambulance providers we spoke with said that they use discretion on how much assistance to offer, and not all patients receive discounts after providing all relevant documentation. 
Even with discounts, according to data from some air ambulance providers we spoke with, the amount patients pay can still be in the thousands of dollars.

Selected States Have Attempted to Limit Potential Air Ambulance Balance Billing through Insurance Regulation and Public Attention

Four of our selected states attempted to limit balance billing through the regulation of insurers (Montana, New Mexico, North Dakota, and Texas). Additionally, four states have attempted to limit balance billing through education and public pressure on stakeholders (Florida, Maryland, New Mexico, and North Dakota).

Insurance Regulation

Four of the six states we selected—Montana, New Mexico, North Dakota, and Texas—have attempted to limit balance billing by air ambulance providers through the regulation of insurers, as shown in Table 2. Three states have faced challenges in federal district court related to whether their attempts to limit balance billing by air ambulance providers are preempted by the federal ADA. As of January 2019, the case in New Mexico was dismissed on procedural grounds, and the cases in North Dakota and Texas have been decided. The hold-harmless requirement and dispute resolution process established by Montana’s law is an example of how states are attempting to limit balance billing by regulating the business of insurance. Under the hold-harmless requirement, the financial risk for potential balance billing is transferred from patients to the insurer by limiting the patients’ out-of-pocket costs to their cost-sharing responsibilities. However, according to state officials, the dispute resolution process established by this law had not yet been used as of December 2018. The requirement and process apply to transports for patients covered by Montana-regulated insurance plans. 
It does not apply to transports for individuals in most self-insured plans subject to ERISA, nor does it apply to transports for individuals, such as tourists, covered by insurance plans regulated by other states. The stated purpose of the law establishing this process is to prevent state residents from incurring excessive out-of-pocket expenses in air ambulance situations in a manner that is not preempted by the ADA. Officials in Montana and North Dakota reported receiving fewer consumer complaints about balance billing after implementing their laws to limit balance billing. One reason for this decrease in consumer complaints, according to officials in Montana, was that uncertainty over the possible effects of the law has made most air ambulance providers more willing to enter into contract negotiations with insurers. The officials added that shortly after the law’s enactment, a large insurer and a large air ambulance provider entered into a network contract. Additionally, another air ambulance provider in Montana confirmed that although it had provided out-of-network transports, it had not sent balance bills to patients since the law took effect. Officials in both states could not comprehensively report the extent to which instances of balance billing may have decreased in their state. As required by FAA Reauthorization Act of 2018, the Secretary of Transportation has taken steps to form an advisory committee on air ambulance patient billing. DOT issued a solicitation in December 2018 for applications and nominations for membership on this advisory committee. The committee is to consist of representatives from state insurance regulators, health insurance providers, patient advocacy groups, consumer advocacy groups, and physicians specializing in emergency, trauma, cardiac, or stroke care, among others. 
The Act directs the advisory committee to issue a report within 180 days of its first meeting and to make recommendations that address the following, among other things:

The disclosure of charges and fees for air ambulance services;

Options and best practices for preventing balance billing—such as improving network and contract negotiation, dispute resolutions between health insurers and air medical service providers, and explanations of insurance coverage;

Steps that states can take to protect consumers consistent with current legal authorities regarding consumer protection; and

The recommendations from our 2017 report, including any additional data that DOT should collect from air ambulance providers and other sources to improve its understanding of the air ambulance market and oversight of the industry.

Education and Public Pressure

Officials in three selected states—Florida, New Mexico, and North Dakota—have provided information to educate consumers and other stakeholders about balance billing for air ambulance transports. The Florida Office of the Insurance Consumer Advocate and the New Mexico Office of Superintendent of Insurance reviewed air ambulance transports in their states and issued public reports with recommendations to improve transparency and education, among other recommendations. Florida’s report, issued in June 2018, recommends that insurers and air ambulance providers improve transparency about the availability of in-network air ambulance providers in a given area and provide information about rate justifications and billing practices to help consumers anticipate potential out-of-network costs. New Mexico’s report, issued in January 2017, recommends educating emergency room physicians and other health care providers about the impact of air ambulance bills on consumers and on how to select in-network air ambulance providers. 
Additionally, since 2017, the North Dakota Insurance Department has produced a publicly available guide showing which air ambulance providers are in-network with the three insurers in the state. This guide is part of the state’s requirement that, for non-emergency transports, hospitals inform patients about the network status of air ambulance providers. Although the three large independent air ambulance providers we spoke with told us that non-emergency transports comprise only a small percentage of air ambulance transports, officials in North Dakota said some dispatchers and first responders reported using the guide to call in-network air ambulance providers when possible for emergency transports. Finally, one additional selected state—Maryland—has increased public awareness of air ambulance balance billing, which has generated public pressure on air ambulance providers and insurers to encourage the two sides to negotiate contracts. The Maryland Insurance Administration convened a public meeting in September 2015 with the goal of raising public awareness about air ambulance balance billing in the state. The meeting involved statements from patient, air ambulance, hospital, and insurer stakeholders. One of the large independent air ambulance providers said that public pressure following the meeting, as well as subsequent engagement from the state insurance commissioner, were factors in securing a contract with a large insurer in the state.

Agency Comments

We provided a draft of this report to the Department of Health and Human Services and DOT for review and comment. The Department of Health and Human Services told us they had no comments on the draft report, and DOT provided technical comments that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Administrator of the Centers for Medicare & Medicaid Services, the Secretary of the Department of Transportation, and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact James Cosgrove, Director, Health Care at (202) 512-7114 or cosgrovej@gao.gov or Heather Krause, Director, Physical Infrastructure at (202) 512-2834 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.

Appendix I: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Lori Achman (Assistant Director), Heather MacLeod (Assistant Director), Corissa Kiyan-Fukumoto (Analyst-in-Charge), William Black, George Bogart, Stephen Brown, Krister Friday, Matthew Green, Barbara Hansen, Giselle Hicks, and Vikki Porter made key contributions to this report.

Related GAO Products

Air Ambulance: Data Collection and Transparency Needed to Enhance DOT Oversight. GAO-17-637. Washington, D.C.: July 27, 2017.

Air Ambulance: Effects of Industry Changes on Services Are Unclear. GAO-10-907. Washington, D.C.: Sept. 30, 2010.
Why GAO Did This Study

Air ambulances provide emergency services for critically ill patients. Relatively few patients receive such transports, but those who do typically have no control over the selection of the provider, which means privately-insured patients may be transported by out-of-network providers. The Joint Explanatory Statement accompanying the 2017 Consolidated Appropriations Act includes a provision for GAO to review air ambulance services. Among other objectives, this report describes (1) the extent of out-of-network transports and balance billing and (2) the approaches selected states have taken to limit potential balance billing. GAO analyzed a private health insurance data set for air ambulance transports with information on network status and prices charged in 2017 (the most recent data available). Although this was the most complete data identified, the data may not be representative of all private insurers. In addition, GAO interviewed officials in six states (Florida, Maryland, Montana, New Mexico, North Dakota, and Texas) selected in part for variation in approaches to limit balance billing and location. GAO also interviewed air ambulance providers, health insurers, and Centers for Medicare & Medicaid Services and Department of Transportation (DOT) officials. DOT provided technical comments on a draft of this report, which GAO incorporated as appropriate, and the Department of Health and Human Services had no comments.

What GAO Found

Privately-insured patients transported by air ambulance providers outside of their insurers' provider networks are at financial risk for balance bills—which, as the figure shows, are for the difference between prices charged by providers and payments by insurers. Any balance bills are in addition to copayments or other types of cost-sharing typically paid by patients under their insurance coverage. 
According to GAO's analysis of the most complete data identified for air ambulance transports of privately-insured patients, 69 percent of about 20,700 transports in the data set were out-of-network in 2017. This is higher than what research shows for ground ambulance transports (51 percent in 2014 according to one study) and other emergency services. Air ambulance providers that GAO spoke with reported entering into more network contracts recently, which could lower the extent of out-of-network transports in areas covered by the contracts. While out-of-network transports may result in balance billing, the data GAO analyzed do not indicate the extent to which patients received balance bills and, if so, the size of the bills. In addition, as GAO reported in 2017, there is a lack of national data on balance billing, but some states have attempted to collect information from patients. For example, GAO reviewed over 60 consumer complaints received by two of GAO's selected states—the only states able to provide information on the amount of individual balance bills—and all but one complaint was for a balance bill over $10,000. Patients may not end up paying the full amount if they reach agreements with air ambulance providers, insurers, or both. The amounts of potential balance bills are informed in part by the prices charged. GAO's analysis of the data set with transports for privately-insured patients found the median price charged by air ambulance providers was about $36,400 for a helicopter transport and $40,600 for a fixed-wing transport in 2017. The six states reviewed by GAO and others have attempted to limit balance billing. For example, the six states have taken actions to regulate insurers, generate public attention, or both. As required by recent federal law, the Secretary of Transportation has taken steps to form an advisory committee to, among other things, recommend options to prevent instances of balance billing.
gao_GAO-19-674T
Selected Agencies Have Scientific Integrity Policies That Are Generally Consistent with Federal Guidance

In our April 2019 report, we found that all nine of the selected agencies have policies that are generally consistent with OSTP’s guidance for the principles of scientific integrity that we reviewed: foundations of scientific integrity in government and professional development of government scientists and engineers. OSTP’s guidance describes several components for each of these principles, which the selected agencies addressed either (1) through their scientific integrity policies, (2) in related policies, or (3) through related actions. For example, when addressing the components of foundations of scientific integrity in government, NOAA’s scientific integrity policy states that the agency will ensure the free flow of scientific information online and in other formats, consistent with privacy and classification standards, and in keeping with other Commerce and NOAA policies. In another example, NASA’s scientific integrity policy states that NASA facilitates the free flow of scientific and technological information among scientists and engineers, between NASA staff and the scientific and technical community, and between NASA employees and the public. The policy goes on to cite additional NASA policies on dissemination of information and public access to data. Similarly, we found that all nine selected agencies addressed all of the components of the principle professional development of government scientists and engineers. For example, EPA’s policy states that the agency encourages publication and presentation of research findings in peer-reviewed, professional, or scholarly journals and at professional meetings. 
NIST’s scientific integrity policy states that the agency supports scientists’ full participation in professional or scholarly societies, committees, task forces, and other specialized bodies of professional societies, with proper legal review and approval. The policy goes on to cite separate NIST guidance for staff on how to seek approval for memberships and participation in professional organizations.

All of the Selected Agencies Took Some Action to Achieve Policy Objectives, but Opportunities Exist for Furthering Those Objectives

We found in our April 2019 report that the nine selected agencies have taken some actions to help achieve the objectives of their scientific integrity policies in the three areas we reviewed—communicating information to staff, providing oversight, and monitoring and evaluating performance. First, according to our analysis, seven of the nine selected agencies have taken some actions to educate and communicate to staff about their scientific integrity policies, and two have not. Specifically, FE and NIST have not provided scientific integrity training for staff, according to officials, or taken other actions to promote their scientific integrity policies with staff. Under the 2007 America COMPETES Act, civilian agencies that conduct scientific research are, among other things, required to widely communicate and readily make accessible to all employees their scientific integrity policies and procedures. According to FE and NIST officials, the agencies made their policies available to staff on their websites and believed no additional actions were needed. By taking action to educate and communicate their scientific integrity policies to staff through, for example, regular training, these agencies would have better assurance that employees have the information, skills, and competencies they need to help achieve agency scientific integrity objectives. 
We recommended that the Secretary of Energy and the Director of NIST take action to educate and communicate the agencies’ policies to staff through, for example, regular training. In DOE’s written comments on a draft of our report, reproduced in our final report, the department explained that it will designate a scientific integrity official to be responsible for leading and coordinating with other offices across DOE to develop measures to educate and communicate to staff about scientific integrity policies. In Commerce’s written comments, reproduced in our final report, NIST identified ways it plans to provide training to its staff. Second, we found that eight of the nine selected agencies have designated scientific integrity officials, or the equivalent, who are responsible for overseeing the agencies’ implementation of their scientific integrity policies. FE, which follows DOE’s policy, does not have a scientific integrity official or the equivalent. DOE’s scientific integrity policy states that the Secretary of Energy will designate a scientific integrity official for the department. DOE officials explained that the scientific integrity official has not been designated because the scientific integrity policy was implemented in January 2017, as the administration was changing, and that the current Secretary has not yet designated a scientific integrity official. We recommended that the Secretary of Energy establish steps and a time frame for designating a scientific integrity official to oversee the department’s scientific integrity activities. In DOE’s written comments on a draft of our report, reproduced in our final report, the department concurred with our recommendation and estimated that it would address the recommendation by the end of 2019. 
Third, we found in our April 2019 report that four of the nine selected agencies—ARS, EPA, NASA, and NIH—monitor and evaluate the performance of their activities under their scientific integrity policies, or have plans to do so. The remaining five agencies—FAA, FE, NIST, NOAA, and USGS—have, for different reasons, not done so. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks, which may include establishing activities to monitor performance measures and indicators. By establishing mechanisms to effectively monitor the implementation of their scientific integrity policies, agencies may be better positioned to evaluate and measure whether their scientific integrity policies are achieving their objectives and, where necessary, improve their implementation. We recommended in our April 2019 report that the five agencies develop mechanisms to regularly monitor and evaluate implementation of their scientific integrity policies, including mechanisms to remediate identified deficiencies and make improvements where necessary. All five agencies agreed with our recommendation and responded as follows:

In a May 2019 letter from DOT, the department identified several mechanisms it plans to implement by the end of March 2020.

In DOE’s written comments on a draft of our report, the department said that its scientific integrity official will have the responsibility to lead in developing procedures to monitor and evaluate implementation of DOE’s policy.

In Commerce’s written comments, NIST stated that, beginning in fiscal year 2019, the agency will review implementation of its policy at least annually and make recommendations to the Director of NIST as to whether any improvements are needed.

In Commerce’s written comments, NOAA stated that it will identify additional metrics for monitoring and evaluating its policy. 
The Department of the Interior’s written comments stated that the department plans to implement a biennial scientific integrity survey of USGS employees, beginning in 2020, to gauge scientific integrity policy awareness and effectiveness at USGS, among other things.

Most of the Selected Agencies Have Procedures for Addressing Alleged Violations of Scientific Integrity Policies, but Two Do Not, Raising Questions about the Consistency of Their Investigations

Seven of the nine selected agencies—ARS, EPA, FAA, NIH, NIST, NOAA, and USGS—have specific, documented procedures for identifying and addressing alleged violations of their scientific integrity policies. Although the details of agencies’ procedures may vary, the procedures generally include five basic steps: (1) report allegation, (2) screen allegation, (3) investigate allegation, (4) respond to violation, and (5) appeal decision (see fig. 1). In contrast, two of the nine selected agencies—FE and NASA—do not have specific, documented procedures for identifying and addressing alleged violations of their scientific integrity policies. In March 2009, the President issued a memorandum on scientific integrity that states that each agency should have in place procedures to identify and address instances in which the scientific process or the integrity of scientific and technological information may be compromised. FE, which follows DOE’s scientific integrity policy, does not have specific procedures because DOE has not established any. DOE and FE officials said staff can report allegations to a supervisor, the whistleblower ombudsperson, or the U.S. Office of Special Counsel (OSC). Similarly, NASA officials said employees can report allegations through their chain of command, such as to a supervisor, for investigation on a case-by-case basis. 
However, without documented procedures for identifying and addressing alleged violations of their scientific integrity policies, DOE and NASA do not have assurance that all staff have a clear understanding of how to report allegations and that investigations will be conducted consistently. We recommended the Secretary of Energy and Administrator of NASA develop documented procedures for identifying and addressing alleged violations of their scientific integrity policies. In DOE’s written comments on a draft of our report, the department stated that it will be the responsibility of the scientific integrity official to lead, and coordinate with other elements of the department, in developing procedures for identifying and addressing alleged violations of its scientific integrity policy and estimated completing actions in June 2020. In written comments from NASA, the agency stated that it will develop documented procedures for identifying and addressing alleged violations of its policy and estimated completion by October 2020. Chairwoman Stevens and Chairwoman Sherrill, Ranking Member Baird and Ranking Member Norman, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact John Neumann, Managing Director, Science, Technology Assessment, and Analytics, at (202) 512-6888 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Rob Marek (Assistant Director), Wyatt R. Hundrup (Analyst in Charge), Cheryl Harris, and Douglas G. Hunker. Also contributing to this testimony were Eric Charles and Ben Shouse. Additional staff who made contributions to our April 2019 report are identified in that report. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes the information contained in GAO's April 2019 report, entitled Scientific Integrity Policies: Additional Actions Could Strengthen Integrity of Federal Research (GAO-19-265).

What GAO Found

The nine selected agencies GAO reviewed have taken various actions to help achieve the objectives of their scientific integrity policies in three areas:

Educating staff. Seven of the nine agencies have taken some actions to educate and communicate to staff about their policies, consistent with the 2007 America COMPETES Act. However, the Office of Fossil Energy (FE), which follows the Department of Energy's (DOE) policy, and the National Institute of Standards and Technology (NIST) have not taken action.

Providing oversight. Eight of the nine agencies have a designated official, or the equivalent, to oversee implementation of their scientific integrity policies. However, FE does not have such an official because DOE has not appointed one and currently has no plans or time frame to do so, although DOE policy states that DOE will appoint an official for oversight.

Monitoring and evaluating implementation. Four of the nine agencies have monitored and evaluated implementation of their scientific integrity policies, consistent with federal standards that call for such control activities. However, FE, the Federal Aviation Administration (FAA), NIST, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Geological Survey (USGS) have not undertaken such activities.

Seven of the nine agencies have specific, documented procedures for identifying and addressing alleged violations of their scientific integrity policies. Although the details of agencies' procedures vary, they generally include the steps shown below. However, two agencies—FE, following DOE's policy, and the National Aeronautics and Space Administration (NASA)—do not have documented procedures for identifying and addressing alleged violations. 
A 2009 presidential memo on scientific integrity states that agencies should have procedures to identify and address instances in which the scientific process or the integrity of scientific and technological information may be compromised. Without procedures, FE and NASA do not have assurance that their staff understand how to report allegations and that investigations are conducted consistently.
gao_GAO-20-233
Background

Medicaid is a joint federal-state health care program that provides health care coverage to low-income and medically needy individuals. At the federal level, the Centers for Medicare & Medicaid Services (CMS), within HHS, is responsible for overseeing Medicaid, while states administer their respective Medicaid programs’ day-to-day operations. Each state’s Medicaid program, by law, must cover certain categories of individuals and provide a broad array of benefits. Within these requirements, however, states have significant flexibility to design and implement their programs, resulting in more than 50 distinct state-based programs. The federal government requires coverage for certain mandatory services under Medicaid, but states may decide to include other optional services as well. Among the largest and most commonly covered services are prescription drugs, nursing facility services, home and community-based care, and hospital inpatient care.

Medicaid and Prescription Drugs

Although pharmacy coverage is an optional service under Medicaid, all 50 states and the District of Columbia provide coverage for prescription drugs. State Medicaid programs that opt to cover prescription drugs are generally required to cover all of the outpatient drugs of any drug manufacturer participating in the Medicaid Drug Rebate Program, including all of the drugs’ formats. State Medicaid programs do not directly purchase prescription drugs, but instead reimburse pharmacies for covered prescription drugs dispensed to Medicaid beneficiaries. Providers (including physicians, nurse practitioners, and physician assistants) and pharmacies provide health care services, seek payment, and are reimbursed for services by state Medicaid agencies. States may directly pay health care providers for services rendered using a fee-for-service system or may delegate these responsibilities to MCOs. 
Under managed care, the state contracts with MCOs to provide comprehensive health care services through its network of providers.

MAT Medications

Buprenorphine, buprenorphine-naloxone, and naltrexone may be prescribed, administered, or dispensed for use in MAT. These medications come in a variety of formats, including oral, implantable, and injectable.

Buprenorphine. Buprenorphine is a partial opioid agonist, meaning it binds to and partially activates opioid receptors. It reduces or eliminates opioid withdrawal symptoms, including drug cravings, and blunts the euphoria or dangerous side effects of other opioids, such as heroin. It can be used for detoxification treatment and maintenance therapy. Buprenorphine is available as a MAT medication in two oral formats—(1) tablets for sublingual (under the tongue) administration, and (2) film for sublingual or buccal (inside the cheek) administration; as a subdermal (under the skin) implant; and in an injectable format. Oral formats are often used for beneficiaries who are in the beginning stages of treatment. The implantable format is generally used for beneficiaries who are already stable on a low or moderate dosage of oral buprenorphine. The oral formats are taken daily, while the injectable format is administered monthly, and the implantable format is administered every 6 months. The medication carries a risk of abuse, particularly in oral formats where it can be used inappropriately or illegally re-sold. The injectable and implantable formats of buprenorphine are intended to minimize this risk and to increase beneficiary compliance, because the medication is administered by a provider.

Buprenorphine-naloxone. Naloxone is a medication added to some oral formats of buprenorphine to reduce the chances of misuse or abuse. Buprenorphine-naloxone is available in an oral format as either a film or a tablet. 
It discourages people from inappropriately injecting a crushed and dissolved tablet of buprenorphine by inducing symptoms of opioid withdrawal when injected by individuals physically dependent on opioids.

Naltrexone. Naltrexone is an opioid antagonist, meaning it binds to opioid receptors, but does not activate them, thereby blocking the euphoria the user would normally feel from opioids. It also may result in withdrawal symptoms if recent opioid use has occurred. Therefore, it is used for relapse prevention following complete detoxification from opioids. It can be taken daily in an oral format or as a once-monthly injection; however, due to low patient compliance, SAMHSA does not recommend using oral naltrexone for OUD treatment. Naltrexone carries no known risk of abuse. For the injectable format of naltrexone, beneficiaries have to be free from opioids for at least a week before they can begin the medication.

Prior Authorization and Preferred Drug Lists

Subject to certain requirements, state Medicaid agencies may use different strategies, such as prior authorization and preferred drug lists (PDL), to manage the cost of prescription medications and ensure that patients are taking medications that are clinically appropriate. Prior authorization requires that certain conditions are met before services can be provided to patients, in part, to control utilization and prevent improper payments. PDLs reflect state Medicaid agencies’ determinations on whether medications, including those used for MAT, will be covered and whether these medications will be categorized as preferred or non-preferred. A PDL indicates the first-choice or preferred medication for a beneficiary’s particular medical condition. PDLs are utilized by state Medicaid agencies to incentivize providers to prescribe certain types of medications. In addition, the preferred or non-preferred categorization of medication can vary between fee-for-service and managed care plans within a state. 
If a medication is not listed on a PDL, prior authorization may be required. However, medications listed on a PDL may still have prior authorization requirements, such as requirements to ensure patient safety.

Distribution Methods

There are three distribution methods by which beneficiaries can obtain MAT medications. (See fig. 1.)

1. Retail pharmacy. After receiving a prescription from a health care provider, pharmacists at a retail pharmacy, such as CVS or Walgreens, prepare and dispense (or deliver) the medication directly to a beneficiary. The pharmacist is reimbursed for the cost of the medication by a payer, such as Medicaid or private insurance, and by any co-payment from the beneficiary.

2. External delivery from a specialty pharmacy. After receiving a prescription from a physician or other health care provider, a specialty pharmacy delivers the medication directly to the provider so the medication can be administered (injected or implanted by the provider) to the beneficiary for whom it was prescribed. The specialty pharmacy ensures that any specific requirements for a medication are maintained; for example, injectable naltrexone requires the use of refrigerated warehouses, insulated shipping containers, and temperature monitoring equipment. The specialty pharmacy is reimbursed by Medicaid or another payer.

3. Buy-and-bill. A health care provider purchases the medication from a manufacturer or distributor and stores the medication until it is dispensed or administered to the appropriate patient. After the medication is dispensed or administered, the provider bills Medicaid or another payer for the cost of the medication.

Each of the distribution methods used for MAT medications has characteristics with implications for beneficiaries, providers, pharmacies, and payers. (See table 1.) 
Laws and Regulations for Prescribing Buprenorphine and Buprenorphine-Naloxone

Medications containing buprenorphine, including buprenorphine-naloxone, are considered controlled substances, which are governed at the federal level by the Controlled Substances Act (CSA), and may be subject to state laws as well. The CSA assigns controlled substances—including narcotics, stimulants, depressants, hallucinogens, and anabolic steroids—to one of five schedules based on the substance’s medical use, potential for abuse, and risk of dependence. In addition to the laws and regulations that apply to controlled substances generally, buprenorphine—when used in the treatment of OUD—is subject to additional requirements under the CSA and implementing regulations issued by the Drug Enforcement Administration (DEA) and SAMHSA. Buprenorphine can be administered or dispensed in a SAMHSA-certified and DEA-registered opioid treatment program when used for OUD treatment. In addition, eligible providers may obtain a Drug Addiction Treatment Act of 2000 (DATA 2000) waiver from SAMHSA in order to dispense or prescribe buprenorphine, including buprenorphine-naloxone, to a limited number of patients for OUD treatment in an office-based setting, such as a doctor’s office. Until 2016, only physicians were eligible to receive a DATA 2000 waiver. However, the Comprehensive Addiction and Recovery Act of 2016 amended the CSA to allow nurse practitioners and physician assistants to receive a DATA 2000 waiver through October 1, 2021. In 2018, the SUPPORT Act eliminated the time limit, thereby permanently allowing nurse practitioners and physician assistants to obtain DATA 2000 waivers. To qualify for a waiver, providers must be appropriately licensed under state law and meet applicable certification, training, or experience requirements. Providers who prescribe, dispense, or administer buprenorphine under a DATA 2000 waiver are also subject to the CSA’s inventory and recordkeeping requirements. 
The waiver requirements include the following:

Physicians must complete an 8-hour training course or have certain certifications or experiences, while nurse practitioners and physician assistants must complete a 24-hour training course.

Physicians who receive a DATA 2000 waiver can generally treat 30 patients in their first year and may apply to increase to 100 patients after a year.

Physicians who meet certain criteria can treat 100 patients in the first year and up to 275 patients after one year of prescribing at the 100-patient limit.

Nurse practitioners and physician assistants may treat 30 patients in their first year with the waiver and 100 patients thereafter.

State and Federal Policies, Including Some Related to Distribution Methods, Can Restrict Medicaid Beneficiaries’ Access to MAT Medications

State Medicaid programs have policies related to the coverage and distribution of prescription drugs that can restrict beneficiary access to MAT medications. CMS has undertaken various coordination efforts aimed generally at addressing OUD. Federal requirements and state laws can also restrict beneficiaries’ access to the treatment medications.

State Medicaid Policies on Coverage for MAT Medications, Prior Authorization Requirements, Preferred Drug Lists, and Distribution Methods May Restrict Beneficiary Access

Our review of research and interviews with stakeholders found that several state Medicaid program policies related to prescription drug coverage and distribution can restrict beneficiaries’ access to MAT medications. These are policies governing coverage of MAT medications, prior authorization requirements, preferred drug lists, and limits placed on distribution methods. While some of these policies are generally used to manage utilization and costs related to a wide range of medications, the research we reviewed and stakeholders we interviewed said that these policies can also restrict beneficiaries’ access to the medications used in MAT. 
In what follows, we describe these policies, including selected states’ and CMS’s efforts to address the potential access barriers related to these policies.

Coverage of MAT Medications

Recent research suggests that several state Medicaid programs may not cover all MAT medications in all formats. Specifically, in 2018, SAMHSA reported that while all 50 states’ and the District of Columbia’s Medicaid programs covered oral formats of MAT medications and extended-release injectable naltrexone, it found no indication that 21 states (41 percent) covered implantable buprenorphine, extended-release injectable buprenorphine, or both. CMS officials said that evidence of coverage may be difficult to find if the medications are billed as part of a medical procedure rather than separately as a medication. However, according to the study’s methodology, SAMHSA took steps to check whether the MAT medications were covered as a medical procedure, and did not find any evidence of such coverage. According to CMS officials, all the manufacturers of MAT medications in our review participate in the Medicaid Drug Rebate Program, and as a result, state Medicaid programs are required to cover these medications and all their formats. CMS officials stated that the agency generally investigates complaints about lack of drug coverage, but had not received any complaints regarding MAT medications. In addition, the officials said they were unaware of the SAMHSA report and had not taken action based on the report’s findings. Therefore, CMS lacks the information to confirm whether or to what extent gaps may exist in state Medicaid programs’ coverage of MAT medications in all formats, as SAMHSA’s report indicates. As such, Medicaid beneficiaries undergoing medication-assisted treatment may not have access to the medications they need for treatment and that are required by law to be covered. 
In addition, the SUPPORT Act includes a new requirement for state Medicaid programs to cover medication-assisted treatment, including all Food and Drug Administration-approved MAT medications, from October 2020 through September 2025. CMS officials stated that the agency is drafting guidance related to this requirement and plans to communicate the guidance to state Medicaid programs through a State Medicaid Director Letter prior to October 2020. The officials told us that they have not determined when the guidance will be issued.

Prior Authorization Requirements

When state Medicaid agencies cover a MAT medication, they may impose certain constraints, including requiring prior authorization from the MCO or the state Medicaid agency, before a beneficiary can receive the medication. However, these requirements can have unintended consequences, such as preventing timely access to MAT. According to SAMHSA’s 2018 report, several states use prior authorizations to ensure that patients receive behavioral therapy in addition to their MAT medications or to ensure that patients have abstained from opioids for a certain period of time, which is necessary before receiving a naltrexone injection. Further, when a patient switches from one medication to another (or another format of the same medication), prior authorization may be required for a variety of reasons, such as to ensure patient safety. Officials from a stakeholder organization representing providers and officials from a manufacturer said that prior authorization for injectable buprenorphine was particularly burdensome and that decisions on whether the state Medicaid agency will allow a prescription to be dispensed can take up to 14 days. Providers in our selected states and literature we reviewed noted that these delays could be life threatening, because patients may return to drug use and possibly overdose before receiving their medication. 
We were also told by officials from one manufacturer that small providers may not have the office staff to promptly process the prior authorization paperwork, creating additional delays.

Potential Consequences of OUD Patients Missing Treatment

A health care provider told us that patients receiving treatment for opioid use disorder (OUD) need consistent access to medication-assisted treatment (MAT) medications, just as diabetic patients need consistent access to insulin. According to a survey of providers, patients with OUD who experience delays in their MAT medications could lose motivation for their treatment, which could be life threatening.

Literature we reviewed and stakeholders we interviewed described other ways that prior authorization requirements can delay access to MAT medications for Medicaid beneficiaries and other patients. The examples described include the following:

Talking to patients before authorizing medications. According to literature we reviewed and officials we spoke with from a manufacturer and an organization representing providers, some insurance companies require that their staff or the pharmacist talk to the patient before approving a MAT medication when using external delivery from a specialty pharmacy. This is to affirm that the patient wants the medication and agrees that the pharmacy can bill the state Medicaid program. However, speaking directly to patients can be particularly challenging for this population. Officials representing a manufacturer of a MAT medication and officials representing a health care provider organization noted that patients undergoing residential treatment may not have access to a phone, and patients in outpatient treatment are often encouraged to change their phone numbers to reduce contact with people involved in their past drug use. Also, patients may not answer phone calls from unrecognized numbers.

Medication reauthorization. 
Providers from the District of Columbia told us that they need to reauthorize MAT medication prescriptions every 6 months, but patients may not realize the authorization is about to expire, so they run out of the medication and must wait hours or days to get the new prescription filled. Transportation. Prior authorization requirements for MAT medications can result in multiple trips to the pharmacy, which is problematic for patients and beneficiaries without adequate transportation. Providers from the District of Columbia noted that prescriptions for MAT medications are sometimes not ready when patients arrive at the pharmacy; in such cases, the patient may need to return to the provider and then to the pharmacy again, which can be especially challenging for those who lack adequate transportation. Fail-first requirements. Literature we reviewed, officials we interviewed representing a provider organization, and state Medicaid officials and providers noted that some prior authorization requirements prevent a provider from beginning treatment with certain MAT medications until treatment with other MAT medications has failed. This literature indicated that such treatment failure can increase the risk of drug use, overdose, and death. Some states have taken steps to reduce these access barriers by removing prior authorizations through changes in state policies or laws. Officials from a nonprofit organization specializing in addressing addiction told us that, as of September 1, 2019, at least 12 states had laws that prohibited prior authorizations for substance use disorder medications, including MAT medications. States may also address prior authorizations through other means, such as policies or guidance. Among our selected states, all four have taken steps to remove prior authorization requirements.
The District of Columbia began to generally allow providers to prescribe and dispense MAT medications without prior authorization in April 2019. Minnesota Medicaid officials told us that in August 2018 they removed prior authorization requirements for all MAT medications on their PDL. North Carolina Medicaid officials told us that in November 2017 they eliminated their prior authorization requirement for providers to submit a treatment plan before treating patients with any MAT medication. After the requirement was removed, the officials observed an increase in beneficiaries receiving MAT medications and an increase in the number of providers writing prescriptions for buprenorphine. The officials said North Carolina has never required prior authorization for injectable buprenorphine, but that the state does have some prior authorization requirements for certain forms of oral buprenorphine or buprenorphine-naloxone. Specifically, in order to prescribe an alternative oral medication, the provider needs to demonstrate that the patient tried and failed with, or is medically unable to use, the buprenorphine-naloxone film. Ohio Medicaid officials told us they have no prior authorization requirements for injectable naltrexone, and they removed prior authorization requirements for oral buprenorphine in January 2019. According to the officials, the state has prior authorization requirements for implantable and injectable buprenorphine to ensure patients are initially stable on oral buprenorphine before beginning these other formats. Preferred Drug Lists According to stakeholders we interviewed, having multiple PDLs within a state or changing PDLs can create confusion for health care providers, because they need to keep track of and follow different requirements for the same MAT medication. Such confusion can result in reduced beneficiary access to MAT medications. For example, in the District of Columbia, four MCOs and the fee-for-service program have separate PDLs.
Health care providers in the District of Columbia told us that the four MCOs have different dosage restrictions for the same MAT medications. A stakeholder group representing pharmacies told us that having a uniform PDL for the state makes it easier for pharmacists to comply with the relevant restrictions and minimize delays in accessing MAT medications. In addition, a PDL may change multiple times within a short time frame, which can create further problems for patients who had become comfortable with the medication they had been taking, according to officials from a provider organization. To address any possible confusion due to the use of multiple PDLs, some states have a uniform PDL for their Medicaid programs, which means that all PDLs used in the state cover the same MAT medications in the same way. Uniform PDLs can simplify the process for prescribers and eliminate some confusion for beneficiaries when they switch health plans. For example, Minnesota implemented a uniform PDL in July 2019 to ensure more consistent access for Medicaid beneficiaries and minimize disruptions if a beneficiary changes health plans. In addition, North Carolina plans to institute a uniform PDL when its Medicaid program moves to a managed care model in November 2019. Ohio also plans to institute a uniform PDL across the state in January 2020. Ohio Medicaid officials told us the uniform PDL will have both brand name and generic oral buprenorphine as preferred medications. States’ Distribution Method Policies for MAT Medications According to stakeholders we interviewed, the characteristics of each distribution method, as well as states’ policies on distribution methods, have implications for beneficiary access to MAT medications. The following describe the different ways in which the distribution methods may restrict beneficiaries’ access to MAT medications. Retail pharmacies generally offer access to oral formulations of medications. 
However, retail pharmacies do not typically administer injectable or implantable buprenorphine, and some retail pharmacies may choose not to offer any MAT medications. One survey of physicians found that some pharmacies may either treat individuals prescribed buprenorphine poorly or refuse to carry the MAT medications. External delivery from specialty pharmacies is often used by providers for the injectable or implantable MAT medications, because the specialty pharmacy deals with the administrative responsibilities of the prescription; however, processing delays can impede access to MAT medications through this method, according to literature we reviewed and stakeholders we interviewed. These specialty pharmacies handle the administrative responsibilities of acquiring the medication, including purchasing the medication and sending it to the provider for administration, and receiving reimbursement from the payer, such as Medicaid. Health care providers who administer these medications may still encounter logistical challenges in their acquisition and storage. Other challenges identified by literature and stakeholders include the following: The patient must return to the provider for a follow-up appointment to receive the medication, because the medication is delivered to the provider. However, if the patient does not return, stakeholders—including those representing specialty pharmacies—told us that the unused medication must be disposed of. Providers may face challenges ensuring staff are available to receive medication deliveries—particularly in rural locations or in small practices with multiple office locations that are not always staffed, according to stakeholders representing specialty pharmacies and providers. Prescriptions are not always filled by the specialty pharmacy until they have confirmed that they will be reimbursed by the payer, according to officials from one manufacturer we interviewed. 
The officials stated that when the claim is processed manually, it can take over 20 days to fill the prescription. In contrast, the officials said that if a claim can be processed electronically, payment and delivery of the medication can be almost immediate. Buy-and-bill distribution allows patients to have immediate access to MAT medications, because their provider has the medications in stock; however, some providers prefer not to use this method, because it places them at financial risk. In particular, smaller health care practices may not have the infrastructure or resources to deal with the administrative responsibilities associated with buy-and-bill, and they may not have the financial ability to pay for medications up front and then wait for reimbursement, according to stakeholders we interviewed. For example, one stakeholder we interviewed said that the cost for just 2 to 3 doses of injectable medication obtained through buy-and-bill could take up a significant portion of the profit margin for a smaller medical practice. Furthermore, if patients do not use these medications before they expire or if the reimbursement from the payer does not equal the cost of the medication, the provider may face a financial loss. According to providers we interviewed in our four selected states, the high cost of some medications—as much as $1,200 per dose for injectable medications—makes the financial risk of buy-and-bill too high. Providers also told us that some choose not to store buprenorphine, because they are concerned that they could be subject to a DEA inspection. Surveys of health care providers have found similar concerns related to these inspections. And as with specialty pharmacies, the provider must have someone available to receive deliveries, which can be difficult for smaller practices, according to providers in our selected states.
How Access Barriers Can Affect Opioid Use Disorder Treatment A health care provider we interviewed described how access barriers affected a Medicaid beneficiary’s opioid use disorder treatment. This beneficiary was initially prescribed oral buprenorphine, but the medication was repeatedly stolen by the patient’s partner. The provider and beneficiary agreed that injectable buprenorphine would allow treatment to continue without the risk of theft. Initially, the provider was not able to find a specialty pharmacy with an electronic prescription system compatible with the provider’s system, which was necessary to receive the prescriptions. The provider told us that after a compatible specialty pharmacy was identified and the order was completed, the delivery was further delayed, because two staff members—as required—were not available to sign for the delivery when it arrived. Three months after the decision was made to switch medications, the delivery was completed and the provider administered the medication. Furthermore, state Medicaid policies that require or prevent the use of certain distribution methods for MAT medications can restrict providers from using methods that may be best suited for their patients or practice, which may in turn affect beneficiaries’ access to the medications, according to stakeholders. Medicaid officials and providers we interviewed told us that some states require the use of certain distribution methods when a provider prescribes a MAT medication. For example, Minnesota’s fee-for-service plan (which covers about 25 percent of the state’s Medicaid population) requires that health care providers use buy-and-bill for all physician-administered medications, including those that are injected or implanted.
Minnesota providers we interviewed told us that they are reluctant to prescribe either the injectable or implantable versions of MAT medications, due to payment delays or other problems they experienced when they attempted to use buy-and-bill. Stakeholders told us that access to MAT medications would be maximized if providers and beneficiaries are not restricted when choosing among the three distribution methods—and some states have removed such restrictions. Officials from one manufacturer told us that since 2016, nine states that required use of buy-and-bill for their medication have eliminated those requirements. Medicaid officials in North Carolina told us that because smaller medical practices do not want the inventory costs associated with buy-and-bill, the state has moved to allow providers to obtain the injectable buprenorphine through either buy-and-bill or a specialty pharmacy. According to the officials, this has resulted in the increased use of the medication. Similarly, Medicaid officials in the District of Columbia told us that prior to 2017, injectable MAT medications were only available through buy-and-bill—despite Medicaid reimbursements being lower than providers’ costs. In 2017, these medications became available from specialty pharmacies. CMS Has Undertaken Opioid Coordination Efforts CMS has undertaken various coordination efforts aimed generally at addressing OUD. These efforts include the following: Opioid Steering Committee—composed of CMS senior leadership and staff, according to agency officials—helps coordinate opioid policy across the agency. CMS officials told us that the bi-weekly committee meetings have included discussions about reducing barriers related to prior authorization, other utilization management practices, and implementation of the SUPPORT Act, among other opioid related topics. 
Action Plan to Prevent Opioid Addiction and Enhance Access to MAT—an effort by an interagency task force—is intended to address OUD barriers in Medicaid, among other things, as required by the SUPPORT Act. In September 2019, CMS held a public meeting and requested public input to develop this action plan, which it plans to issue by January 2020, as mandated by the act. State Opioid Workshop, organized by CMS, brought together state officials to share innovative practices and discuss efforts to decrease barriers to accessing treatment for OUD, according to CMS officials. The second such workshop was held in September 2018, and CMS documentation shows that the workshop included sessions focused on MAT, including a session on states’ approaches to improving the availability and use of MAT through benefit, payment, and system design. Informational bulletins have been used by CMS to communicate information states need to manage their Medicaid programs, including recommended actions. For example, in July 2014, CMS issued a bulletin to states providing background information on MAT, examples of state-based initiatives to increase access to MAT, and resources to help ensure proper delivery of MAT services. In January 2016, CMS issued another bulletin that focused on best practices for addressing prescription opioid overdoses, misuse, and addiction, and urged states to take action to reduce the potentially dangerous use of opioids prescribed for pain. While this bulletin was not focused on MAT, it suggested generally that states consider reviewing benefits coverage and service utilization to ensure beneficiaries have sufficient access to MAT services, and indicated that some benefit requirements, such as prior authorizations, can reduce the use of and access to MAT.
Drug Utilization Review Survey, conducted annually by CMS, contains information that the agency publishes on states’ activities related to all prescription drugs in the Medicaid program, including some limited information about MAT medications. Other CMS efforts also addressed Medicaid beneficiaries’ access to MAT. In November 2017, CMS announced a new policy to increase flexibility for states seeking a section 1115 demonstration to improve access to and quality of OUD treatment for Medicaid beneficiaries. CMS has approved section 1115 demonstrations that included OUD- related provisions for 26 states and the District of Columbia between August 2015 and November 2019. States implementing these demonstrations are expected to take action to ensure access to MAT for Medicaid beneficiaries, including by establishing a requirement that inpatient and residential settings provide access to MAT. CMS has also examined access to OUD treatment through its Innovation Accelerator Program, which provides resources to states to introduce delivery system and payment reforms in a variety of areas, including OUD. Requirements for Federal Waivers and State Laws Limit Who Can Administer the Medications Waiver Requirements for Buprenorphine Our review of literature and interviews with stakeholders show that in addition to Medicaid policies, other federal and state policies can limit Medicaid beneficiaries’ access to MAT medications. According to stakeholders we interviewed and literature we reviewed, requirements associated with DATA 2000 waivers may limit the number of providers willing to prescribe or administer buprenorphine for MAT. Stakeholders and the literature note that providers may be reluctant to obtain the DATA 2000 waiver, due to the hours of training associated with the waiver and the cost of registering with the DEA after obtaining the waiver, among other things. 
According to officials from a stakeholder organization representing providers and providers in one of our selected states, the requirements to obtain a DATA 2000 waiver, including the associated hours of required training—ranging from 8 hours for physicians to 24 hours for nurse practitioners and physician assistants—contribute to perceptions that prescribing buprenorphine for the treatment of OUD is dangerous, particularly since waivers are not required to prescribe buprenorphine for pain management. A 2019 National Academy of Sciences report also notes that treatment with buprenorphine is less risky than many other OUD treatments that do not require special training. Another study suggested that other opioids not used in the treatment of OUD—and not requiring special training—are more commonly misused, diverted, or responsible for overdoses, compared with buprenorphine. All providers who prescribe controlled substances are required to register with DEA. For providers with a DATA 2000 waiver who wish to administer injectable or implantable buprenorphine in multiple office locations, the requirement that each office location be registered with DEA may be an additional burden, as the registration fee is $731 for 3 years, according to DEA. DEA requires that the provider pay the registration fee for each location where controlled substances are stored, administered, or dispensed, a cost that might not be recouped if only a small number of patients are treated at the various locations. Stakeholders we interviewed and literature we reviewed also noted a concern among some health care providers that having a waiver would subject them to increased oversight from DEA and other law enforcement agencies. Specifically, officials from an organization representing addiction providers and providers in our selected states told us that the possibility of interaction with law enforcement can intimidate some providers and can be anxiety-provoking and disruptive.
Literature we reviewed reported that this can lead to providers not pursuing a waiver or ceasing to prescribe buprenorphine. Surveys of health care providers have found similar concerns. These factors can create a potential treatment barrier for patients and beneficiaries by limiting the number of available providers, according to officials from an organization representing addiction providers we interviewed and literature we reviewed. The waivers also limit how many patients, including Medicaid beneficiaries, providers may treat with MAT medications. These limits may create an additional barrier to OUD treatment, particularly for providers who specialize in addiction medicine. Studies have consistently found that providers who have waivers treat fewer OUD patients than their waiver allows—and some may not accept new patients. For example: A 2016 study of rural physicians found that more than half of providers with waivers were not accepting new patients; those with a 30-patient waiver limit were treating an average of fewer than nine patients; and more than half of the providers with waivers were not treating any patients. Providers with a 100-patient waiver limit treated an average of 57 patients, although more than one-quarter were at or approaching their patient limit. A survey of physicians, nurse practitioners, and physician assistants who obtained a waiver or increased their patient waiver limit in 2017 found that these providers were treating about one-third of their patient limit. Literature we reviewed noted that providers might not treat the maximum number of patients allowed by their waiver limit, because they are not specialists in addiction medicine, or they do not want to treat a larger number of patients with OUD. These providers may have obtained a waiver to respond to the needs of their existing patients who have OUD, rather than to add new patients. 
In contrast, one of these studies and officials from one organization representing health care providers in addiction medicine we interviewed noted that there are providers who are addiction medicine specialists that cannot work a full-time schedule if they are only allowed to treat 275 patients, which is the maximum allowed under the waiver rules. The study projected that a capacity range of 378 to 524 patients would be necessary for providers to practice addiction medicine full time. CMS officials told us they have taken some steps to increase the number of providers with DATA 2000 waivers through funding new planning grants in 15 states, as authorized by the SUPPORT Act. According to CMS officials, the grants cover training expenses to help providers obtain the waiver, among other things. State Laws That Prevent Certain Providers from Prescribing MAT Medications Federal laws allow certain non-physicians—such as nurse practitioners and physician assistants—to obtain a DATA 2000 waiver to prescribe and administer buprenorphine to treat OUD; however, some states’ laws may restrict their ability to do so. These laws determine the type of health care services that can be provided by different types of providers. According to literature we reviewed and stakeholders we interviewed representing physician assistants and nurse practitioners, some state laws do not allow non-physicians to write prescriptions for any controlled substances and some specifically limit their ability to write prescriptions for buprenorphine for the treatment of OUD, while others may impose no restrictions for non-physicians beyond the federal training and patient limit requirement associated with the DATA 2000 waiver. For example, officials from an organization representing providers reported that physician assistants in some states, such as Kentucky and Tennessee, cannot prescribe buprenorphine for the treatment of OUD. 
Further, according to officials from another organization representing providers, most states require nurse practitioners to be supervised by or have a collaborative agreement with a physician. Thus, to prescribe buprenorphine for MAT, the nurse practitioners in these states must obtain a DATA 2000 waiver and have supervision from or a collaborative agreement with a physician. North Carolina Medicaid officials told us that physician assistants and nurse practitioners in the state that have a DATA 2000 waiver must consult with a physician, but do not need to have direct affiliation with a supervising physician. The supervision requirements can affect patients’ access to MAT, including for Medicaid beneficiaries, according to stakeholders. For example, officials from an organization representing providers told us that some nurse practitioners may find it difficult to identify a qualified physician with whom they can have a collaborative agreement. In the states where nurse practitioners are not required to collaborate with a physician, these officials also told us that they see higher percentages of nurse practitioners prescribing MAT medications. Conclusions HHS has identified expanding access to medication-assisted treatment as a key component of its efforts to reduce opioid use disorder and opioid overdoses. Through our work, we identified state Medicaid policies and federal and state laws that may create barriers to treatment for Medicaid beneficiaries and other patients with OUD by restricting access to MAT medications. We also identified efforts by states and CMS to address these barriers. Under federal law, state Medicaid programs are required to cover all formats of MAT medications reviewed in our study, because all manufacturers of those medications participate in the Medicaid Drug Rebate Program. In addition, the SUPPORT Act will mandate broader coverage of MAT beginning in October 2020. 
However, a study by SAMHSA found that nearly half of all state Medicaid programs do not cover all formats of MAT medications included in our review. Yet, CMS has not taken steps to determine whether state Medicaid programs do cover all of these MAT medications and their formats, as required. Until CMS determines the extent to which state Medicaid programs cover all MAT medications, as required—and addresses coverage gaps when found—Medicaid beneficiaries may not be able to obtain the most effective medications to treat their opioid use disorder. Recommendations for Executive Action We are making the following recommendation to CMS: The Administrator of CMS should determine the extent to which state Medicaid programs are in compliance with federal requirements to cover MAT medications in all formats and take actions to ensure compliance, as appropriate. (Recommendation 1) Agency Comments We provided a draft of this report to HHS for review. HHS provided written comments, which are reprinted in appendix I. HHS also provided technical comments, which we incorporated as appropriate. In its written comments, HHS concurred with our recommendation. Specifically, HHS stated that it will examine the extent to which state Medicaid programs are in compliance with the requirements of the Medicaid Drug Rebate Program as it relates to the coverage of MAT medications and take actions to ensure compliance, as appropriate. HHS also reiterated its plans to develop guidance for states on the SUPPORT Act’s new requirement for states to cover MAT medications. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or DeniganMacauleyM@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Health and Human Services Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Will Simerl (Assistant Director), Carolyn Feis Korman (Analyst-in-Charge), Rebecca Hendrickson, Shirin Hormozi, Virginia Lefever, Drew Long, Leslie McNamara, and Carla Miller made key contributions to this report. Also contributing were Leia Dickerson, Carolyn Garvey, Ethiene Salgado-Rodriguez, and Emily Wilson Schwark.
Why GAO Did This Study Almost 70,000 people died from drug overdoses in 2018, an estimated 69 percent of which involved opioids. Medicaid, a joint federal-state health care program for low-income and medically needy individuals, is one of the largest sources of coverage for individuals undergoing treatment for opioid use disorder. Congress included a provision in statute for GAO to review access barriers to MAT medications, including the distribution methods. This report describes policies that can restrict Medicaid beneficiaries' access to MAT medications, including any related to the distribution methods. To do this work, GAO reviewed relevant laws, policies, and documents, as well as studies describing access barriers and the benefits and challenges of the distribution methods. GAO also interviewed federal officials; stakeholders representing state Medicaid directors, health care providers, patients, and pharmacies; and state officials and health care providers from Minnesota, North Carolina, and Ohio, as well as the District of Columbia. GAO selected these three states and the District of Columbia based on their Medicaid programs' coverage of the MAT medications, their programs' spending for the treatment of opioid use disorder, and other criteria. What GAO Found Medication-assisted treatment (MAT)—which combines behavioral therapy and the use of certain medications, such as buprenorphine—has been shown to be effective at reducing the misuse of or addiction to opioids and increasing treatment retention. The federal government has identified expanding access to MAT as important for reducing opioid use disorders and overdoses, and has taken action to increase access. However, GAO found that some state and federal policies can restrict Medicaid beneficiaries' access to MAT medications. Some of these policies, and three selected states' and the District of Columbia's efforts to address potential access barriers, include the following: MAT medication coverage. 
A 2018 study found that about 40 percent of states may not provide Medicaid coverage for some formats of MAT medications, such as injectable and implantable formats, as required by federal law; however, the Centers for Medicare & Medicaid Services (CMS), which oversees Medicaid, has not determined the extent to which states are in compliance with the federal requirements to cover MAT medications. Prior authorization requirements. Some MAT medications and formats are subject to prior authorization, which requires these medications to be pre-approved before being covered by Medicaid. While these requirements are generally used to reduce expenditures, unnecessary utilization, and improper payments, stakeholders told GAO the requirements may cause life-threatening delays in the case of MAT medications. Some states, including three states and the District of Columbia that GAO reviewed, have taken steps to remove prior authorization requirements for MAT medications. Distribution methods. States may mandate the ways MAT medications can be distributed. For example, Minnesota's fee-for-service plan requires the use of the buy-and-bill distribution method for all injectable and implantable medications. This method requires providers, such as physicians, to purchase and store these medications until administered to the patient, allowing immediate access to the MAT medication for Medicaid beneficiaries. However, for expensive injectable medications, which can cost $1,200 per treatment, this method places providers at financial risk if the medication is not used or the reimbursement is less than the providers' costs, requiring resources some providers may lack, according to providers in the selected states and District of Columbia. As a result, some states have removed such restrictions to maximize beneficiary access. Federal waiver for prescribing buprenorphine. 
According to stakeholders GAO interviewed, some providers are unwilling to obtain the federal waiver necessary to prescribe or administer buprenorphine for opioid use disorder—due to reasons such as the hours of training associated with the waiver—which can restrict beneficiary access to this MAT medication. In addition, while nurse practitioners and physician assistants are eligible for these waivers, some state laws require them to be supervised by a physician. Stakeholders told GAO that some nurse practitioners may find it difficult to identify a qualified physician, which may affect patient access to MAT. What GAO Recommends GAO recommends that CMS determine the extent to which states are in compliance with federal requirements to cover MAT medications, and take action as appropriate. HHS concurred with this recommendation.
Background

Alaska’s location makes the United States an Arctic nation. Alaska has over 6,000 miles of coastline, and is bordered by the Beaufort, Chukchi, and Bering Seas; the Arctic Ocean; and the Bering Strait, whose jurisdiction is divided between the United States and Russia (see fig. 1). According to the 2010 Census, the U.S. Arctic coastal regions are home to about 26,000 people, including the cities of Nome, located near the Bering Strait, and Utqiagvik (formerly Barrow), the northernmost city in the United States. The U.S. Arctic coastal region is sparsely populated even by the standards of Alaska, which has the lowest population density of any state in the nation. Specifically, this region accounted for about 4 percent of Alaska’s total population of approximately 710,000 according to the 2010 Census. Alaska is also the largest state in the nation, and—given its size, terrain, environment, and population distribution—its transportation system is unique. Much of Alaska’s rail and highway infrastructure is located in the south central part of the state, and many U.S. Arctic cities and villages are accessible only by air or water. As Arctic waterways become more accessible due to declining sea ice, opportunities have increased to use maritime transportation to bring natural resources to market. The U.S. Arctic remains a frontier economy; many of its products and much of the value of commercial activities derive from natural resources. According to an assessment of undiscovered but technically recoverable oil and gas resources by the Bureau of Ocean Energy Management, the outer continental shelf regions of the U.S. Arctic’s Chukchi and Beaufort Seas contain about 24 billion barrels of oil and about 105 trillion cubic feet of natural gas. The U.S. Arctic also contains $1 trillion worth of minerals, such as zinc, nickel, and lead.
The extraction of these natural resources presents technical challenges and requires large financial investments given the Arctic environment. Although warming over the past decades has made trans-Arctic maritime routes more accessible, Arctic sea ice extent remains seasonal, with most shipping occurring during a narrow window extending from summer to early fall. Arctic sea ice typically reaches its maximum extent in March and its minimum in September each year; as a result, the shipping season is typically from June through October. The minimum sea ice extent in September 2019 was tied with 2007 and 2016 as the second lowest on record since satellite observations began in 1979; the 13 lowest extents in the satellite record have all occurred in the last 13 years. As shown in figure 2, the September (minimum) sea ice extent in 2019 had a much smaller coverage area than the median September extent from 1981 to 2010. This contraction of sea ice over time has increased accessibility to the two key trans-Arctic maritime routes: the Northwest Passage (NWP) through the Canadian archipelago, and the Northern Sea Route (NSR) along the northern border of Russia. These two routes enable shipments between non-Arctic destinations, such as between Asia and Europe. However, most traffic in the U.S. Arctic is destinational, meaning it transports goods to and from the U.S. Arctic. Such traffic includes transporting natural resources extracted from the U.S. Arctic to the global marketplace and shipping supplies to U.S. Arctic communities. Maritime shipping in the U.S. Arctic involves challenges, given that the region lacks many of the typical elements of a maritime transportation system. See table 1 for examples of the types of maritime infrastructure gaps that CMTS and federal agencies have reported in the U.S. Arctic. Many federal agencies are involved with, and have a role in, U.S. Arctic maritime shipping and infrastructure (see table 2). 
Although these agencies’ missions are not specifically tied to the U.S. Arctic, they extend to the U.S. Arctic like any other geographic region of the country. Other state, local, and international organizations also play a role. For example, the state of Alaska’s Department of Environmental Conservation is involved with oil spill response. In addition, the North Slope Borough, a municipal government that encompasses an area of nearly 95,000 square miles along Alaska’s northern coast, has a search and rescue department that provides airborne emergency response. Alaska Native organizations represent communities that have inhabited the Arctic region for thousands of years and have cultures that are particularly sensitive to environmental changes, since they rely on hunting animals such as whales, seals, and walruses. To represent local concerns, the Arctic Waterways Safety Committee, which is composed of subsistence hunters and others, was created in October 2014 to develop best practices for safe and efficient use of Arctic waterways. Alaska Native Corporations are private entities that manage land and assets on behalf of Alaska Natives. Lastly, international forums such as the Arctic Council and international organizations such as the IMO also have a role in establishing Arctic maritime policies and regulations. For nearly 50 years, the U.S. government has articulated its interest in the Arctic through a series of strategies. For example, in 1971 a then-classified memo from the National Security Council (NSC) under the Nixon Administration called for the sound and rational development of the Arctic, guided by the principles of minimizing adverse environmental effects, promoting international cooperation, and protecting security interests, including the preservation of the freedom of the seas. These same priorities, along with promoting scientific research, were underscored by the Reagan Administration in 1983. In January 2009, the George W.
Bush Administration issued an Arctic Region Policy, which outlined priorities for maritime transportation in the Arctic, including facilitating safe, secure, and reliable navigation and protecting maritime commerce and the environment. More recently, the Obama Administration issued a National Strategy for the Arctic Region (National Strategy) in May 2013, which identified three goals for the region: to advance U.S. security interests, pursue responsible stewardship, and strengthen international cooperation. Subsequent implementation plans for the National Strategy indicated maritime shipping and infrastructure fell under all three of these stated goals. For example, “preparing for increased activity in the maritime domain” fell under advancing U.S. security interests, “charting the Arctic region” fell under pursuing responsible stewardship, and “promoting waterways management” fell under strengthening international cooperation. As federal strategies related to the Arctic region have evolved over the years, so have the interagency groups to implement and guide these efforts. Interagency activity in the U.S. Arctic has historically been coordinated through the NSC, including the 1971 and 1983 strategies. In 1984, legislation established the U.S. Arctic Research Commission as well as the Interagency Arctic Research Policy Committee (IARPC). More recently, to enhance coordination of national efforts in the Arctic, particularly those related to the 2013 National Strategy, a 2015 Executive Order established the interagency Arctic Executive Steering Committee (AESC). AESC is chaired by the Director of the Office of Science and Technology Policy (OSTP), which is an office within the White House that leads interagency science and technology policy coordination efforts. AESC also includes NSC as a member, along with 20 other federal departments and entities.
The 2016 National Strategy Implementation Framework assigned portions of the strategy’s areas of focus to interagency groups; specifically, NSC was assigned responsibility for advancing national security interests, OSTP for pursuing responsible stewardship, and the Department of State for strengthening international cooperation. The U.S. Committee on the Marine Transportation System (CMTS), which was required in 2010 to coordinate the establishment of domestic transportation policies in the Arctic to ensure safe and secure maritime shipping, has issued several relevant reports, including a 10-year projection of maritime activity in 2015, and a 10-year prioritization of infrastructure needs in the U.S. Arctic in 2016—both of which were directed by the 2014 National Strategy implementation plan. More recently, CMTS issued a 2018 report revisiting its 2016 near-term recommendations for prioritizing infrastructure needs in the U.S. Arctic and a 2019 update to its projections of Arctic maritime shipping activity from 2020 to 2030. These and other reports addressing maritime infrastructure in the U.S. Arctic are listed in appendix I.

Maritime Shipping in the U.S. Arctic Generally Increased from 2009 through 2019 but Remains Limited and Was Affected by Several Factors

Maritime Shipping in the U.S. Arctic Increased from 2009 through 2019 with a Range of Vessel Types Represented

U.S. Coast Guard data indicate the number of vessels in the U.S. Arctic increased from 2009 through 2019 (see fig. 3). The types of vessels the U.S. Coast Guard tracks in the U.S. Arctic include vessels conducting marine scientific research; tugs that provide communities with supplies; and adventurer vessels such as private yachts. U.S. Coast Guard data also include bulk cargo vessels from the Red Dog mine, one of the largest zinc mines in the world. The mine trucks its zinc ore to a facility on the Chukchi Sea, where it is stored for maritime transport during the shipping season. The U.S.
Coast Guard District responsible for the U.S. Arctic counts more types of vessels in its area of interest—such as research, tug, and adventurer—than are typically counted for the purposes of tracking commercial shipping. Even at its peak, maritime shipping in the U.S. Arctic remained limited compared to global commercial shipping, although CMTS recently reported that the number of flag states, or countries where vessels are registered, has increased. Specifically, the 307 vessels in the U.S. Arctic in 2019 represented a small portion of the total number of shipping vessels operating globally. For comparison, according to the United Nations Conference on Trade and Development, in 2015 the world fleet of commercial shipping vessels was approximately 89,000. However, in its 2019 traffic projections report, CMTS analyzed U.S. Coast Guard data and other data sources and found that between January 2015 and December 2017, the number of flag states in the U.S. Arctic increased. CMTS noted this indicates a shift away from regionally focused operators toward a more diverse and international set of operators. CMTS found that the majority of vessels were flagged to the United States (about 41 percent) or Russia (about 24 percent) over this time period, with the remaining 35 percent from 35 other flag states, each with a considerably smaller percentage than the United States or Russia. Given that a single vessel can make multiple trips per shipping season, U.S. Coast Guard also measures maritime activity by the number of transits that vessels make per year through the Bering Strait, a key convergence point for trans-Arctic routes that connects the NWP and NSR to the Pacific Ocean. According to U.S. Coast Guard data, the number of transits through the Bering Strait has ranged from as few as 280 in 2009, to as many as 514 in 2015 (see fig. 4). 
There were far fewer transits through the Bering Strait than through some other convergence points for established major maritime transportation routes that have more developed maritime infrastructure. For example, the number of transits through the Panama Canal, which like the NWP connects the Atlantic and Pacific Oceans, was almost 14,000 in 2018 and the number of vessels that transited the Suez Canal, which like the NSR enables shipping between Asia and Europe, was over 18,000.

Factors Affecting Arctic Maritime Shipping Included Changes in Domestic and International Demand and Unpredictable Conditions

Stakeholders told us that along with factors such as demand that shape shipping trends worldwide, factors unique to the Arctic also play a role, such as potential cost savings due to shorter routes; additional operating costs incurred by Arctic-capable ships; environmental hazards like unpredictable weather and sea ice; and a lack of maritime infrastructure typically found along shipping routes. The 20 stakeholders we interviewed, representing the shipping industry, research institutions, and state, local, and Alaska Native groups, among others, described the following factors that affect U.S. Arctic maritime shipping.

Domestic and International Demand

As mentioned earlier, diminished sea ice has presented opportunities for maritime shipping of natural resources extracted from the Arctic, such as oil, gas, and minerals. However, such activities decreased domestically after Royal Dutch Shell, PLC (Shell) discontinued its offshore oil and gas exploration of the Burger prospect in Alaska’s Chukchi Sea in 2015. As shown in figure 4, the number of transits in the Bering Strait steadily declined from 514 in 2015 to 369 in 2018. Specifically, CMTS reported that Shell demobilized its drill ship, anchor handling vessels, and anti-pollution ships from the study area prior to the start of the 2016 shipping season.
One stakeholder said there was a reduction in the number of seasonal transits after Shell suspended exploration activities, since Shell had previously accounted for more than a hundred transits through the Bering Strait. Other traffic related to domestic natural resource extraction stayed at consistent levels. Specifically, representatives from the Red Dog zinc mine reported that from 1999 to 2019 they consistently shipped between 21 and 26 cargo vessels per year, averaging 24 vessels per year over the 20-year period. Meanwhile, several stakeholders said international activities related to natural resource development, particularly in the Russian Arctic, have recently increased, and that Russia has been investing heavily in Arctic infrastructure. The U.S. Coast Guard attributed increased cargo traffic levels in 2016 to construction projects in the Russian Arctic, particularly a liquefied natural gas (LNG) facility on the Yamal peninsula (see fig. 3 above). In 2017, a Russian LNG tanker, the Christophe de Margerie, became the first ship to transit the NSR without being accompanied by an icebreaker. Demand for tourism cruises in the U.S. Arctic has increased slightly in recent years. A representative from an Arctic cruise industry association told us that the overall cruise industry worldwide grows 5 to 10 percent a year and that there is growing demand for expedition cruises to farther-flung areas like the Arctic. In both 2016 and 2017, the cruise ship Crystal Serenity transited the NWP with over a thousand passengers on board. Stakeholders noted that cruise ship voyages in the U.S. Arctic, such as the Crystal Serenity voyages, raised concerns for passenger safety given the lack of infrastructure, particularly for search and rescue. However, according to an Arctic cruise industry association representative, the number of smaller ships purpose-built for Arctic conditions is growing; the association estimates 25 to 30 such vessels are under construction.
Domestic and foreign research vessels have also increased in number in the U.S. Arctic due to greater interest in the region’s changing environment. For example, according to National Science Foundation officials, their polar-capable vessel Sikuliaq entered service in 2016. Internationally, China has increased its activity in the Arctic since gaining observer status on the Arctic Council in 2013 and now operates two icebreaking research vessels. One stakeholder said that such investments by countries such as China may be the first step towards achieving longer-term economic goals for those countries.

Cost of Operations

Trans-Arctic routes can reduce travel time between certain destinations compared to traditional routes and may therefore reduce fuel and labor costs. For example, the route from Shanghai, China, to Northwestern Europe via the NSR is 27 percent shorter than via the Suez Canal. The operators of the Russian LNG tanker that transited the NSR in 2017, the Christophe de Margerie, reported they completed the journey in 19 days, 30 percent faster than via the Suez Canal. For reasons such as these, according to news reports, Russia has announced plans to develop the NSR and ship 80 million tons of goods through the route by 2024. Similarly, an official from a Canadian ship owner and operator told us that, depending on the vessel’s origin and destination, using the NWP can be 10-15 days faster than using the Panama Canal, resulting in cost savings of $100,000 to $150,000. Although trans-Arctic routes have the potential for cost savings due to shorter distances, they require additional investments not necessary for traditional routes that may offset those savings. For example, representatives of one carrier said Arctic-capable ships cost three to four times more than ordinary ships because they require more steel and higher power output to withstand ice conditions.
The additional steel also limits the amount of cargo the vessel can carry; representatives from another carrier noted every ton of steel used to construct the ship is a ton of cargo that the ship cannot carry in order to recoup expenses. The size of vessels that can safely operate in the region is also constrained by draft limitations that specify the maximum weight and size at which ships can navigate the shallow waters of the Arctic. By contrast, the trend among ocean carriers over the past decades, which have capitalized on advances in fuel-efficient engine technology, is toward constructing increasingly larger vessels to capture economies of scale. In addition, stakeholders told us that shippers operating in the Arctic must invest in special onboard equipment and prepare for contingencies due to the lack of maritime infrastructure usually found in traditional routes, such as deep-draft ports, harbors of refuge, reliable communications, and search and rescue infrastructure. Stakeholders noted Arctic voyages also require additional training for crew members on navigating in ice conditions. Shippers must determine whether the cost savings obtained from shorter trans-Arctic routes outweigh the additional operating expenses. For example, although Maersk, one of the largest shipping companies in the world, successfully completed a trial passage of a container ship through the NSR in September 2018, the company emphasized at the time that the transit was a “one-off trial designed to gain operational experience in a new area and to test vessel systems” and that it did not view the route as a commercially viable alternative to existing routes. In a press release, Maersk noted that the NSR was only feasible for around 3 months a year and required the use of more costly ice-classed vessels. 
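The trade-off described above, between time savings on a shorter route and the extra costs Arctic operations impose, can be sketched as a simple back-of-the-envelope calculation. All figures below (daily operating cost, voyage durations, per-voyage Arctic extras) are hypothetical illustrations loosely anchored to examples in the text, not GAO or carrier estimates.

```python
# Illustrative comparison of a trans-Arctic route versus a traditional
# canal route. A shorter route saves time-based operating cost, but
# Arctic-specific extras (ice-class premium amortized per voyage, crew
# ice training, contingency equipment) may offset those savings.

def voyage_cost(days, daily_operating_cost, extra_arctic_costs=0):
    """Total voyage cost: time-based operating cost plus any
    Arctic-specific per-voyage extras."""
    return days * daily_operating_cost + extra_arctic_costs

# Hypothetical inputs:
daily_cost = 10_000      # $/day to operate the vessel
canal_days = 35          # traditional route, e.g., via the Panama Canal
arctic_days = 25         # trans-Arctic route, ~10 days shorter
arctic_extras = 60_000   # per-voyage Arctic-specific costs

canal = voyage_cost(canal_days, daily_cost)
arctic = voyage_cost(arctic_days, daily_cost, arctic_extras)

print(f"Canal route:  ${canal:,}")   # $350,000
print(f"Arctic route: ${arctic:,}")  # $310,000
print(f"Net savings:  ${canal - arctic:,}")  # positive only if time savings outweigh extras
```

With these assumed numbers the Arctic route still nets a savings, but raising the per-voyage extras above the $100,000 value of the time saved would flip the result, which mirrors the commercial-viability calculus carriers such as Maersk described.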
Despite this, news reports in June 2019 indicate Maersk is exploring sending more goods through the NSR in cooperation with a Russian icebreaker company in response to demand for the transport of goods from Asia to West Russia.

Unpredictable Conditions and Lack of Infrastructure

Although diminished sea ice has prolonged the shipping season and opened up shipping routes, environmental changes have also resulted in less predictable conditions, such as more volatile weather and sea ice. One stakeholder involved with Arctic research noted that the conditions that have led to open waters can also lead to harsher conditions such as strong low pressure systems, gale force winds, and storms. Such conditions pose challenges for shipping—one shipper representative said that it is difficult to load barges in shallow waters and that typically loading and unloading activities have to be suspended with swells above 3 feet. In addition, stakeholders told us variation in ice conditions from year to year makes planning Arctic voyages difficult to do with reasonable accuracy. For example, while warming trends might suggest that overall sea ice diminishes further each year, one carrier representative noted its vessel encountered severe ice conditions in June 2018. This representative noted that diminished overall ice coverage can lead to localized conditions with more mobile and older ice migrating into shipping lanes. The unpredictable and harsh weather and ice conditions, combined with the vast distances and lack of maritime infrastructure, pose safety risks. For example, according to stakeholders, the “tyranny of distance” in the Arctic stretches the limited search and rescue capabilities, resulting in slow incident response. Furthermore, the lack of a designated harbor of refuge means vessels do not have a place to moor safely in case of emergency.
As a result, a representative from the International Union of Marine Insurance noted that in the Arctic even a minor incident, such as a small engine failure, can result in substantial damages and even loss of life.

Environmental Concerns

Some stakeholders we interviewed expressed concerns about the impacts of shipping on wildlife, including the species that Alaska Natives rely on for food. One stakeholder noted that awareness has grown in the past 10 years of the environmental impact of shipping. Such impacts include emissions containing sulphur oxide and black carbon from ships’ engines that could damage the fragile Arctic ecosystem. As a result of such environmental concerns, the IMO is currently considering a ban on heavy fuel oil in the Arctic. In addition, in 2019 several major carriers, including CMA CGM, Hapag-Lloyd, and Mediterranean Shipping Company, announced they would not pursue trans-Arctic shipping. Furthermore, in 2019 Nike and Ocean Conservancy launched the Arctic Corporate Shipping Pledge, a voluntary commitment by consumer goods and shipping logistics companies to not send ships through the Arctic. The pledge also supports precautionary Arctic shipping practices to enhance the environment and human safety, which may include a heavy fuel oil ban and an evaluation of low impact shipping corridors that protect important ecological and cultural areas. A representative of one carrier we spoke with said a heavy fuel oil ban in the Arctic could increase the cost of transporting cargo and, as a result, severely impact shipping in the region.

Agencies Have Taken Some Steps to Address Gaps in Maritime Infrastructure in the U.S.
Arctic, but Federal Efforts Lack a Current Strategy and Consistent Leadership

While agencies have taken actions to address maritime infrastructure gaps, federal efforts lack (1) a government-wide assessment of risks posed by gaps in maritime infrastructure, (2) a current government-wide strategy for addressing maritime infrastructure that includes goals, performance measures, and appropriate responses to prioritized risks, and (3) an interagency mechanism and consistent leadership to guide agency actions related to maritime infrastructure. Without these elements, federal agencies may lack information on which to base decisions and prioritize actions, assurance that their investments are directed to prioritized risks, and the ability to demonstrate progress in addressing maritime infrastructure. Furthermore, agencies may miss opportunities to work together and leverage resources towards achieving broader outcomes.

Agencies Have Taken Actions to Address Navigation and Other Gaps in U.S. Arctic Infrastructure

Agencies have taken some actions since 2013, when CMTS first reported on gaps in U.S. Arctic infrastructure. For example, U.S. Coast Guard reported that it has taken a flexible approach to addressing infrastructure gaps by establishing seasonal, forward operating bases in the U.S. Arctic as needed to provide search and rescue support in areas where major shipping activity is occurring. See table 3 for selected examples of agency actions.

Federal Efforts Lack a Government-wide Risk Assessment to Inform Decisions in the U.S. Arctic

Although federal agencies have taken some steps to address gaps in U.S. Arctic infrastructure, those efforts are not based on a government-wide assessment of the economic, environmental, and safety risks posed by maritime infrastructure gaps to inform investment decisions in the U.S. Arctic.
Rather, agency officials said that they currently base Arctic infrastructure decisions on their agency-specific missions, strategies, and collaborative efforts. Specifically, agency officials said that securing the resources to address U.S. Arctic infrastructure is challenging because such projects must compete with other established agency mission areas. For example, officials told us that infrastructure investments may not compete as well against other agency-established priorities in other parts of the country, in part, because the Arctic is an emerging region and because of the considerable costs of developing infrastructure in the harsh Arctic environment. Leading management practices we reviewed note the importance of assessing risks in order to select and prioritize countermeasures to prevent or mitigate risks. A 2016 Office of Management and Budget (OMB) circular emphasized the importance of risk assessment and called for agencies to use a structured and systematic approach to identify risks and assess the causes, sources, probability of the risk occurring, and potential outcomes, and then prioritize the results of the analysis. Such an approach can be used by decision makers to evaluate the effectiveness of, and to prioritize, countermeasures relative to the associated costs. Risk management is a widely endorsed strategy for helping policymakers make decisions about allocating finite resources and taking actions in conditions of uncertainty. Such a framework is especially applicable to the U.S. Arctic given the uncertain conditions in the region and safety and environmental risks described above. Without a risk assessment, agencies lack assurance that their investments are addressing the highest-priority risks. In particular, we found that agencies’ actions to address maritime infrastructure gaps were not fully consistent with the areas that the stakeholders we interviewed identified as the most critical (see fig. 5). 
For example, 11 of the 20 stakeholders we interviewed identified charting Arctic waters as the highest priority to address, and in May 2019 NOAA reported that it had acquired nearly 1,500 square nautical miles of hydrographic survey data in the Arctic over the prior 3 years. This is, however, less than 1 percent of the over 200,000 square nautical miles of waters NOAA has identified as significant to navigation in the U.S. Arctic. In addition, nine stakeholders identified addressing gaps in communications in the U.S. Arctic as a key priority. However, CMTS reports indicate no change in the status of communications capabilities in the U.S. Arctic between 2013 and 2018. CMTS has in the past noted the importance of conducting a risk assessment to inform Arctic decision-making. Specifically, CMTS’s 2013 report noted that greater access to the U.S. Arctic and increased activity presents additional risks for people, vessels, and the environment in the fragile region and that managing that risk requires an in-depth understanding of the issues and trade-offs associated with key decisions. Although CMTS reported that developing an assessment tool that provides a quantifiable level of risk and that accounts for the unique risk elements in the Arctic was a challenge for the nation, it proposed a model for determining risk that considered the likelihood of adverse events actually occurring, vulnerability to damage, and potential consequences. CMTS further stated that, given the rate at which other nations are progressing with Arctic shipping and development, the United States should decide the acceptable degree of risk for Arctic operations. Although CMTS has provided useful information on maritime infrastructure gaps to decision makers and described possible risks to the U.S. Arctic, it has not systematically assessed the risks posed by these gaps. 
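The kind of model CMTS proposed, which considered the likelihood of adverse events actually occurring, vulnerability to damage, and potential consequences, can be sketched as a simple scoring exercise. The gap names, scales, and scores below are hypothetical illustrations only, not CMTS data or GAO findings.

```python
# Hypothetical risk-scoring sketch in the spirit of the model CMTS
# described (likelihood x vulnerability x consequence). The gap names
# and 1-5 scores below are illustrative assumptions, not real data.

gaps = {
    # gap: (likelihood, vulnerability, consequence), each scored 1-5
    "uncharted waters":  (4, 5, 5),
    "search and rescue": (3, 4, 5),
    "communications":    (4, 4, 3),
    "harbor of refuge":  (2, 4, 4),
}

def risk_score(likelihood, vulnerability, consequence):
    """Composite score; a higher value suggests a higher-priority gap."""
    return likelihood * vulnerability * consequence

# Rank gaps from highest to lowest composite risk.
ranked = sorted(gaps, key=lambda g: risk_score(*gaps[g]), reverse=True)
for gap in ranked:
    print(f"{risk_score(*gaps[gap]):>3}  {gap}")
```

Multiplicative scoring is one common convention for combining risk factors; an actual government-wide assessment would need validated inputs from member agencies and might weight the factors differently or treat them on separate scales.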
For example, in 2016, CMTS made near-, mid-, and long-term recommendations for addressing maritime infrastructure needs, but noted this ordering was not intended to create a hierarchy of infrastructure needs from most to least important. CMTS officials told us that they have not systematically assessed risks posed by maritime infrastructure gaps in the U.S. Arctic because CMTS’s priorities are established by its member agencies, and that CMTS has not been directed to conduct such an assessment by its members. However, CMTS is required by statute to, among other things, coordinate the establishment of domestic transportation policies in the Arctic to ensure safe and secure maritime shipping and make recommendations with regard to federal policies that impact the marine transportation system. Furthermore, according to CMTS officials, there is nothing in CMTS’s authority that would prevent it from doing a risk assessment. Given its previous reports and work in the U.S. Arctic and its coordinating role with its member agencies, CMTS is well suited to conduct a government-wide assessment of the risks posed by gaps in maritime infrastructure in the U.S. Arctic. For example, CMTS published a traffic projections report in September 2019 that aimed to provide decision makers with a wide-ranging portrait of potential changes in vessel activity in the U.S. Arctic over the next decade. To inform its risk assessment, CMTS can draw on the expertise of its member agencies, such as U.S. Coast Guard and NOAA. For example, U.S. Coast Guard officials told us that they have responded to the unpredictable economic changes in the U.S. Arctic—including fluctuations in the level and type of maritime activity in the region—by investing in icebreakers and seasonal forward operating bases, rather than developing permanent infrastructure. In addition, CMTS can also draw on numerous reports discussing maritime infrastructure in the U.S.
Arctic that have been published since 2013, as detailed in appendix I. For example, in 2019 the University of Alaska’s Arctic Domain Awareness Center held a series of workshops on the factors that impact the ability of the U.S. Coast Guard and other stakeholders to conduct safe, secure, and effective operations in the Arctic environment. A government-wide risk assessment could better enable agencies to evaluate potential U.S. Arctic infrastructure expenditures and assess the extent to which these expenditures will mitigate identified risks. For example, a report on the U.S. Coast Guard’s Arctic capabilities suggested that a systematic analysis of needs and risks in the U.S. Arctic could help the agency generate momentum for closing Arctic capability gaps. By conducting such a risk assessment, agencies would have better information on which to base decisions for agency expenditures in the U.S. Arctic and prioritize appropriate actions in response to risks.

Federal Interagency Efforts Lack a Current Strategy and Consistent Interagency Leadership to Guide Agency Actions Related to Maritime Infrastructure

We found that the federal interagency efforts to address U.S. Arctic maritime infrastructure lack a current strategy and consistent interagency leadership to guide agencies’ actions. Although several agencies have developed strategies to guide their own agencies’ efforts, these do not provide government-wide direction or establish current government-wide goals, objectives, and performance measures as leading management practices call for. Moreover, the federal agencies lack clarity on which interagency entity is responsible for guiding federal efforts and providing consistent leadership to ensure government-wide objectives are met.

Current Strategy

The federal government lacks a current government-wide strategy for addressing U.S.
Arctic maritime infrastructure gaps that includes key characteristics, such as goals, objectives, and performance measures, and appropriate responses to risks. Agency officials and stakeholders said the 2013 National Strategy is outdated because conditions in the U.S. Arctic have changed since 2013. In particular, agency officials said national security is a growing concern in the Arctic. OSTP officials agreed that conditions had changed in the Arctic, but could not state whether the 2013 National Strategy was still current. Our past work on interagency collaboration found that written agreements documenting how participating agencies collaborate, such as strategies, are most effective when they are regularly updated and monitored. The changing conditions in the Arctic described above make a current government-wide strategy for maritime infrastructure in the U.S. Arctic particularly important. In our past work, we have reported that complex interagency efforts— such as those to address maritime infrastructure in the U.S. Arctic—can benefit from developing a national strategy. Our prior work has identified key characteristics of national strategies, which we refer to in this report as a government-wide strategy, including: (1) problem definition and risk assessment which addresses the threats the strategy is directed towards; and (2) goals, objectives, and performance measures to gauge and monitor results. Furthermore, our prior enterprise risk management work has noted that risk assessment should include a ranking of risks based on priorities in relation to strategic objectives, and that senior leaders should determine if a risk requires treatment or not based on risk tolerance or appetite. Leaders then review the prioritized list of risks and select the most appropriate response to address the risk. 
These key characteristics help managers determine the extent of investment needed and facilitate effective targeting of federal resources; this is especially important when multiple agencies are involved, as is the case with maritime infrastructure in the U.S. Arctic. Although several federal agencies have recently updated their Arctic strategies, these agency-specific Arctic strategies are not linked to a current government-wide strategy for the Arctic region and are not specifically focused on addressing Arctic maritime infrastructure gaps. Specifically:

U.S. Coast Guard. In April 2019, U.S. Coast Guard published its Arctic Strategic Outlook, which supersedes its 2013 Arctic Strategy. The updated strategy established three lines of effort: (1) enhance capability to operate effectively in a dynamic Arctic domain, (2) strengthen the rules-based order, and (3) innovate and adapt to promote resilience and prosperity. We recommended in 2016 that the U.S. Coast Guard develop measures for assessing how its actions have helped to mitigate Arctic capability gaps and design and implement a process to systematically assess its progress. However, as of February 2020, the U.S. Coast Guard has not implemented these recommendations. U.S. Coast Guard officials state that they are currently developing an implementation plan and Strategic Metrics Framework for the Arctic Strategic Outlook.

U.S. Navy. In January 2019, the U.S. Navy published its Arctic strategy, which updated its previous strategy from 2014. The updated strategy included the following strategic objectives: defend U.S. sovereignty and the homeland from attack; ensure the Arctic remains a stable, conflict-free region; preserve freedom of the seas; and promote partnerships to achieve the above objectives.

Department of Defense.
In June 2019, the Department of Defense updated its 2016 Arctic strategy, which included the following as part of its approach: (1) building Arctic awareness, (2) enhancing Arctic operations, and (3) strengthening the rules-based order in the Arctic.

NOAA. NOAA officials originally told us that they were working to complete an update to NOAA’s strategic plan for the Arctic in 2019. However, as of February 2020, officials told us that this update is currently on hold pending the completion of a new government-wide National Strategy. As mentioned previously, OSTP staff said they could not state whether the 2013 National Strategy was still current, and OSTP provided no additional information as to whether a new strategy was in development. NOAA officials told us that the agency’s current three priorities in the Arctic are (1) weather and water (including weather and water research, observations, and Arctic contributions to global weather monitoring); (2) blue economy (including ocean mapping, seafood competitiveness, tourism, and coastal resilience); and (3) innovative partnerships in Alaska and the Arctic.

CMTS has taken some steps to monitor agencies’ progress in addressing maritime infrastructure, but the current lack of performance measures makes it difficult to monitor agencies’ progress over time. We reported in 2014 that CMTS was developing a process to monitor such progress and noted that such monitoring would help agencies develop a shared understanding of current priorities and actions needed. However, while CMTS did issue reports that described the status of maritime infrastructure in the U.S. Arctic in 2016 and 2018, the reports did not include goals or performance measures to assess agencies’ progress.
According to officials, CMTS did not develop or include performance measures to monitor agencies’ progress because it does not have the authority to designate agencies’ priorities and because agencies are best situated to identify priorities in pursuit of their individual missions. Priorities in the U.S. Arctic are currently based on each agency’s mission, which makes it difficult to take a government-wide approach to responding to risks. To improve unity of effort, the U.S. Coast Guard has expressed support for a national approach to Arctic planning in both its 2013 and 2019 Arctic strategies. Without a current government-wide strategy that includes goals and objectives, agencies lack assurance that their investments are directed to prioritized risks. Furthermore, without performance measures, agencies are not able to demonstrate, and decision makers are unable to monitor, the extent to which agency actions have addressed maritime infrastructure gaps.

Interagency Leadership

We have previously reported that federal agencies use a variety of mechanisms, including interagency groups, to implement interagency collaborative efforts and that such mechanisms benefit from key features such as sustained leadership and inclusion of all relevant participants, such as stakeholders. We also reported that leadership should be sustained over time to provide continuity for long-term efforts and that having top-level commitment from the President, Congress, or other high-level officials can strengthen the effectiveness of interagency collaborative groups. We also found that lack of continuity is a frequent issue with interagency mechanisms that are tied to the Executive Office of the President, particularly when administrations change, and that transitions and inconsistent leadership can weaken a collaborative mechanism’s effectiveness.
In addition, our prior work has noted the importance of ensuring that all relevant participants are included in the collaborative effort, such as participants with the appropriate knowledge, skills, and abilities to contribute. There are many interagency groups involved in the U.S. Arctic, including:

AESC was established by Executive Order in 2015 to shape national priorities and set strategic direction in the Arctic.

NSC Arctic Policy Coordinating Committee (PCC) is the current interagency forum for executive-level Arctic collaboration, according to agency officials.

CMTS is the main forum for collaboration regarding maritime infrastructure, according to agency officials.

These interagency groups vary in the extent to which they meet the key features noted above. Specifically:

Sustained leadership: Both the NSC, which, as mentioned previously, has traditionally played a role in Arctic collaboration dating back to 1971, and the AESC, which was chaired by the OSTP within the White House, would have top-level commitment given their proximity to the White House. However, according to agency officials, the AESC has not met in the past 2 years and is now dormant. OSTP staff told us that they are not aware of any current AESC activities. Meanwhile, although CMTS has been active in the area of U.S. Arctic maritime infrastructure for the past decade, CMTS officials said CMTS does not sit within the Executive Office of the President. As a result, CMTS officials note their role is to facilitate an interagency partnership, share information among member agencies, and provide information to decision-makers to support agencies’ efforts. CMTS’s statutory authority addresses, among other things, the coordination of federal policies that impact the maritime transportation system, including in the U.S. Arctic, rather than the development and execution of government-wide policies.
Inclusion of relevant stakeholders: The AESC, when it was active, included a wide range of over 20 federal departments and entities, including those less associated with maritime infrastructure such as the Department of Agriculture. For the NSC Arctic PCC, we were unable to verify the participants, so it is unclear whether relevant stakeholders are involved. However, agency officials noted the PCC’s focus is on national security rather than on maritime infrastructure, which may indicate not all maritime infrastructure stakeholders are included. Lastly, CMTS includes stakeholders involved directly with maritime transportation. For example, officials from the U.S. Army Corps of Engineers noted that they actively participate in CMTS, including its Arctic Integrated Action Team, but do not participate in other interagency groups, where they are often represented by higher-level Department of Defense officials. The Executive Office of the President has not designated an interagency group as responsible for developing or executing the administration’s strategy for maritime infrastructure in the U.S. Arctic. We have previously reported that interagency efforts can benefit from the leadership of a single entity to provide assurance that federal programs are well coordinated and based upon a coherent strategy. Agency officials said priorities in the U.S. Arctic have shifted to national security under the current administration, which may have led executive-level interagency collaboration efforts to move from AESC to the NSC Arctic PCC. However, it is unclear whether the NSC Arctic PCC includes the relevant stakeholders. Moreover, the shift in Arctic priorities to security issues does not diminish the importance of Arctic maritime infrastructure. As indicated in the 2013 National Strategy, maritime shipping and infrastructure are a key component of overarching goals in the region like advancing U.S.
security interests, pursuing responsible stewardship, and strengthening international cooperation. Without an interagency mechanism with sustained leadership and inclusion of relevant stakeholders to direct federal efforts related to U.S. Arctic maritime infrastructure, agencies may miss opportunities to leverage resources toward achieving a broader outcome. For example, as noted earlier, stakeholders we spoke to identified communications as a key infrastructure gap. According to U.S. Coast Guard officials, communications is a whole-of-government effort, requiring partnerships across agencies including the Department of Defense. Without an interagency collaboration mechanism designated to lead these efforts, it is unclear who has responsibility for such whole-of-government efforts to address maritime infrastructure in the U.S. Arctic.

Conclusions

The U.S. Arctic, including the Bering Strait, is an essential part of the increasingly navigable Arctic and a key convergence point for maritime transportation routes connecting the Pacific and Atlantic oceans. The risks inherent to Arctic shipping—including vast distances, extreme ice conditions, and unpredictable weather—are exacerbated by gaps in maritime infrastructure in the U.S. Arctic. While agencies have taken some actions to address these gaps, without a government-wide assessment of risks posed by maritime infrastructure gaps in the U.S. Arctic and a current strategy to address those risks, agencies lack assurance that their actions are effectively targeting priority areas. Without a strategy that includes goals, objectives, and performance measures, agencies cannot demonstrate the results of their efforts, and decision makers cannot gauge the extent of progress in addressing maritime infrastructure gaps.
In addition, without a designated interagency group to provide sustained leadership, agencies lack the ability to leverage resources to address maritime infrastructure and achieve government-wide priorities in the complex and changing U.S. Arctic.

Recommendations for Executive Action

The U.S. Committee on the Marine Transportation System should:

Complete a government-wide assessment of the economic, environmental, and safety risks posed by gaps in maritime infrastructure in the U.S. Arctic to inform investment priorities and decisions. (Recommendation 1)

The appropriate entities within the Executive Office of the President, including the Office of Science and Technology Policy, should:

Develop and publish a strategy for addressing U.S. Arctic maritime infrastructure that identifies goals and objectives, performance measures to monitor agencies’ progress over time, and the appropriate responses to address risks. (Recommendation 2)

Designate the interagency group responsible for leading and coordinating federal efforts to address maritime infrastructure in the U.S. Arctic that includes all relevant stakeholders. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to the Executive Office of the President’s Office of Science and Technology Policy (OSTP); the U.S. Committee on the Marine Transportation System (CMTS); and the Departments of Homeland Security, Commerce, Defense, Interior, State, and Transportation for comment. With the exception of the Department of Defense, all of these entities provided technical comments, which we incorporated as appropriate. Only CMTS provided written comments, which were transmitted via letter from the Department of Transportation, and are reprinted in appendix II. In its technical comments, the Department of Homeland Security’s U.S. Coast Guard provided Arctic traffic data for the 2019 shipping season.
As stated in our report, we originally selected the decade of 2009 through 2018 for our analysis when designing our review, as 2018 was the most recent year for which data were available at that time. In response to the U.S. Coast Guard’s comments submitted in April 2020, we revised our report to include data from the 2019 shipping season on (1) the number of vessels in the U.S. Coast Guard District 17 Arctic area of interest and (2) the number of transits in the Bering Strait, to ensure the report contained the most current information available. In its written comments, CMTS partially concurred with our recommendation that CMTS complete a government-wide assessment of the economic, environmental, and safety risks posed by gaps in maritime infrastructure in the U.S. Arctic to inform investment priorities and decisions. However, CMTS also noted several areas of disagreement with our conclusions, which we address here: First, CMTS noted that GAO’s draft report contained dated information and that the 2019 data contradicts GAO statements that suggest a decrease in vessel activity since 2015. CMTS noted that the 2019 data shows that vessel traffic has increased steadily over the last decade, and that although growth slowed between 2015 and 2017, “it did not stall.” However, we dispute this characterization. The 2019 shipping data included in this report emphasizes the finding from our draft report that maritime shipping activity generally increased over the time period of our review. However, this trend does not reflect a steady increase throughout the entire timeframe or “slowed growth” between 2015 and 2017 as CMTS indicates. Specifically, the data show year-to-year decreases in the number of vessels from 2015 to 2017 in the U.S. Coast Guard District 17 Arctic area of interest (see fig. 3) and in the number of transits in the Bering Strait from 2015 to 2018 (see fig. 4). 
CMTS’s own 2019 report indicated that the number of vessels had decreased from a peak in 2015, after Shell’s decision in 2015 to not pursue further exploratory drilling efforts. As such, we stand by our description of the overall growth in maritime activity in the U.S. Arctic since 2009, as well as the pattern of declining traffic within that period from 2015 through 2018. Second, CMTS also noted in its written comments that our use of data from 2009 to 2018 in the draft report does not lead to the conclusions and recommendation to assess infrastructure risks and prioritize future investment in the Arctic. We dispute this characterization. Our decision to include the 2019 data further emphasizes the finding in our draft report of a general increase in maritime activity in the U.S. Arctic and the need for an assessment of risks posed by gaps in maritime infrastructure. As we note in the report, CMTS has reported that the U.S. Arctic does not have the typical elements of a maritime infrastructure system such as a deep-draft port or robust communications infrastructure. These infrastructure gaps exacerbate the inherent challenges of maritime activity in the Arctic—vast distances, dangerous weather, and extreme ice conditions—that can pose safety risks to mariners and environmental risks to the fragile Arctic ecosystem. While agencies have taken some steps to address infrastructure gaps, without a risk assessment, agencies lack assurance that their investments are addressing the highest-priority risks. As such, we stand by our conclusion and recommendation that increasing maritime traffic poses risks, and a government-wide assessment of those risks would inform federal decisions on investments to appropriately address risks. Third, CMTS disagreed with GAO’s statement that a government-wide risk assessment could better enable agencies to evaluate potential U.S. Arctic infrastructure expenditures.
Although CMTS agreed that understanding infrastructure gaps is critical to improving the Arctic marine transportation system, CMTS contends that such risk assessments are the responsibility of each agency as directed by the Office of Management and Budget (OMB). As we note in the report, many agencies have a role in U.S. Arctic maritime shipping and infrastructure and although agencies and others have conducted many reviews of maritime infrastructure in the U.S. Arctic (see appendix I), agency-by-agency assessments do not reflect or analyze risks from a government-wide perspective. CMTS itself has previously noted the importance of evaluating risks on a government-wide basis. Specifically, in 2013 CMTS noted that increased activity in the U.S. Arctic presents additional risks for the people, vessels, and the environment and that managing that risk requires an in-depth understanding of the issues and trade-offs associated with key decisions, such as how to prioritize investments. As our report states, CMTS stated that developing a tool to assess the unique risk elements in the Arctic was a challenge for the nation, and it proposed a model for determining risk that considered the likelihood of adverse events actually occurring, vulnerability to damage, and potential consequences. This model is similar to the 2016 OMB circular, which called for agencies to, among other things, assess the causes, sources, probability of the risk occurring, and potential outcomes. As stated in our report, given its previous work in the U.S. Arctic and its coordinating role with its member agencies, CMTS is well suited to conduct a government-wide assessment of the risks posed by gaps in maritime infrastructure in the U.S. Arctic. As such, we stand by our recommendation. Based on these items, CMTS did not agree to perform and lead a government-wide risk assessment.
Instead, as an “alternate action” to address GAO’s recommendation, CMTS noted it plans to update a table of information published in its past reports on infrastructure gaps in the U.S. Arctic and provide an inventory of existing risk assessments and their criteria, which agencies can then use to improve their own assessments to inform decisions. In our view, the proposed action described by CMTS would not provide the same level of information proposed by CMTS itself in 2013 and by OMB’s 2016 circular, which calls for, among other things, assessing the causes, sources, probability of the risk occurring, and potential outcomes. As stated in our report, CMTS is uniquely positioned as a federal interagency coordinating committee focused on the maritime transportation system to draw on the expertise of its member agencies, such as U.S. Coast Guard and the National Oceanic and Atmospheric Administration, to complete this risk assessment. Moreover, CMTS is required by statute to coordinate the establishment of domestic transportation policies in the Arctic to ensure safe and secure maritime shipping and make recommendations with regard to federal policies that impact the marine transportation system. Furthermore, according to CMTS officials, there is nothing in CMTS’s authority that would prevent it from doing a risk assessment. As such, we stand by our recommendation as written and do not believe CMTS’s alternate action is sufficient to address the recommendation. In comments provided via email, OSTP neither agreed nor disagreed with the report’s recommendations. OSTP acknowledged the Arctic is of critical national importance and noted interagency coordination can be implemented through the entities of the National Science and Technology Council, which is located within OSTP. OSTP noted the need for, and role of, additional federal coordination, such as the Arctic Executive Steering Committee, is under consideration by OSTP.
We continue to believe that the appropriate entities within the Executive Office of the President, including OSTP, should designate the interagency group responsible for leading and coordinating federal efforts to address maritime infrastructure in the U.S. Arctic that includes all relevant stakeholders. As we note in our report, without an interagency collaboration mechanism designated to lead these efforts, it is unclear who has responsibility for whole-of-government efforts to address U.S. Arctic maritime infrastructure. In addition, we stand by our other recommendation to OSTP and other entities within the Executive Office of the President to develop and publish a strategy for addressing U.S. Arctic maritime infrastructure that identifies goals and objectives, performance measures to monitor agencies’ progress over time, and the appropriate responses to address risks. As we note in the report, without such a strategy, agencies lack assurance that their actions are effectively targeting priority areas and decision makers cannot gauge the extent of progress in addressing maritime infrastructure gaps. We are sending copies of this report to the appropriate congressional committees; the Executive Office of the President; the U.S. Committee on the Marine Transportation System; the Secretaries of Homeland Security, Commerce, Defense, Interior, State, and Transportation; and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or vonaha@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Reports Relevant to Maritime Infrastructure in the U.S. Arctic Published Since 2013

National Ocean Council. National Ocean Policy Implementation Plan.
Washington, D.C.: April 2013.
White House. National Strategy for the Arctic Region. Washington, D.C.: May 10, 2013.
U.S. Coast Guard. United States Coast Guard Arctic Strategy. Washington, D.C.: May 2013.
U.S. Committee on the Marine Transportation System. U.S. Arctic Marine Transportation System: Overview and Priorities for Action. Washington, D.C.: 2013.
The White House. Implementation Plan for the National Strategy for the Arctic Region. Washington, D.C.: January 30, 2014.
U.S. Navy. The United States Navy Arctic Roadmap for 2014 to 2030. Washington, D.C.: February 2014.
GAO. Maritime Infrastructure: Key Issues Related to Commercial Activity in the U.S. Arctic over the Next Decade. GAO-14-299. Washington, D.C.: March 19, 2014.
U.S. Department of Commerce, National Oceanic and Atmospheric Administration. NOAA’s Arctic Action Plan: Supporting the National Strategy for the Arctic Region. Silver Spring, MD: April 2014.
GAO. Arctic Issues: Better Direction and Management of Voluntary Recommendations Could Enhance U.S. Arctic Council Participation. GAO-14-435. Washington, D.C.: May 16, 2014.
Brigham, L. W. Alaska and the New Maritime Arctic Executive Summary: Executive Summary of a Project Report to the State of Alaska Department of Commerce, Community and Economic Development. Fairbanks, AK: February 1, 2015.
The International Council on Clean Transportation. A 10-Year Projection of Maritime Activity in the U.S. Arctic Region. Washington, D.C.: January 2015. This report was contracted and coordinated under the U.S. Committee on the Marine Transportation System.
Executive Order No. 13689. Enhancing Coordination of National Efforts in the Arctic. 80 Fed. Reg. 4191. January 26, 2015.
The White House, Arctic Executive Steering Committee. National Strategy for the Arctic Region Implementation Report. Washington, D.C.: January 30, 2015.
Alaska Arctic Policy Commission. Final Report of the Alaska Arctic Policy Commission. Anchorage and Bethel, AK: January 30, 2015.
U.S. Army Corps of Engineers, Alaska District. Alaska Deep-Draft Arctic Port System Study: Draft Integrated Feasibility Report, Draft Environmental Assessment (EA), and Draft Finding of No Significant Impact (FONSI). Nome, AK: February 2015.
GAO. Arctic Planning: DOD Expects to Play a Supporting Role to Other Federal Agencies and Has Efforts Under Way to Address Capability Needs and Update Plans. GAO-15-566. Washington, D.C.: June 19, 2015.
Arctic Council. Status on Implementation of the AMSA 2009 Report Recommendations. Tromsø, Norway: April 2015.
World Economic Forum, Global Agenda Council on the Arctic. Arctic Investment Protocol: Guidelines for Responsible Investment in the Arctic. Geneva, Switzerland: November 2015.
U.S. Coast Guard. Arctic Strategy Implementation Plan. Washington, D.C.: December 2015.
Arctic Economic Council, Telecommunications Infrastructure Working Group. Arctic Broadband: Recommendations for an Interconnected Arctic. Tromsø, Norway: Winter 2016.
Copenhagen Business School and Arctic Institute. Arctic Shipping: Commercial Opportunities and Challenges. CBS Maritime, January 2016.
The White House, Arctic Executive Steering Committee. 2015 Year in Review: Progress Report on the Implementation of the National Strategy for the Arctic Region. Washington, D.C.: March 2016.
The White House, Arctic Executive Steering Committee. 2015 Year in Review: Progress Report on the Implementation of the National Strategy for the Arctic Region; Appendix A, Implementation Framework for the National Strategy for the Arctic Region. Washington, D.C.: March 2016.
U.S. Committee on the Marine Transportation System, Arctic Marine Transportation Integrated Action Team. A Ten-Year Prioritization of Infrastructure Needs in the U.S. Arctic. Washington, D.C.: April 15, 2016.
GAO. Coast Guard: Arctic Strategy is Underway but Agency Could Better Assess How Its Actions Mitigate Known Arctic Capability Gaps. GAO-16-453. Washington, D.C.: June 15, 2016.
Department of Defense. Report to Congress on Strategy to Protect United States National Security Interests in the Arctic Region. Washington, D.C.: December 2016.
RAND Corporation. Maintaining Arctic Cooperation with Russia: Planning for Regional Change in the Far North. RR-1731-RC. Santa Monica, CA: 2017.
U.S. Committee on the Marine Transportation System. Recommendations and Criteria for Using Federal Public-Private Partnerships to Support Critical U.S. Arctic Maritime Infrastructure. Washington, D.C.: January 2017.
Arctic Council, Emergency Prevention, Preparedness and Response. Circumpolar Oil Spill Response Viability Analysis: Technical Report. March 7, 2017.
Council on Foreign Relations. Arctic Imperatives: Reinforcing U.S. Strategy of America’s Fourth Coast. New York, NY: March 2017.
Ocean Conservancy. Navigating the North: An Assessment of the Environmental Risks of Arctic Vessel Traffic. Anchorage, AK: June 28, 2017.
Center for Strategic and International Studies. Maritime Futures: The Arctic and the Bering Strait Region. Washington, D.C.: November 2017.
RAND Corporation. Identifying Potential Gaps in US Coast Guard Arctic Capabilities. RR-2310-DHS. Santa Monica, CA: 2018.
U.S. Committee on the Marine Transportation System. Revisiting Near-Term Recommendations to Prioritize Infrastructure Needs in the U.S. Arctic. Washington, D.C.: 2018.
Department of Defense. Report to Congress on Assessment of Requirement for a Strategic Arctic Port. Washington, D.C.: January 2018.
Department of Homeland Security, Office of the Chief Financial Officer. Arctic Search and Rescue: Fiscal Year 2017 Report to Congress. Washington, D.C.: March 13, 2018.
International Union of Marine Insurance. IUMI Position Paper: Arctic Sailings. Hamburg, Germany: August 2018.
U.S. Coast Guard Acquisition Directorate, Research & Development Center. Alaska AIS Transmit Prototype Test, Evaluation, and Transition Summary Report for the Near Shore Arctic Navigational Safety Information System (ANSIS). New London, CT: October 2018.
GAO. Arctic Planning: Navy Report to Congress Aligns with Current Assessments of Arctic Threat Levels and Capabilities Required to Execute DOD’s Strategy. GAO-19-42. Washington, D.C.: November 8, 2018.
Alaska Federation of Natives. Indigenous Engagement with Their Countries’ Military and Civilian Services/Government on Maritime Arctic Issues. Anchorage, AK: December 2018.
RAND Europe. The Future of Arctic Cooperation in a Changing Strategic Environment. PE-268-RC. Cambridge, United Kingdom: 2018.
U.S. Navy. Strategic Outlook for the Arctic. January 2019.
U.S. Coast Guard. Arctic Strategic Outlook. April 2019.
Department of Defense, Office of the Secretary of Defense. Annual Report to Congress on Military and Security Developments Involving the People’s Republic of China 2019. Washington, D.C.: May 2, 2019.
Pompeo, Michael R., Secretary of State, U.S. Department of State. Looking North: Sharpening America’s Arctic Focus. Remarks. Rovaniemi, Finland, May 6, 2019.
Pompeo, Michael R., Secretary of State, U.S. Department of State. Remarks. Arctic Council Ministerial Meeting. Rovaniemi, Finland, May 7, 2019.
Department of Defense, Office of the Under Secretary of Defense for Policy. Report to Congress on Department of Defense Arctic Strategy. Washington, D.C.: June 2019.
U.S. Committee on the Marine Transportation System. A Ten-Year Projection of Maritime Activity in the U.S. Arctic Region, 2020-2030. Washington, D.C.: September 2019.
Congressional Research Service. Changes in the Arctic: Background and Issues for Congress. Washington, D.C.: November 27, 2019.
U.S. Army Corps of Engineers, Alaska District. Port of Nome Modification Feasibility Study: Draft Integrated Feasibility Report and Supplemental Environmental Assessment. Nome, AK: December 2019.
Congressional Research Service. Changes in the Arctic: Background and Issues for Congress. Washington, D.C.: January 23, 2020.

Appendix II: Comments from the U.S.
Committee on the Marine Transportation System

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Susan Fleming, Director; Matt Barranca, Assistant Director; Emily Larson, Analyst in Charge; Chuck Bausell; Geoff Hamilton; Georgeann Higgins; Ned Malone; John Mingus; Jan Montgomery; Kaleb Mount; Fatima Sharif; Curtia Taylor; Sarah Veale; and Laurel Voloder made key contributions to this report.
Why GAO Did This Study

Arctic sea ice has diminished, lengthening the navigation season and increasing opportunities for maritime shipping. However, the U.S. Arctic lacks maritime infrastructure—such as a deep-draft port and comprehensive nautical charting—to support increased traffic. The lack of infrastructure exacerbates risks inherent to shipping in the Arctic such as vast distances and dangerous weather. This report examines (1) how U.S. Arctic shipping trends have changed since 2009 and factors that have shaped shipping in the region, and (2) the extent to which U.S. agencies' efforts to address Arctic maritime infrastructure gaps have aligned with leading management practices. GAO collected U.S. Coast Guard traffic data from 2009 through 2019 and interviewed 20 stakeholders selected to represent a range of views. GAO also analyzed Arctic strategies, interviewed selected agencies involved with maritime infrastructure, and compared efforts to leading management practices.

What GAO Found

Maritime shipping activity, as indicated by the number of vessels in the U.S. Arctic, generally increased from 2009 through 2019. Domestic maritime activity declined after the discontinuation of offshore oil and gas exploration activities in Alaska's Chukchi Sea in 2015. However, since 2015, international activities related to natural gas development, particularly in the Russian Arctic, have increased, according to stakeholders. Factors affecting decisions of ship operators about whether to operate in the U.S. Arctic include increased operating costs of Arctic-capable ships, environmental changes that have caused more volatile weather and ice conditions, and concerns over environmental impacts. Agencies have taken some steps to address Arctic maritime infrastructure gaps identified by federal agencies, such as a lack of nautical charting, but federal efforts lack a current strategy and interagency leadership. Examples of agency actions include the U.S.
Coast Guard developing recommended shipping routes and the National Oceanic and Atmospheric Administration continuing to chart Arctic waters. To guide federal efforts, the White House developed a National Strategy for the Arctic Region in 2013 and established an interagency Arctic Executive Steering Committee (AESC) in 2015. However, agency officials and stakeholders noted the strategy is now outdated due to changing conditions in the Arctic. As a result, federal efforts lack a current government-wide strategy that aligns with key management practices such as identifying goals and objectives and establishing performance measures. Moreover, U.S. Arctic interagency groups do not reflect leading collaboration practices, such as sustained leadership and inclusion of all relevant stakeholders, and the White House has not designated which entity is to lead U.S. Arctic maritime infrastructure efforts. For example, the AESC is now dormant according to agency officials and staff at the White House Office of Science and Technology Policy (OSTP), which chairs the AESC. Without a current strategy and a designated interagency entity with these collaboration practices in place, agencies may miss opportunities to leverage resources and target infrastructure improvements in areas that would best mitigate risks.

What GAO Recommends

GAO is making three recommendations, including that OSTP and other appropriate entities within the Executive Office of the President develop and publish a strategy to address infrastructure gaps and designate the interagency mechanism responsible for leading federal efforts. OSTP neither agreed nor disagreed but noted it is considering the need for and role of additional federal coordination. GAO stands by its recommendations.
Background

Explosives and Their Production

Explosives include high explosives, propellants, and pyrotechnics. Propellants and pyrotechnics are sometimes referred to as low explosives. All three types of explosives serve essential functions in nuclear weapons. Figure 1 illustrates key explosive-containing components found in a generic nuclear weapon as well as the types of explosives these components contain. High explosives are the most common explosive by volume in nuclear weapons. There are two classes of high explosives used in nuclear weapons: insensitive high explosives (IHE) and conventional high explosives (CHE). An IHE is less susceptible to accidental detonation than a CHE and less violent upon accidental ignition; it is therefore safer to handle. NNSA places a premium on safety throughout all phases of explosives activities, including research and development, testing, production, and storage, because handling any explosive material is inherently dangerous, according to NNSA officials and contractor representatives. Producing a high explosive material generally follows four steps, as shown in figure 2: (1) synthesis—producing raw explosive molecules; (2) formulation—mixing raw explosive molecules with binding ingredients to form an explosive mixture; (3) pressing—compacting formulated explosives into shapes of the required density; and (4) machining—cutting away excess material to achieve the final shape. Analytical, mechanical, safety, and performance testing are to occur after each step. During synthesis, technicians use chemicals to produce fine, powder-like raw explosives. During formulation, technicians combine the explosive powder with plastic binder ingredients to produce a mixture that exhibits the physical and performance properties desired. Formulated explosives used by NNSA often appear like small, irregularly shaped pebbles, known as prills, as shown in figure 3.
During pressing, the third step, technicians compact formulated explosives into a solid form. During machining, the fourth step, technicians use computer-controlled equipment to cut and shape the explosive into its final shape. After the explosive has been machined, technicians join explosive and non-explosive parts into functional components during subassembly. Small-scale synthesis and formulation and production-scale pressing, machining, and subassembly activities are carried out at multiple NNSA sites. After each step of the production process, NNSA’s sites conduct tests to ensure that explosives meet NNSA’s safety and performance requirements. During safety testing, scientists conduct a variety of tests to ensure that explosives meet DOE’s safety requirements. During performance testing, scientists conduct other tests that require specialized equipment. For example, scientists use scanning equipment, like heat flow sensors, for thermal testing on formulated explosive material. Scientists also conduct tests using X-ray imaging equipment to evaluate weapon characteristics by detonating a “mockup.” The mockup uses a high explosive main charge—the explosive material that surrounds the nuclear core, known as the pit—and a nonfissile surrogate material that has similar physical properties to plutonium. The mock implosion is called a hydrodynamic test because the surrogate material and other components become hot enough to flow like fluid.

Explosive Molecules and Formulations Used in Nuclear Weapons

High explosive molecules used in U.S. nuclear weapons include but are not limited to high melting explosive (HMX), pentaerythritol tetranitrate (PETN), and triaminotrinitrobenzene (TATB). First fielded in conventional weapons in World War II, HMX and PETN were later introduced into several components in the U.S. nuclear weapons stockpile and are still used in them today.
DOE first introduced TATB into the nuclear stockpile in 1979, and it is still the only molecule that DOE considers to be an IHE (see sidebar). In all U.S. nuclear weapons, the main charge is made of formulations of HMX or TATB. DOD also uses HMX and TATB in certain conventional weapons.

TATB: NNSA’s Key Insensitive High Explosive

Triaminotrinitrobenzene (TATB) is a key insensitive high explosive that is currently used in National Nuclear Security Administration (NNSA) and Department of Defense (DOD) military applications, including nuclear and conventional weapons. Scientists first synthesized TATB in 1888 but did not initially recognize it as an explosive. In 1966, Los Alamos National Laboratory developed the industrial method for synthesizing TATB. From the late 1970s to the late 1980s, two domestic manufacturers supplied TATB to DOD and NNSA. However, when the Cold War ended and a U.S. nuclear test moratorium began, the demand for TATB declined, and both manufacturers ceased production by 1993. DOD then acquired TATB from a U.K.-based firm until its plant closed in 2005. Beginning in 2007, DOD and NNSA collaborated to re-establish a manufacturing capability for TATB in the United States. Specifically, DOD’s Holston Army Ammunition Plant (Holston), which is located in Kingsport, Tennessee, began producing TATB in 2014. DOD has qualified the Holston-produced TATB for use in conventional weapons, but NNSA has not yet qualified it for use in nuclear weapons because the material properties of the formulated material are not yet up to NNSA standards, according to NNSA documentation.

Scientists combine an explosive molecule with an available binding ingredient to create a plastic bonded explosive. Each explosive formulation is designed for a specific application. The performance requirements for explosive formulations in nuclear weapons are more stringent than those for DOD’s conventional weapons, to ensure both performance and safety.
Explosives scientists commonly use the term “recipe” to describe the ingredients and many variables in the process—such as the temperature, mixing speed, or container size—used to make explosive molecules and formulations that meet specific performance requirements.

Ongoing and Planned LEPs and NNSA’s Other Modernization Plans

In December 2018, NNSA completed the last production unit for the W76-1 LEP, marking the completion of warhead production for the first LEP in which NNSA undertook full-scale design activities for weapon systems since 1982. Five other LEPs and stockpile modernization efforts were ongoing as of January 2019, as shown in table 1. As we concluded in an April 2017 report, this is a particularly challenging time for NNSA, as the agency plans to simultaneously execute LEPs and modernization efforts along with major construction projects, such as efforts to modernize NNSA’s uranium and plutonium manufacturing capabilities.

NNSA’s Sites, Infrastructure, and Workforce Levels for Explosives Activities

NNSA’s nuclear security enterprise consists of eight government-owned sites managed and operated by seven contractors. Five of these sites conduct explosives activities: Livermore, Los Alamos, Sandia, Pantex, and Nevada. In addition to these sites, NNSA relies on several third-party suppliers of explosive materials and related equipment. The largest of these is Holston, which is a government-owned, contractor-operated facility that primarily produces explosives for DOD. Holston is NNSA’s sole supplier of explosives used in main charges. The infrastructure that supports NNSA’s explosives activities consists of thousands of real property assets, which are to be tracked in FIMS. The database is managed for NNSA missions by its Office of Safety, Infrastructure and Operations. According to NNSA officials and DOE documents, FIMS helps managers understand the current state of NNSA infrastructure and inform infrastructure modernization funding decisions.
We have previously reported on concerns about the accuracy of the FIMS database with respect to certain data fields that were not assessed as part of this review. DOE has taken sufficient steps to address recommendations we have previously made about FIMS. Workforce levels for explosives activities have generally increased in recent years, which contractor representatives attribute to the increase in workload because of LEP and modernization efforts. Table 2 shows NNSA contractor representatives’ estimates for actual full-time equivalents (FTE) and percentages of FTEs engaged in explosives activities at each of the five sites over the last 5 fiscal years.

Selected Leading Practices in Federal Strategic Planning

The Government Performance and Results Act of 1993, as amended (GPRA), requires, among other things, that federal agencies develop strategic plans. The Office of Management and Budget (OMB) provides guidance to federal executive branch agencies on how to prepare their agency-wide strategic plans in accordance with GPRA requirements, as updated and expanded by the GPRA Modernization Act of 2010. We have reported that these requirements also can serve as leading practices for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. In addition, we have reported in the past on federal agencies’ strategic planning efforts and have identified additional useful practices to enhance agencies’ strategic plans.
The leading practices in federal strategic planning that we selected are: (1) involving stakeholders, such as federal agencies, state governments, or others, in defining the mission and desired outcomes, which helps ensure that their expectations and interests are met and that resources and efforts are targeted at the program’s highest priorities; (2) assessing external and internal forces, which helps managers anticipate future challenges and make adjustments before potential problems become crises; and (3) covering at least a 4-year period while making adjustments as needed to reflect the operating environment. Further, our past work has shown that effective strategic plans should include several specific elements. These elements include: (1) a comprehensive mission statement that explains why a program exists and what it does; (2) long-term goals and objectives that specify how an agency will carry out its mission and explain what results are expected from the program; (3) strategies to achieve the goals and objectives that are specific enough to allow an agency to assess whether the strategies will help achieve those goals; (4) a description of how performance measures will be used to assess progress toward long-term goals; and (5) the identification of external factors that could significantly affect achievement of the strategic goals.

NNSA’s Sites Conduct a Range of Interdependent Explosives Design and Production Activities, and NNSA Has Adopted a Centralized Approach to Managing Them

NNSA’s five sites involved in explosives conduct interdependent activities to design and produce explosives and about 100 different nuclear weapon components that contain explosive materials.
Each of the sites assumes primary responsibility for certain explosives activities—for example, Livermore conducts design, research, and development of new IHE main charge formulations; Pantex produces all main charges; Los Alamos conducts design and production of main charge detonators as well as explosives research and development; Sandia conducts design and production of nonnuclear explosive components; and Nevada conducts large experimental explosive shots to support design activities. However, most of these activities require the participation of multiple sites. The following examples illustrate some of the collaborative, interdependent activities that NNSA’s sites and their suppliers undertake to design and produce explosive components found in nuclear weapons.

Main charge for the W80-4 LEP. Livermore manages design activities for the W80-4 LEP, including for its main charge. The main charge used in the W80-4 warhead will consist of newly synthesized TATB, formulated with a new binding ingredient, according to contractor representatives. As NNSA officials and contractor representatives explained during our site visits to Livermore and Pantex, Livermore scientists redeveloped the specific process for TATB synthesis and formulation that is being used in the W80-4 LEP, first in small test batches and then in larger amounts. Next, Livermore sent its specifications for synthesis and formulation to Holston, which has produced successively larger batches. As the design and cost study phase of the W80-4 LEP continues, Livermore and Pantex continue to receive and test these batches of formulated explosive and work with Holston to ensure that production lots meet NNSA specifications. In coordination with Livermore, Pantex will press and machine the finished main charges for the W80-4 when the LEP reaches the production phase.
Pantex will receive formulated TATB from Holston and conduct its own tests to ensure the quality of the initial production lots and pressing, machining, and subassembly processes.

Detonators. The design and production of main charge detonators involves several NNSA sites and their suppliers. According to contractor representatives, Livermore and Los Alamos share the responsibility for designing the main charge detonators, and Los Alamos will produce all the detonators. As part of production, Los Alamos reprocesses the PETN used in detonators from a stockpile of DOD-grade material purchased 30 years ago. Other detonator parts come from third-party suppliers and from NNSA’s Kansas City National Security Campus, another NNSA site that does not have a role in designing or producing explosives, according to contractor representatives. Los Alamos produces and tests completed detonators and then sends them to Pantex for weapon assembly, according to contractor representatives.

Spin rocket motors. Sandia plays the primary role in designing spin rocket motors. Spin rocket motors use pyrotechnics and propellants and are a key component in the B61 and B83 bombs. Contractor representatives at Sandia said that they supply the explosives to third-party suppliers, who produce the motors. The completed spin rocket motors are sent to Sandia for inspection and testing, and after Sandia approves the components, they are shipped to Pantex for weapon assembly, according to contractor representatives.

Component manufacturing research. In addition to designing and producing components for LEPs and modernization efforts, NNSA sites also collaborate on other explosives research and development programs, such as on component manufacturing processes. For example, Los Alamos, Livermore, Sandia, and Pantex are collaborating on additive manufacturing processes for explosives.
Additive manufacturing differs from traditional manufacturing in that it builds components by depositing material rather than by cutting material away during machining. This research effort seeks to introduce additive manufacturing into the explosives production process, producing explosive parts with highly complex geometries while meeting NNSA’s safety and performance requirements, according to a contractor representative.

In May 2018, according to NNSA documentation, NNSA began implementing a new enterprise-wide approach to improve the management and coordination of explosives activities across its sites. In the past, each program that used explosives—such as an LEP or a research and development program—developed or procured them independently of other programs, without formal coordination to ensure each program’s awareness of other programs’ requirements or time frames. Under the new enterprise-wide approach, NNSA has taken several steps to centralize management at an enterprise level and to coordinate explosives activities across its sites. Specifically:

In May 2018, NNSA established the Energetic Materials Enterprise Manager (enterprise manager) position to help coordinate NNSA’s explosives activities. The agency issued a May 2018 memorandum formally establishing the position, signed by the Acting Deputy Administrator for Defense Programs. The memorandum specified that the enterprise manager should encourage collaboration among the sites and programs that conduct explosives activities.

In September 2018, the enterprise manager established NNSA’s Energetics Coordinating Committee (coordinating committee) to identify coordination challenges across the enterprise and emerging needs for critical explosive materials, among other purposes. The coordinating committee is composed of NNSA officials and contractor representatives from NNSA’s sites, is chaired by the enterprise manager, and is expected to meet at least once a year.
According to NNSA documents, the coordinating committee met twice in 2018 and identified a number of future actions requiring input from the sites, such as defining future needs associated with the production of main charge explosive materials.

In December 2018, NNSA issued the strategic plan for energetic materials. This strategic plan states that it will help NNSA organize its efforts to meet weapon delivery schedules for the overall energetics community. Prior to the strategic plan’s final issuance, the enterprise manager provided a draft to coordinating committee members to solicit their comments.

However, more recent action taken by NNSA indicates that the enterprise approach to managing high explosives is continuing to evolve. First, according to NNSA officials, in 2019 NNSA is planning to reorganize the Office of Defense Programs, which is responsible for all stockpile activities. This reorganization could affect the approach to managing high explosives activities. Specifically, officials said part of this reorganization is the creation of a new organization for production activities, which is expected to divide production activities into several groups oriented around different weapons components. It is currently unclear under which production group explosives activities will fall because there are production activities associated with explosives for both nuclear and nonnuclear components, according to NNSA officials. Second, in December 2018, NNSA officials indicated that they are considering elevating high explosives to a “strategic material” and managing it more similarly to NNSA’s existing approach for managing other strategic materials, such as plutonium. NNSA’s strategic materials managers are overseen by a senior NNSA official and appointed to manage each material as a program, with a budget and dedicated staff, according to NNSA documentation.
NNSA does not consider the high explosives enterprise manager to be managing a program; therefore, the enterprise manager does not have an explosives budget or dedicated staff, according to NNSA officials. NNSA officials said they anticipate issuing an analysis of alternatives study in spring 2019 that will contain a recommendation to the NNSA Administrator on how explosives activities should be managed going forward, which could reflect a shift toward managing high explosives as a strategic material.

NNSA Officials and Contractor Representatives Identified Management Challenges for Explosives-Related Activities and Have Taken Some Actions in Response, but Have Not Addressed Issues Affecting the Accuracy of Infrastructure Data

NNSA officials and contractor representatives have identified a number of challenges related to NNSA’s supply of explosive materials, infrastructure, and staff recruitment and training. First, NNSA’s supply of certain highly specialized explosive materials is dwindling, and NNSA officials and contractor representatives stated that it is challenging to reproduce or procure these materials. Second, officials and contractor representatives identified infrastructure that is aging and deteriorating, inaccurate information on that infrastructure, and storage areas filled to near capacity as challenges. Finally, according to NNSA contractor representatives, there are difficulties in recruiting and training qualified staff. NNSA has taken some actions to address these challenges, such as starting to recreate “recipes” for specialized materials and modernize aging infrastructure, according to NNSA officials and contractor representatives. However, taking additional steps to improve the quality of information about its explosives infrastructure would give the agency more reasonable assurance that officials, contractor representatives, and the enterprise manager have the quality information necessary to support management decisions.
NNSA Officials and Contractor Representatives Identified Challenges in Ensuring an Adequate Supply of Specialized Explosive Materials and Have Taken Some Actions to Address Them

NNSA’s Challenges Producing Fogbank

The National Nuclear Security Administration (NNSA) has had challenges in the past producing materials other than explosives that are essential to the successful operation of nuclear weapons. In 2000, NNSA began a life extension program (LEP) to replace or modernize components for W76 warheads, which are delivered by submarine-launched ballistic missile systems. NNSA had to delay production of the refurbished warheads when it encountered problems in manufacturing an important material that NNSA refers to as “Fogbank.” In March 2009, we reported that NNSA had lost knowledge of how to manufacture the material because it had kept few records of the process when the material was made in the 1980s, and almost all staff with expertise on production had retired or left the agency, leaving the production process for Fogbank dormant for about 25 years. As we reported, NNSA’s loss of the technical knowledge and expertise to manufacture Fogbank resulted in a 1-year delay in the W76-1 LEP and an unexpected cost increase of nearly $70 million. According to NNSA officials, production challenges with Fogbank have since been resolved, and the last production unit for the W76-1 LEP was completed in December 2018.

NNSA’s supply of certain highly specialized explosive materials is dwindling. These materials have specific chemical and physical characteristics that fulfill precise performance requirements in nuclear weapons, such as detonation within nanoseconds, according to contractor representatives. One such material, titanium sub-hydride potassium perchlorate (THKP), is used in actuators to open valves, among other things, according to contractor representatives.
TATB, the IHE molecule used in main charges, is another such material, according to contractor representatives. In some cases, contractor representatives said that only one container or production lot of specialized material was ever produced that met NNSA’s specifications. The inventories of these materials have dwindled as ongoing LEPs, modernization efforts, and research and development activities draw on them. For example, only a small container of THKP remains. Additional inventory will be required to meet the needs of four of the five ongoing LEPs and modernization efforts, as well as for any future needs, according to contractor representatives from Sandia. Similarly, although Pantex has a stockpile of legacy TATB for the B61-12 LEP, contractor representatives said that new material will be needed to meet the requirements of planned and future LEPs and modernization efforts. NNSA officials stated that reproducing and procuring these highly specialized materials presents challenges for the agency. According to NNSA documents and officials, lost recipes and a fragile supplier base contribute to these challenges (see sidebar).

Lost Recipes

Some specialized materials were created decades ago, and the knowledge base to successfully produce them is now gone. According to NNSA documents, technical knowledge of material production processes can be lost when long intervals occur between production orders. In some cases, processes were not well documented or were infrequently practiced and proven. Thus, NNSA sites must spend considerable effort to recreate the recipes and techniques for producing these materials. Sandia representatives explained that sometimes a single company or even an individual created these materials and has since ceased production or is now deceased. For example, THKP was produced exclusively for Sandia by DOE’s Mound Site near Dayton, Ohio, which closed in 1994.
The THKP production process was designed by an individual at the Mound Site who is now deceased. In some cases, according to contractor representatives, a single container of explosives (or a single production lot) met anticipated future needs for quality and quantity when it was originally produced, so production was discontinued. Contractor representatives explained that replicating the material exactly is nearly impossible because of the large number of variables, such as the mixing speed and temperature, that must be controlled for, even if the ingredients are identical to those used many years ago, which is not often the case. To address the challenge of lost recipes, Los Alamos, Sandia, Livermore, and Pantex are all working to reproduce materials with performance and physical properties similar to those of legacy materials and prepare for their full-scale production. For example, Livermore scientists said they are conducting research to synthesize new TATB that is uniquely suited to NNSA’s needs. According to NNSA contractor representatives, the synthesis process will be refined until it can be replicated by Holston for the W80-4 LEP. Additionally, Los Alamos scientists are researching the formulation process with legacy TATB for the B61-12 main charges. The chemical formulation of binder material used in the past has slightly changed, affecting the structural strength of formulated TATB. Without the proper strength, this formulated explosive cannot be pressed effectively, according to contractor representatives. Sandia is also working to re-establish the THKP production process. NNSA is also working to address the challenge of lost recipes by developing a comprehensive master list for explosive materials. The list tracks information such as the suppliers involved and specific production challenges. 
According to NNSA and contractor officials, collecting and sharing such information across the sites related to explosive production processes, specifications, and performance will help prevent lost recipes in the future.

Fragile Supplier Base

Even if the sites can replicate lost recipes for explosive materials, NNSA’s supplier base for those materials is fragile. As previously reported and according to NNSA documentation, finding suppliers willing and able to provide required parts and materials can be difficult. Contractor representatives told us that this difficulty arises because of the small quantities of explosive parts and materials NNSA procures, the irregular nature of NNSA’s procurements, and the agency’s exacting performance requirements. For example, neutron generators contain explosive parts that Sandia orders irregularly, in batches numbering only in the hundreds. These parts have such exacting requirements for size and timing that they are hand-made under microscopes. Sandia contractor representatives explained that sometimes the laboratory’s part and material orders may represent only 1 to 3 percent of a company’s total production. To address this challenge, NNSA is working to purchase materials more consistently to ensure that companies can rely on NNSA as a steady customer and be comfortable working to meet NNSA’s exacting requirements. Contractor representatives said that ensuring consistency in production can help maintain the expertise needed to avoid having to reconstitute a specialized process, which can be costly. For example, the effort to restart TATB synthesis and formulation cost approximately $13 million and added 3-1/2 years to the original TATB production schedule, according to Los Alamos contractor representatives.
Contractor representatives at Pantex and Los Alamos said that they plan to support continuous production of synthesized TATB and formulated explosives at Holston in the future to avoid delays in restarting production (see sidebar).

A Fragile Supplier Base for Other Material

The National Nuclear Security Administration (NNSA) has identified challenges with a fragile supplier base for other specialized materials that are used in explosives-related experiments and research. For example, Los Alamos National Laboratory (Los Alamos) in New Mexico requires highly specialized test vessels to conduct essential nuclear weapons research. Specifically, Los Alamos’s Dual-Axis Radiographic Hydrodynamic Test Facility (DARHT) uses X-ray machines to record three-dimensional interior images of mock nuclear materials that are imploded using explosives. The exploding components are contained in steel vessels. This facility is unique because it is the world’s most powerful X-ray machine for analysis of these implosions (called hydrodynamic tests). The vessels used at DARHT are made of specialized steel that does not need to be heat-treated during repairs, allowing the laboratory to easily repair them after explosive testing. There is currently a small supplier base (domestic and international) for manufacturing these vessels. Los Alamos contractor representatives are concerned with vendor availability, capability, and willingness to produce vessels because of the small number the laboratory has purchased in the past; the laboratory currently has seven operational vessels. Also, contractor representatives said they are concerned that the workforce that knows how to create this specialized steel is nearing retirement.
To help ensure a continued future supply of the vessels, Los Alamos is working with Lawrence Livermore National Laboratory in California and the Nevada National Security Site, which use similar vessels, to develop a multi-year procurement plan to encourage suppliers to continue to produce the specialized steel used in their manufacture. NNSA supplier challenges are complicated further when a supplier is foreign or there is only one domestic supplier. According to NNSA documentation, using a foreign supplier may leave NNSA vulnerable to a potential national security risk. Even when the only supplier is domestic, single-point failure is a concern should that supplier delay or cease production, according to contractor representatives. NNSA officials provided an example involving Holston, NNSA’s sole supplier of TATB. According to NNSA officials and contractor representatives, Holston also serves DOD customers that order far larger quantities of explosives, and Holston is required to prioritize those customers’ orders using DOD procurement priority ratings, which may mean that NNSA orders are delayed. For example, Livermore placed an order for the W80-4 main charge explosives at Holston that was to be fulfilled by March 2019, but that order was delayed while the plant worked to finish a DOD order with a higher-priority rating. In addition to this delay, Livermore’s order will be further delayed because Holston had an explosive incident in January 2019 and ceased operations for 3 weeks, according to Livermore and DOD contractor representatives. As a result of both these delays, the W80-4 LEP will have to postpone a hydrodynamic test and other studies, complicating an already tight design and development schedule. This will delay the W80-4 LEP at least 2 months, according to Livermore officials. To minimize the potential for future production delays at Holston, NNSA is working to elevate the priority of all its orders for explosives. 
Some DOD nuclear weapon delivery platforms have the highest-priority DOD rating, and NNSA officials said they have received permission from DOD to apply this rating to the DOE explosives orders for the nuclear warheads associated with those delivery platforms, including explosive orders for the B61-12 LEP. NNSA officials said they cannot currently use the highest-priority rating for orders associated with the W80-4 LEP because the delivery platform for that LEP does not have the highest-priority rating. NNSA officials are working with DOD and DOE attorneys to obtain permission for using DOD’s highest-priority rating. A contractor representative at Livermore said that in addition to NNSA’s efforts, the Air Force is working separately to obtain permission to use the highest-priority rating for this delivery platform. If the Air Force is successful, NNSA could use that delivery platform’s new high-priority rating for its W80-4 LEP orders. The Livermore contractor representative said that they believe the Air Force will receive permission to use the highest-priority rating before NNSA does. In situations where a supplier cannot or will not produce a specialized material or related component, NNSA is exploring options for producing those materials or components itself. NNSA officials said that they are conducting an analysis of alternatives to meet synthesis, formulation, and production requirements to be completed by the spring of 2019. The analysis will include an option for in-house production of TATB at Pantex. NNSA documentation indicates that Pantex could independently produce the TATB needed for current and future LEPs and modernization efforts with a substantial investment, exact figures for which may be reported upon completion of the analysis of alternatives. Similarly, contractor representatives from Sandia said that in the absence of qualified suppliers, they are working to produce explosive materials, such as THKP, as discussed above.
NNSA Officials and Contractor Representatives Have Identified Infrastructure and Workforce Challenges and Are Taking Actions to Address Them, but NNSA Has Not Fully Addressed the Accuracy of Infrastructure Data

NNSA has also identified challenges with its explosives infrastructure, infrastructure data, and workforce. Specifically, NNSA’s infrastructure is aging and deteriorating, some infrastructure data are inaccurate, and some storage areas are near capacity. In addition, recruiting and training qualified staff have presented a challenge to NNSA. As we have previously reported, these challenges are shared across the nuclear security enterprise and are not confined to explosives activities. NNSA is taking several actions to address these challenges, as described below, but inaccuracies remain in the data on NNSA’s explosives-related assets.

Infrastructure Is Aging and Deteriorating

According to NNSA documentation, no mission risk is greater than the state of the agency’s aging infrastructure. The NNSA 2019 Master Asset Plan states that 40 percent of the explosives infrastructure of NNSA’s sites is insufficient to meet mission needs, which can lead to contamination of explosive products or limit the use of facilities. Contractor representatives told us that such contamination has occurred. For example, Pantex contractor representatives said that batches of explosives have been contaminated in its main formulation building by rust falling from the rafters and grass blowing through cracks in the walls. Similarly, Los Alamos contractor representatives said that detonator subassemblies have been rejected at the laboratory because of contamination from foreign debris, such as dust particles that enter through cracks in exterior doors. In addition, older facilities were not built to modern safety standards and pose risks to explosives activities and employees, according to contractor representatives and NNSA documents.
At Los Alamos, the design of several older facilities is insufficient to meet current needs, which negatively affects both productivity and safety. For example, Los Alamos’s High Explosives Chemistry Laboratory is a 1950s-era building that is difficult to adapt to modern instrumentation, and electrical and other system failures cause approximately 20 percent downtime, according to contractor representatives. Because the facility lacks adequate blowout walls, this building also operates in a state of continuous limited operation: the laboratory must work under a decreased net explosive limit to keep employees safe while handling explosive materials, according to contractor representatives. Contractor representatives at Los Alamos said that the decreased explosive limits in this facility have hampered their productivity levels. Contractor representatives at Pantex stated that the intrusion of water in key facilities poses electrocution risks, can damage expensive equipment, and can affect production because of downtime when explosives activities must be suspended because of severe weather. Further, we observed facilities at Pantex with water leaks in the roof and floor; some of these facilities house expensive equipment that must be stored under plastic sheeting to prevent water damage. One such facility is Pantex’s Analytics and Chemistry Laboratory, built in 1943 and shown in figures 4 and 5. NNSA and its sites have taken some actions to address this infrastructure challenge. For example, Los Alamos plans to replace its High Explosives Chemistry Laboratory by 2026, and Pantex recently constructed a new building to replace an aging pressing facility and has plans to begin construction on a new analytical laboratory and a formulation building in the 2020s. NNSA documentation states that the new pressing facility will improve operational safety and security, thereby enhancing the quality and efficiency of operations.
Pantex’s planned analytical laboratory and formulation buildings, however, will not be completed in time to support the currently scheduled B61-12 LEP and W88 alteration modernization effort. Further, according to NNSA officials and contractor representatives, site infrastructure modernization plans are budget dependent, and funding for infrastructure modernization efforts is not always certain.

Some Infrastructure Data on Explosives-Related Assets Are Inaccurate

Contractor representatives told us, and we observed during site visits, that some of the data on explosives-related assets in the FIMS real property database were inaccurate and out of date. NNSA policy and the FIMS user’s guide state that NNSA and sites should review and update the capabilities, or programmatic mission(s), associated with each asset, such as being explosives-related, every 5 years, or more frequently if mission requirements change or there are changes in an asset’s physical condition or use. However, 8 of the 22 randomly selected assets from the four sites that we observed contained data in FIMS that were inaccurate, either because the information on an asset was out of date or because the asset should never have been listed as explosives-related. Some contractor representatives told us that they did not understand why some of their sites’ assets had been characterized as primary assets related to the high explosives mission. For example, an inert storage closet at Pantex and a tool shed at Livermore were labeled as primary explosives-related assets, but according to contractor representatives, they can no longer be used to store explosives because they do not meet appropriate safety standards. Figure 6 illustrates the inert storage at Pantex, which officials said had not been used for any explosives operations for at least 20 years, despite “explosives storage” labeling on the door, but was still characterized as a primary explosives-related asset.
However, according to NNSA officials, NNSA was, at the time of our review, in the process of revising guidance on how to associate capabilities with assets. The contractor representatives may not have been aware of the initial guidance under which the asset was characterized or of the change underway at the time of our site visit. In other cases, contractor representatives told us that the asset name did not indicate its current use. For example, FIMS data on explosives-related assets at Los Alamos include a “plastics building” that had not been used for manufacturing and assembling plastics for 20 years. Although it currently houses explosives-related work, the building’s name in FIMS had not been updated. Additionally, Los Alamos’s FIMS data indicated that the site had a “day room” that, to contractor representatives’ knowledge, had never been used for any explosives activities, although its purpose has changed over time. We found additional inaccuracies related to various measures of explosives-related assets reported in FIMS. For example, we found at least 94 erroneous entries on the gross square footage of the 1,266 assets identified as having some type of explosives-related capability. Specifically, FIMS data indicated that a road at Livermore, a bunker at Sandia, and an asset named “recreational/fitness” at Pantex were 3, 1, and 2 gross square feet, respectively. The data listed replacement values of at least $1 million for each of these assets. Los Alamos’s data contained similar errors, such as electrical cables recorded as measuring zero square feet. NNSA officials and contractor representatives identified potential causes for inaccuracies in the FIMS data. For example, contractor representatives who work on explosives activities do not enter explosives-related asset information in FIMS, according to NNSA officials and contractor representatives.
Instead, FIMS administrators, who manage information on infrastructure across NNSA sites, said they update FIMS using information that subject matter experts or building managers provide to them, typically in an annual data call. FIMS administrators may therefore not be aware of information that is dated or otherwise incorrect for explosives-related assets. In addition, entering information in certain data fields in FIMS was difficult for assets that were not buildings, according to one FIMS administrator. For example, piping and other utilities may be replaced or updated in sections over time, and it can be difficult to know which date to record for age in FIMS. Because our review included only a limited sample of explosives-related assets, we could not determine the full extent of the FIMS data inaccuracies. NNSA managers use data from FIMS to inform planning for infrastructure modernization decisions. According to NNSA officials, data from FIMS feed into other databases that are used to inform infrastructure funding decisions, such as developing the Integrated Priority List that helps NNSA determine the most critical infrastructure modernization projects. While NNSA relies on these data to make planning and funding decisions, our observations of explosives-related assets show that these data may not be useful in informing the agency’s infrastructure modernization decisions. Federal internal control standards state that managers should make decisions using quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis.
By taking steps to improve the accuracy of FIMS data, such as by reviewing and updating information about associations of assets with their primary and secondary programmatic missions, ensuring that those who provide asset information to FIMS administrators understand the data they need to provide, and clarifying how to accurately enter information in FIMS for assets that are not buildings, NNSA would have more reasonable assurance that officials, contractor representatives, and the enterprise manager have the quality information necessary to support management decisions on explosives-related activities.

Storage Areas for Explosives Are Filled to or Near Capacity

DOE’s requirements for explosives storage limit the amount and type of explosives that can be stored in a single location, since certain explosives may react when stored together. Explosives must be properly stored throughout their life cycles, from the synthesis of raw explosives to their use in weapon assembly or testing. According to a senior NNSA official and site contractor representatives, some sites are running out of space where they can safely store explosives. As contractor representatives from Pantex told us and we observed on our site visit, bunkers for storing explosives are filled to or near capacity, especially for storage in high-security areas. According to contractor representatives, this is problematic because Pantex has the greatest need of all NNSA sites for explosives storage because of its role in producing explosives, receiving and holding explosive parts from across the nuclear security enterprise prior to weapon assembly, and assembling and disassembling weapons. Contractor representatives from Los Alamos also voiced concern about being near their capacity to store detonator cable assemblies and other explosives awaiting shipment for installation in weapons or for testing.
NNSA officials and contractor representatives said that they are tracking the shortage of explosives storage and in some cases have plans to expand current capacity. Los Alamos contractor representatives also said that they are moving forward with constructing a small staging facility that will be collocated with their detonator production facility. It is expected to cost less than $5 million, so it will not affect larger line-item infrastructure projects. Contractor representatives at Pantex explained that although some storage areas have been identified for replacement, they are, as yet, unfunded projects. In the near term, contractor representatives said that they have other, more pressing infrastructure modernization project needs than explosives storage. They said that they are closely monitoring their storage capacity and expect ongoing modernization efforts to free up some storage space as weapons are assembled.

Difficulties in Contractors Recruiting and Training Skilled Staff

According to NNSA documents and contractor representatives, the contractor workforce at NNSA sites needs to grow to meet the demands of ongoing and future explosives work, but contractors face difficulty recruiting and training qualified new staff to perform this specialized work, which often requires a security clearance. In 2018, Pantex estimated that it needed 211 FTE contractor staff to adequately carry out the site’s explosives activities. However, Pantex contractor representatives indicated that as of November 2018, they had 172 FTEs on board. A major recruitment challenge is competition from industry. Contractor representatives at multiple sites told us that they often compete with large corporations and industries in the local area that offer well-paying jobs for qualified new staff, such as for engineers.
For example, site contractor representatives told us that Los Alamos and Sandia compete with Facebook in Albuquerque to attract qualified staff, and Pantex competes with various oil and gas companies in Texas. To address this challenge, contractor representatives from Pantex have recently expanded outreach to local colleges and universities, and NNSA has held job fairs to attract new staff. Lengthy training and clearance processes that are required for specialized explosives work present another challenge. Pantex representatives said recent graduates are required to undergo on-the-job training that can take years before they are ready to safely engage in certain explosives activities. NNSA officials and contractor representatives said that this training challenge is exacerbated by delays in processing security clearances. NNSA contractor representatives said that some new hires have waited more than a year, and some more than 2 years, to receive clearances to conduct required work or training. In December 2017, we identified delays in obtaining personnel security clearances as a government-wide risk. We also added this issue to our March 2019 High-Risk List. To mitigate this challenge, contractor representatives from Pantex said that they are hiring students before they finish college so that security clearances can be granted by the time the students begin their first day on the job, or at least closer to that time. Los Alamos has decided to hire and train individuals without clearances, who must wear red vests and be escorted at all times while their clearances are finalized. We observed numerous workers in this temporary and escorted status during our site visit. Contractor representatives at Livermore said that they also use escorts for new staff without clearances.
However, contractor representatives said that requiring additional staff as escorts is costly, can decrease productivity, and has safety impacts because additional staff must be present during activities involving high explosives.

NNSA’s Strategic Plan for Explosives Does Not Describe Some Management Challenges and Is Not Fully Consistent with Leading Practices for Strategic Planning

NNSA’s 2018 strategic plan for energetic materials describes some identified explosives-related challenges discussed above, as well as further actions to address these challenges, but does not describe other challenges NNSA officials and contractor representatives identified. This strategic plan incorporates some leading practices for strategic planning. However, some of the strategic plan’s elements have not been fully developed consistent with selected leading practices for strategic planning.

NNSA’s Defense Programs Strategic Plan for Energetic Materials Describes Some Challenges NNSA Officials and Contractor Representatives Have Identified but Not Others

The strategic plan for energetic materials, which includes comments from coordinating committee members, describes some of the challenges that NNSA officials and contractor representatives identified in conducting explosives activities, which we discussed above. Specifically, it describes some challenges related to the supply of explosive materials and to infrastructure modernization, including the following:

Supply of explosive materials. The strategic plan describes both the supply of explosive materials and the supply of precursor ingredients as challenges facing NNSA. The strategic plan also identifies a number of actions NNSA is taking to bolster the supply chain, such as re-establishing the capability to manufacture THKP.

Infrastructure modernization.
The strategic plan notes that explosives-related “facilities require recapitalization to support LEP activities, improve efficiencies, reduce downtime, and maintain baseline capabilities.” It also identifies several interrelated actions NNSA is taking to address infrastructure challenges, such as repurposing some facilities and eliminating others that are inadequate, too costly to maintain, or no longer needed. In addition, the strategic plan describes the challenge of adequate storage for explosives and includes actions to annually monitor and track storage conditions at the sites as well as provide long-term, low-temperature, moisture-free storage for explosives. However, based on our review of the strategic plan, it does not discuss three of the challenges that NNSA officials and contractor representatives had identified: the quality of data on infrastructure information, workforce levels, and safety. First, the data quality challenge related to infrastructure information, such as inaccuracies in FIMS, is not discussed in the strategic plan, although NNSA officials and contractor representatives we interviewed identified it as a challenge that may affect NNSA’s planning and decision-making related to explosives activities. Second, the strategic plan does not discuss workforce challenges. While the strategic plan states that NNSA “recognize(s) that staffing is an important aspect for supporting energetics,” it assumes that ongoing efforts across the nuclear security enterprise related to workforce are successful. Since the enterprise manager does not track workforce levels across the enterprise, as previously noted, it is unclear how NNSA can determine if its contractors’ workforce efforts across the enterprise are successful and whether levels are adequate to achieve the goals of the strategic plan for explosives over time.
Third, outside of infrastructure improvement, the strategic plan also does not directly discuss the challenge of safety, although it affects all explosives activities and challenges that NNSA has identified. Because of the inherent danger of explosives activities, safety is important, and even when protocols are followed, unintended events can occur that affect human safety—as illustrated by a safety incident last year. The incident occurred at a Los Alamos facility in April 2018 when a small explosive pellet deflagrated during pressing, causing two people to incur short-term hearing loss. One of those people was an escort and was only required to be present because of the delay in security clearance processing, a challenge discussed above. According to a December 2018 Los Alamos document, pressing operations had resumed at the facility. Although the cause of the incident is still unclear, it provided an opportunity to make safety improvements in the facility at Los Alamos, according to contractor representatives. According to a Los Alamos document about the incident, a key lesson learned was that safety records like maintenance logs, blast calculations, and materials safety testing results need to be archived and readily accessible to staff before the start of any work activities. The inherent challenge of safety in explosives and key lessons learned, such as this one, are not discussed in the strategic plan. NNSA officials said that they are planning to revise the strategic plan for energetic materials in 2020 but did not state that they would include the challenges of data quality, workforce, or safety. All three of these challenges may impede NNSA’s ability to achieve the goals described in the plan for explosives activities. We have previously identified selected leading practices in strategic planning. 
These practices specify that agencies should define strategies that address management challenges that threaten an agency’s ability to meet its long-term strategic goals. As NNSA revises its strategic plan for energetic materials, by discussing the data, workforce, and safety challenges it faces and the actions it plans to take to address them, as appropriate, or by documenting the rationale for why the challenges were not included, NNSA would have better assurance that its strategies address these challenges.

NNSA Followed Leading Practices for Strategic Planning, but Some Elements Present in Effective Strategic Plans Have Not Been Fully Developed

In developing its strategic plan for energetic materials, NNSA followed several key leading practices in strategic planning that we have identified in our past work, including the following:

Involving stakeholders, such as federal agencies, state governments, or others, in defining the mission and desired outcomes helps ensure that their expectations and interests are met and that resources and efforts are targeted at the program’s highest priorities. When developing the strategic plan, NNSA shared a draft with members of the coordinating committee and incorporated their comments to ensure that their interests and expectations were met.

Assessing external and internal forces helps managers anticipate future challenges and make adjustments before potential problems become crises. For example, external forces (e.g., emerging technological trends and new statutory requirements) and internal forces (e.g., culture, management practices, and business processes) may influence the program’s ability to achieve its goals. When developing the strategic plan, NNSA officials and coordinating committee members considered external and internal forces.
For example, the officials and members discussed the availability of explosives from external suppliers, such as Holston, compared to the potential costs or challenges related to internal NNSA production of explosives.

Covering at least a 4-year period, while making adjustments as needed to reflect significant changes to the operating environment, is also a key strategic planning practice. The strategic plan covers more than 4 years of explosives activities. For example, there is a performance goal to re-establish a reliable THKP supply by 2024. In addition, NNSA officials have discussed their intention to update the plan as their operating environment changes.

Our past work has also shown that effective strategic plans should include specific elements. We reviewed NNSA’s Defense Programs Strategic Plan for Energetic Materials and found that the strategic plan includes most of these elements, but we also found that some of the strategic plan’s elements have not been fully developed. Specifically:

Mission statement. According to leading federal strategic planning practices, a comprehensive mission statement should explain why a program exists and what it does. The strategic plan does not clearly identify a mission statement but includes an overarching “strategy to ensure the availability of energetic materials and products for the stockpile.” When asked to identify the energetics mission statement, the two contractor representatives who led the development of the strategic plan told us that they consider this “strategy” to be the energetics mission. However, a strategy cannot be a mission, since a strategy is how a mission may be achieved.

Long-term strategic goals and objectives, strategies, and performance goals. There are several interrelated elements on long-term strategic goals, objectives, strategies, and performance goals, according to leading strategic planning practices.
These include that long-term strategic goals and objectives should specify how an agency will carry out its mission and explain what results are expected from the program. The strategic plan includes four long-term strategic goals for meeting its mission, some strategies for achieving its goals, and some performance goals to assess progress related to ensuring the availability of explosives. They are also logically linked to each other. For example, the strategic plan’s goal to sustain and modernize the energetics infrastructure relates to the strategic plan’s strategy to eliminate facilities that are inadequate, too costly to maintain, or no longer needed. However, we found that responsibilities for achieving the strategic plan’s four goals are not clearly assigned within NNSA, and the four goals are not consistently quantifiable. For example, the third goal is to “manage the energetics supply chain,” but the strategic plan does not specify who is responsible for achieving this goal within NNSA. Further, this long-term strategic goal is not quantifiable because it describes a general process and does not define the expected results, which may make it difficult for NNSA to assess progress in meeting the goal. According to leading strategic planning practices, strategies should be specific enough to allow an assessment of whether they will help achieve those goals, such as by describing the resources needed, including the staff responsible for achieving a program’s goals and objectives. We found that the strategic plan contains several strategies for achieving goals, but some of them are not specific enough to clearly identify the types of resources required, such as the parties responsible for achieving them.
For example, under the goal of managing the energetics supply chain, there is a strategy to “plan, track and assess the energetics strategic posture,” but the strategic plan does not specify what is meant by the energetics strategic posture or who is responsible for undertaking these actions. This strategy is also limited because it does not describe the resources needed to achieve the broader goal. According to leading strategic planning practices, performance goals should be used to assess progress toward long-term goals and should include (1) the specific activities within the program that will be assessed for performance and (2) the level of performance to be achieved for each measure. We found that the strategic plan has 50 performance goals, most of which were quantifiable, that is, able to be assessed for performance or progress. However, some were not quantifiable, such as the goal to “enhance or advance energetics formulations for additive manufacturing.” This performance goal also does not set milestones, such as a time frame for completion, or identify staff assigned to achieve it, contrary to leading strategic planning practices. Further, the level of performance for some goals was not fully developed. For example, the performance goal “to reduce substandard mission-critical facilities below 10 percent” does not clarify whether the goal is to reduce the current number of inadequate and substandard facilities by 10 percent (a change of about 50 facilities) or to reduce the total number of inadequate and substandard facilities to less than 10 percent of all facilities (a change of about 500 facilities). This performance goal also does not set time frames for measuring performance or list responsible parties associated with it.
Another performance goal that was not fully developed is to “manage the energetics supply chain,” which falls under the long-term strategic goal of “sustaining and modernizing the infrastructure.” In addition, this performance goal is identical to a long-term strategic goal in the strategic plan titled “manage the energetics supply chain.” A performance goal should not replicate a strategic goal, since long-term strategic goals are broader in nature than performance goals. Moreover, this particular performance goal is not quantifiable, does not set a time frame for completion, and does not list a responsible party to carry out specific activities to achieve the goal.

External factors. According to leading strategic planning practices, external factors that could significantly affect achievement of the strategic goals, such as economic trends or actions by Congress, state and federal agencies, or other entities, should be identified. The strategic plan identifies some external factors that could significantly affect the achievement of strategic goals. Specifically, the strategic plan notes that DOD’s demand for explosives from Holston could affect NNSA’s ability to achieve its goals. According to NNSA officials and documents, DOD’s demand for explosives is increasing, and Holston is already struggling to meet DOD’s needs. According to Holston contractor representatives, DOD is expanding Holston’s production capabilities for HMX, research department explosive (RDX), and insensitive munitions explosive (IMX), which, when completed, will relieve pressure on TATB production. In addition, the strategic plan identifies challenges to its supplier base, such as the difficulty of sourcing explosive materials from non-U.S. suppliers and the fact that the small size of NNSA’s orders provides limited economic incentive for commercial vendors. 
However, the strategic plan does not identify other external factors that could significantly affect the achievement of strategic goals, such as actions taken or not taken by Congress. Specifically, modernizing the explosives infrastructure will require appropriations for the significant capital investment needed, but the uncertainty of future appropriations in a challenging fiscal environment is an external factor not identified in the strategic plan. In addition, the strategic plan does not acknowledge that other NNSA programs may compete for funds or affect infrastructure modernization priorities at a given site. NNSA officials, including the enterprise manager, stated that they are aware that the strategic plan for energetic materials has limitations, such as performance goals that are not specific or are difficult to quantify. NNSA officials said that they released the strategic plan quickly because it was the first of its kind for explosives activities, and they believed the explosives community would receive the most benefit if it was published as soon as possible, even though it was not fully complete. Further, they said that they intend to revise the strategic plan in the next year or so. As NNSA revises its strategic plan for energetic materials, including fully developed elements of an effective plan—such as a clear mission statement and quantifiable performance goals that set time frames for completion and list responsible parties who will carry out specific activities for all strategic goals—would help NNSA make the strategic plan useful in measuring goal achievement and assessing accountability.

Conclusions

NNSA is undertaking an extensive, multifaceted effort to sustain and modernize U.S. nuclear weapons, and explosives are essential to the functioning of these weapons. Five NNSA sites conduct a range of interdependent activities to design and produce explosives. 
NNSA has identified several challenges in carrying out these activities and is taking actions to address them. For example, NNSA officials and contractor representatives identified challenges related to producing highly specialized materials and are working to re-establish their supply. However, NNSA managers may be relying on inaccurate FIMS data on infrastructure related to explosives activities to make modernization decisions, because we found a number of inaccuracies in FIMS data on explosives activities. NNSA officials and contractor representatives identified a few potential causes for these inaccuracies; however, because our review included only a limited sample of explosives-related assets, we could not determine the full extent of the FIMS data inaccuracies. According to NNSA officials, NNSA has taken some initial steps to revise guidance, which we find encouraging, as these revisions may help improve the accuracy of FIMS data. By taking additional steps to improve the accuracy of FIMS data—such as reviewing and updating information about associations of assets with their primary and secondary programmatic missions, ensuring that those who provide asset information to FIMS administrators understand the data they need to provide, and clarifying how to accurately enter information in FIMS for assets that are not buildings—NNSA would have more reasonable assurance that officials, contractor representatives, and the enterprise manager have the quality information necessary to support management decisions on explosives-related activities. In addition, the strategic plan for energetic materials—which represents a positive step toward managing explosives in a forward-looking, enterprise-wide approach—does not discuss three of the significant challenges that NNSA officials and contractor representatives identified related to explosives activities. 
NNSA officials said that they are planning to revise the strategic plan in 2020 but did not state that they would incorporate data quality, workforce, or safety challenges. As the agency revises its strategic plan for energetic materials, by discussing these challenges and actions planned to address them, as appropriate, or documenting the rationale for why the challenges were not included, NNSA would have better assurance that it is effectively managing challenges that present risks to achieving its objectives. The strategic plan for energetic materials also does not contain fully developed elements that we have previously reported that effective strategic plans should include, such as a fully developed mission statement and performance goals that are quantifiable, set time frames for completion, and list responsible parties to carry out specific activities. NNSA officials said that they intend to revise the strategic plan in 2020. As NNSA revises its strategic plan for energetic materials, by including fully developed elements of an effective strategic plan—such as a fully developed and clearly identified mission statement and performance goals that are quantifiable, have time frames for completion, and list responsible parties to carry out specific activities for all strategic goals— NNSA would help make the strategic plan more useful in measuring goal achievement and assessing accountability.

Recommendations for Executive Action

We are making the following three recommendations to NNSA: NNSA’s Energetic Materials Enterprise Manager and relevant NNSA officials and contractor representatives at NNSA sites should take steps to improve the accuracy of FIMS data related to NNSA’s infrastructure supporting explosives activities. 
These steps should include reviewing and updating information about associations of assets with primary and secondary explosives missions; ensuring that those who provide asset information to FIMS administrators understand the data they need to provide; and clarifying how to accurately enter information in FIMS for explosives assets that are not buildings. (Recommendation 1)

NNSA’s Energetic Materials Enterprise Manager, in consultation with members of NNSA’s Energetics Coordinating Committee, should, as the agency revises its Defense Programs Strategic Plan for Energetic Materials, include discussion of identified challenges related to explosives activities, such as data quality, workforce levels, and safety, as well as any actions to address them, as appropriate, or document the rationale for why identified challenges were not included. (Recommendation 2)

NNSA’s Energetic Materials Enterprise Manager, in consultation with members of NNSA’s Energetics Coordinating Committee, should, as the agency revises its Defense Programs Strategic Plan for Energetic Materials, include fully developed elements of an effective strategic plan, such as a clearly identified mission statement and performance goals that are quantifiable, set time frames for completion, and list responsible parties to carry out specific activities for all strategic goals. (Recommendation 3)

Agency Comments

We provided a draft of this report to DOE and NNSA for review and comment. In its written comments, which are summarized below and reproduced in full in appendix I, NNSA concurred with the report’s recommendations and described actions that it intends to take in response to our recommendations. NNSA also provided technical comments, which we considered and incorporated in our report as appropriate. DOE did not comment on our findings and recommendations. 
In response to our first recommendation, NNSA stated that it recognizes the need to improve infrastructure data consistency and accuracy and intends to complete several actions by March 31, 2020, to improve its infrastructure data. For example, DOE’s Infrastructure Executive Committee is conducting a comprehensive review of the existing 178 data elements in FIMS and has proposed deleting or adjusting 66, which it believes will sharpen its focus on data quality for the remaining data elements. In addition, among other actions, NNSA stated that it is implementing the Mission Dependency Index 2.0 initiative, which is expected to provide greater consistency and accuracy in reporting asset capability and determining consequence to mission. In response to our second and third recommendations, NNSA stated that it is planning to revise its Strategic Plan for Energetic Materials by October 31, 2019. NNSA stated that the update to its plan will include a discussion of the identified challenges to explosives activities as well as fully developed elements of an effective strategic plan. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or at bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Appendix I: Comments from the National Nuclear Security Administration

Appendix II: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Jonathan Gill (Assistant Director), Eric Bachhuber (Analyst in Charge), Natalie Block, Scott Fletcher, Ellen Fried, Rob Grace, and Dennis Mayo made key contributions to this report. Also contributing to this report were Cindy Gilbert, Penney Harwell Caramia, Dan C. Royer, Jeanette Soares, Kiki Theodoropoulos, and Khristi Wilkins.
Why GAO Did This Study

NNSA is responsible for the management and security of the U.S. nuclear stockpile. NNSA has ongoing and planned efforts to modernize nearly all of the weapons in the stockpile, which require new explosive components. The production of some key explosives ceased in the early 1990s, and much of the infrastructure supporting this work is aging, making it expensive and difficult to maintain. The Senate Report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to review NNSA's high explosive capabilities specific to nuclear weapons. This report examines (1) explosives activities that NNSA and its sites conduct and how NNSA manages them; (2) challenges NNSA officials and contractor representatives identified in conducting these activities and the extent to which NNSA has taken actions to address them; and (3) the extent to which NNSA's strategic plan for explosives activities describes further actions, if any, to address the challenges identified and follows leading practices for strategic planning. GAO reviewed NNSA documents related to explosives activities, including its strategic plan; compared the plan with leading practices; and interviewed NNSA officials and site representatives.

What GAO Found

Five National Nuclear Security Administration (NNSA) contractor-operated sites conduct activities to design and produce explosive materials. There are about 100 different nuclear weapon components that contain explosive materials (see figure). Each site assumes primary responsibility for certain activities, but most activities require collaboration by multiple sites, according to NNSA officials and contractor representatives. In 2018, NNSA began adopting a centralized approach to managing these activities and coordinating them across its sites. 
NNSA officials and contractor representatives identified several challenges related to explosives activities, such as the agency's dwindling supply of explosive materials, aging and deteriorating infrastructure, and difficulty recruiting and training qualified staff. For example, only a single container of one specialized material remains. NNSA officials and contractor representatives indicated that the agency is taking some actions to address these challenges, such as working to replenish the supply of dwindling, highly specialized materials. NNSA's strategic plan for explosives activities addresses some of the challenges agency officials and contractor representatives have identified, and NNSA followed several key leading practices in developing its strategic plan. However, some of the plan's elements have not been fully developed consistent with selected leading practices. For instance, the plan does not include a fully developed mission statement, and some performance goals are not quantifiable. NNSA officials stated that they are aware of the strategic plan's limitations and that they released it quickly to ensure that the explosives community could use it as soon as possible. NNSA officials said that they intend to revise the strategic plan in the next year or so. As NNSA revises its strategic plan, by including fully developed elements of an effective strategic plan, NNSA would help make the strategic plan more useful in measuring goal achievement and assessing accountability.

What GAO Recommends

GAO is making three recommendations, including that NNSA, as it revises its strategic plan for explosives activities, include fully developed elements of an effective strategic plan. NNSA agreed with GAO's recommendations.
Background

Section 653(a) Reporting Mandates

Congress first enacted Section 653(a) in the Foreign Assistance Act of 1971. According to Section 653(a), “not later than thirty days after the enactment of any law appropriating funds to carry out any provision of this Act (other than section 451 or 637) or the Arms Export Control Act, the President shall notify the Congress of each foreign country and international organization to which the United States Government intends to provide any portion of the funds under such law and of the amount of funds under that law, by category of assistance, that the United States Government intends to provide to each.” To provide Congress with the mandated data within the mandated time frame, State and USAID officials review the annual appropriations act and the accompanying joint explanatory statement to identify the congressional instructions contained within them. Although State has the delegated authority to approve the programming of foreign assistance funds and is charged with submitting the Section 653(a) report to Congress, State and USAID have shared responsibilities regarding the administration of certain foreign assistance accounts. Throughout this report we refer to the congressional instructions in the annual appropriations acts and those allocation tables within the joint explanatory statements that are incorporated by reference into the act as “requirements,” and we refer to Congress’s instructions to the agencies presented as additional language in the joint explanatory statement as “directives.”

Foreign Assistance Appropriations Accounts

Congress funds foreign assistance by appropriating funds to 16 accounts, each of which has a distinct purpose and specific legal requirements, such as the number of years the funds are available for obligation. Table 1 provides a summary of these 16 accounts. These accounts are generally administered individually by State or USAID, or jointly by both agencies. 
In addition, the period of availability for obligation for these accounts ranges from 1 to 5 years, or in some cases, until funds are expended.

Appropriations Requirements and Directives and Administration Priorities

The annual appropriations acts have hundreds of specific instructions—both requirements and directives—attached to many of the foreign assistance accounts that State and USAID address in the Section 653(a) report. According to State officials, the annual appropriations acts have become more detailed since the addition of the Section 653(a) mandates in the Foreign Assistance Act of 1971. For example, State officials said that when the Section 653(a) reporting mandate began the annual appropriations act contained fewer requirements and directives. The Foreign Assistance and Related Programs Appropriations Act, 1971, appropriated $2.2 billion in foreign assistance and was 18 pages in length. By contrast, the relevant portion of the annual appropriations act for fiscal year 2018 appropriated $33.7 billion in foreign assistance and was 138 pages in length. In addition, the accompanying joint explanatory statement was 9 pages in 1971 and 31 pages in 2018. As shown in table 2 below, during the 4 fiscal years covered by our review, the total number of requirements and directives addressed in the Section 653(a) reports has varied depending on congressional instructions within the annual appropriations acts and corresponding joint explanatory statements. For example, fiscal year 2016 had 1,056 total requirements and directives, while fiscal year 2018 had 657. The total number of requirements and directives has also varied by account. For instance, the Economic Support Fund, which was appropriated roughly $3.9 billion in fiscal year 2018, had 107 requirements and directives that instructed agencies how to allocate about $2.9 billion. Other accounts appropriated funds in fiscal year 2018 had fewer requirements and directives. 
For example, the International Disaster Assistance account, which was appropriated about $4 billion in fiscal year 2018, had three requirements and directives. The requirements and directives also vary in their specificity. For instance, of the $8.6 billion appropriated for Global Health Programs in fiscal year 2018, the joint explanatory statement required that $829.5 million be allocated toward maternal and child health. State and USAID were also required to make funds allocated for the Global Health Programs available in specific amounts, such as making $755 million available for activities addressing malaria in fiscal year 2018. State and USAID also balance the requirements and directives with administration priorities. For example, officials from OMB said that they review the Section 653(a) report to ensure that allocations for certain countries—such as Israel and Jordan—are consistent with the administration’s financial commitments to those countries. The text box below provides examples of the types of requirements and directives found in the fiscal year 2018 appropriations act.

Foreign Assistance Requirements and Directives

Congress includes a variety of instructions to the agencies managing foreign assistance funds in statutes, such as the annual appropriations acts, and in legislative history, such as the joint explanatory statements. These instructions come in two broad categories.

Requirements: Congress’s instructions to agencies as contained in the annual appropriations act, including mandatory and non-mandatory spending, and tables within the joint explanatory statement that are required by statute.

Directives: Congress’s instructions to agencies presented as additional language in the joint explanatory statement that are not required by statute. 
Examples

Mandatory requirement detailed in law: “Of the funds appropriated by this Act, not less than $400,000,000 shall be made available for water supply and sanitation projects pursuant to the Senator Paul Simon Water for the Poor Act of 2005 (Public Law 109-121), of which not less than $145,000,000 shall be for programs in sub-Saharan Africa, and of which not less than $15,000,000 shall be made available to support initiatives by local communities in developing countries to build and maintain safe latrines.”

Mandatory requirement referenced in law but detailed in the joint explanatory statement: Law: “For necessary expenses to carry out the provisions of chapter 4 of part II of the Foreign Assistance Act of 1961, $1,816,731,000, to remain available until September 30, 2019.” Joint explanatory statement: “Funds for certain programs under this heading are allocated according to the following table:” [Table omitted: it lists allocations for the Ambassador-at-Large for Global Women’s Issues, Conflict and Stabilization Operations, Disability Programs, Family Planning/Reproductive Health (U.S. Agency for International Development), House Democracy Partnership, Organization of American States, Polio, Reconciliation Programs, and Trade Capacity Building.]

Nonmandatory requirement detailed in law: “Of the funds appropriated by this Act under the heading ‘Economic Support Fund,’ up to $112,500,000 may be made available for assistance for Egypt, of which not less than $35,000,000 should be made available for higher education programs including not less than $10,000,000 for scholarships for Egyptian students with high financial need to attend not-for-profit institutions of higher education.”

Nonmandatory directive detailed in the joint explanatory statement: “The State Department Secretary and U.S. 
Agency for International Development Administrator are directed to provide no assistance to the central Government of the People’s Republic of China under Global Health Programs, Development Assistance, and Economic Support Fund, except for assistance to detect, prevent, and treat infectious diseases.” In addition, requirements and directives can be specific to a country or organization, specific to a sector, or be cross-cutting such that they may be applicable across multiple countries and accounts—as the examples above demonstrate.

State submitted Section 653(a) reports that provided the mandated data on how foreign assistance funds are allocated by country and account; however, State did not submit the reports within the mandated time frame during fiscal years 2015 through 2018. During the years covered in our review, State’s Section 653(a) reports provided information to Congress on the tens of billions of dollars for foreign assistance accounts specified in the annual appropriations act. In addition to detailing the category of assistance by account, the Section 653(a) reports further delineated funding by the countries and international organizations to which the foreign assistance was directed. State also included supplemental spreadsheets with the reports that outlined how the various requirements and directives in the annual appropriations act were to be addressed. In fiscal years 2015 through 2018, State submitted Section 653(a) reports an average of 169 days after the enactment of the annual appropriations act (or 139 days late). During those 4 fiscal years, State submitted Section 653(a) reports from 80 to 230 days past the 30-day mandated time frame for reporting, as shown in figure 1. During the 4 fiscal years covered by our review, State took the longest amount of time to submit the Section 653(a) report in fiscal year 2015. 
State officials explained that in fiscal year 2015 and prior years it generally took them longer to submit the report because they first submitted a draft Section 653(a) report to the House and Senate appropriations committees. According to State officials, the appropriations committees’ majority and minority staff then engaged in negotiations with each other and with State on the draft to reach agreement on the final allocation of funds. In addition, the fiscal year 2015 appropriations act allowed State to propose deviations from the requirements in the joint explanatory statement. Thus, the agencies submitted the draft report to the appropriations committees with allocations that, in some cases, varied from the levels Congress included in the tables in the joint explanatory statement. State submitted the draft Section 653(a) report for the fiscal year 2015 appropriations act in April 2015 with the proposed deviations and engaged in a 5-month negotiation process to finalize the allocation of funds in September 2015. According to State, Congress changed the Section 653(a) reporting requirements in the fiscal year 2016 appropriations act to forestall the months-long negotiation process with the appropriations committees that had occurred in prior years. The fiscal year 2016 appropriations act authorized State and USAID to deviate in their allocations by up to 5 percent from the mandated amounts in the tables of the joint explanatory statement. This change allowed State to submit the report in a more timely fashion than in fiscal year 2015. By specifying how much leeway State and USAID were allowed in their allocations, the agencies were able to develop their plans without submitting a draft Section 653(a) report and seeking further agreement from the appropriations committees. As a result, officials submitted the 2016 report in 110 days, compared with 260 days in 2015. 
State’s Timeliness in Submitting Section 653(a) Reports Was Affected by Its Complex Process, Data Collection Weaknesses, and, in Fiscal Year 2018, Staff Vacancies

We found that delays in submitting the Section 653(a) report were primarily attributable to State’s complex process to address appropriation requirements and directives while also reflecting administration priorities, as well as to data collection weaknesses. Nevertheless, State has not reviewed its process to identify and address such issues and other potential inefficiencies. Absent such a review, State is not in a position to improve its process to meet the 30-day mandate. In fiscal year 2018, State officials noted that reaching agreement on priorities within the new administration and staff vacancies also adversely affected the timeliness of the Section 653(a) report submission.

State Has a Multistep Process for Responding to the Section 653(a) Mandate That Is Not Designed to Meet the Mandated Time Frame

State has a multistep process to provide the mandated Section 653(a) report. Pre-appropriation preparatory work. According to State documentation, the process for responding to Section 653(a) mandates begins with State and USAID developing notional allocation estimates in a spreadsheet before the upcoming fiscal year’s annual appropriations act is passed. Allocation analysis and development. After the act is passed, USAID and State review their allocation estimates against the requirements and directives in the act and adjust their spreadsheet containing allocation estimates as necessary, taking into consideration policy direction from State and USAID leadership. According to State and USAID officials, detailed congressional instructions for particular accounts can limit the agencies’ ability to allocate funds according to the administration’s priorities and to consider country-specific foreign assistance needs. 
For example, Congress appropriated $876 million in fiscal year 2018 for the Nonproliferation, Anti-Terrorism, Demining, and Related Programs account and included 40 associated requirements and directives. State and USAID officials added that in order to satisfy all the requirements and directives they sometimes have to allocate appropriated amounts to address more than one requirement or directive. For instance, in the fiscal year 2018 Section 653(a) report, some of the funds allocated to meet an appropriation requirement for conventional weapons destruction were also designated as an allocation to satisfy a different requirement for humanitarian demining. Further, some amount of those funds satisfied a more specific requirement for humanitarian demining in Laos. State officials noted that by allocating appropriated amounts to more than one requirement or directive, they have greater flexibility to address administration priorities, while also meeting congressional instructions. Allocation negotiation, review, and agreement. State and USAID ensure that input from all the various parties is taken into consideration when further developing allocations. About 200 State and USAID bureaus and overseas posts review the allocations and propose changes in their copies of the spreadsheet that are then returned to State’s Office of U.S. Foreign Assistance Resources. According to State and USAID officials, they consider the proposed changes in light of emerging issues in selected foreign countries that may lead to the redirection of or changes to the proposed allocation of funds. State and USAID also review the proposed changes with agency leadership. OMB review and Section 653(a) report finalization and transmission. Once State and USAID agree on changes to the allocations, State submits the Section 653(a) report to OMB to be reviewed against the policy direction of the Executive Office of the President. 
State officials indicated that OMB feedback must be resolved before finalizing allocations. Concurrent with OMB’s review, State begins the process of finalizing allocation levels. Once State and USAID’s allocations are complete, State provides final allocation levels to bureaus and overseas posts and submits the Section 653(a) report to the relevant appropriations subcommittees. Given its complexity, State’s process is not designed to meet the mandated 30-day time frame. For example, in fiscal year 2018, State planned to complete the Section 653(a) process in 85 days. Figure 2 below outlines the stages of the Section 653(a) report development process and the targeted number of days for each stage during fiscal year 2018. The data developed for the Section 653(a) report plays a critical role in the obligation of tens of billions of dollars in foreign assistance funds appropriated annually. According to State and USAID officials, the agencies are constrained from obligating funds until the report is completed because a number of pre-obligation requirements are based on allocations in the Section 653(a) report. While the submission of the Section 653(a) report does not legally affect State’s ability to obligate foreign assistance funds, according to State and USAID officials, consultations, spend plans, and congressional notifications cannot be completed until allocation amounts are finalized through the Section 653(a) process. As a result, State and USAID officials said the amount of time it takes to submit the Section 653(a) report affects the obligation of funds.

Data Collection Weaknesses Lead to Data Discrepancies and Hinder Efficiency of State’s Process

State officials indicated that their process for collecting appropriations-related feedback and information from various offices, bureaus, and overseas posts necessitates significant staff time to correct data entry errors. 
Throughout the Section 653(a) process, State officials use a spreadsheet to consolidate information. For example, after State develops its initial allocations in the spreadsheet, it sends the spreadsheet to about 200 bureaus and overseas posts to review and make appeals related to account and country allocations, which State and USAID then take into consideration as they continue to modify the allocations. According to State officials, reviewing suggested changes to allocations from about 200 bureaus and overseas posts is time-consuming. The process is further complicated when officials find mistakes in the returned spreadsheets, such as incorrect formulas and currency formats. Occasionally, returned spreadsheets also include additional data columns that were not in the original documents. Such discrepancies make it difficult to merge and process all of the suggested changes and identify how the changes interact with the various requirements and directives. State officials said that these discrepancies occur because they do not have controls in place to prevent modification of the spreadsheet. For example, the formulas and format of the spreadsheet can be manipulated by the various individuals reviewing the document. In addition, the spreadsheet does not automatically verify that the changes proposed by the bureaus and overseas posts comply with the requirements and directives. Instead, officials have to individually compare the changes with the requirements and directives and ensure that they are in compliance. State officials indicated that it takes time and resources to discover and correct the errors, merge all of the spreadsheets, and ensure compliance, which contributes to delays in developing the Section 653(a) report.
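The automated compliance check that officials describe as missing could, in principle, be simple. The sketch below is purely illustrative—it is not State's system, and the account names, directive amounts, and totals are hypothetical—but it shows the kind of control a structured collection tool could apply to each returned set of allocations.

```python
# Illustrative sketch only: an automated check of proposed allocations against
# hypothetical funding directives. Account names and amounts are invented.

def check_allocations(allocations, directives, appropriated_total):
    """Flag allocations that fall below a directed minimum or that no longer
    sum to the appropriated amount."""
    problems = []
    for account, minimum in directives.items():
        amount = allocations.get(account, 0)
        if amount < minimum:
            problems.append(
                f"{account}: allocated {amount:,} is below the {minimum:,} directive")
    total = sum(allocations.values())
    if total != appropriated_total:
        problems.append(
            f"allocations sum to {total:,}, not the appropriated {appropriated_total:,}")
    return problems

# A bureau's returned spreadsheet, reduced to account -> proposed amount:
proposed = {"Demining": 150_000_000, "Humanitarian": 80_000_000}
directives = {"Demining": 200_000_000}  # hypothetical directed minimum
issues = check_allocations(proposed, directives, appropriated_total=230_000_000)
for issue in issues:
    print(issue)
```

Running such a check automatically on each returned spreadsheet would surface noncompliant changes before officials merge the roughly 200 responses by hand.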
According to State’s Foreign Affairs Manual, State must maintain effective systems of management control that are designed to provide reasonable assurance regarding the prevention of or prompt detection of errors and irregularities. State officials indicated that while they do correct errors and validate the data in the Section 653(a) report for accuracy before final submission, doing so takes time and resources, which adds to the total amount of time it takes to produce the report. Given the number of individual account and country allocations and the number of stakeholders involved in providing feedback, State officials acknowledged that their spreadsheet-based system is inadequate for the complexity of the task. State officials said that their existing data information system—the Foreign Assistance Coordination and Tracking System Info Next Generation—could potentially be modified to automate the distribution and collection of appropriations-related feedback from their offices and overseas posts, as well as to ensure that the changes comply with the annual appropriations act’s requirements and directives. Currently, State uses this system during the last phase of the Section 653(a) process to input the final allocations and share the Section 653(a) report with bureaus and overseas posts and the appropriations committees. While State officials said that they are exploring options to improve this process, they have not yet decided how to address weaknesses in their data collection system. State Views Its Current Section 653(a) Process as Necessary but Has Not Reviewed It to Identify Potential Inefficiencies State officials said that the Section 653(a) process they developed is necessary to address congressional instructions and administration priorities and because they use the allocations in the report as a basis for spend plans required for the obligation of funds.
Federal standards for internal control state that management should set objectives to meet the requirements of applicable laws and regulations. State officials noted that it might be possible to meet the 30-day mandate but that doing so would be inefficient because the subsequent report would need major revisions before allocations could be finalized. According to officials, that alternative process, while meeting the 30-day mandate, would likely further delay the development of spend plans and obligation of funds. State officials told us that they have informally suggested to the congressional appropriations committees that the mandated time frame for delivering the report be extended, but they said they have not formally requested that Congress amend the 30-day reporting mandate. State officials said that they would also need to engage in conversations with the authorizing committees responsible for making changes to the reporting mandate in the Foreign Assistance Act. GAO’s guidance on business process reengineering states that agencies should model their processes to identify problem areas and non-value-added activities that need to be changed or eliminated, such as excessive reviews. State officials said that, while they have made adjustments to improve their Section 653(a) process, they have not conducted a systematic review of the process since it changed in fiscal year 2016. Such a review could identify changes to expedite the completion and submission of the mandated report. Given that State’s process is not designed to meet the Section 653(a) 30-day reporting mandate, absent changes to its processes or to Section 653(a), State is unlikely to meet the mandate in the future.
In Fiscal Year 2018, State Officials Noted That Agreeing on Administration Priorities and Staff Vacancies Also Affected Timeliness in Submitting the Section 653(a) Report Agreeing on Administration Priorities According to State officials, reaching agreement on administration priorities affected the timeliness of their Section 653(a) report in fiscal year 2018. The current Secretary of State and USAID Administrator both had their first experience with the Section 653(a) process during fiscal year 2018, which led to more detailed review within both agencies than in previous years, according to State officials. In addition, State officials said that USAID recommended unanticipated and significant changes to the proposed allocations before OMB’s review. USAID officials said that significant changes were necessary because USAID disagreed with the allocations State proposed for USAID’s appropriations within the Global Health Programs, Development Assistance, and Economic Support Fund accounts. In fiscal year 2018, it took State and USAID 110 days to complete the allocation negotiation, review, and agreement step of the Section 653(a) process. According to State and USAID officials, they used 46 of the 110 days to reach agreement on the changes that USAID proposed. OMB officials noted that they needed to resolve policy issues concerning the administration’s foreign assistance priorities, which also contributed to delays. Once State sent the report to OMB in August 2018, OMB officials said that they approved the report after 36 days. OMB officials explained that in previous years they usually approved the report within 15 days. However, they said that they were working to resolve a policy issue with other offices in the Executive Office of the President and were therefore delayed in approving the fiscal year 2018 report. In total, it took State and USAID 189 days to produce the Section 653(a) report in fiscal year 2018.
In fiscal year 2018, State planned to complete the Section 653(a) process in about 85 days after the enactment of the appropriations act—which exceeds the 30-day reporting mandate, as shown in figure 3. In 2018, staffing gaps in State’s Office of U.S. Foreign Assistance Resources also affected the development of the Section 653(a) report. State’s Office of U.S. Foreign Assistance Resources is staffed by State and USAID-funded personnel and provides supervision and direction of State and USAID’s foreign assistance funding and programs. While the development of the Section 653(a) report is a critical task for the Office of U.S. Foreign Assistance Resources, the office and its staff are also responsible for developing a U.S. foreign assistance strategy, annual country-specific assistance operational plans, consolidated strategic and program plans, and operational budgets. State’s Office of U.S. Foreign Assistance Resources includes two subordinate offices involved in developing the Section 653(a) report, both of which had vacancies in 2018. Within the Office of U.S. Foreign Assistance Resources, the Resources and Appropriations office has primary responsibility for reviewing and identifying the Section 653(a) requirements and directives, but in 2018, five of 13 full-time equivalent positions, or 38 percent, were vacant. In addition, 10 of 41 full-time equivalent positions within the Regional and Global Affairs office, or 24 percent, were vacant. This office provides geographic and functional expertise to help develop and adjudicate allocations for the report. According to State officials, these vacant positions affected the timeliness of the Section 653(a) report in 2018 because staff in both offices assist with developing the report throughout the Section 653(a) process.
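The vacancy rates cited above follow directly from the position counts, and they also yield the combined figure used later in the report. As a quick arithmetic check:

```python
# Quick check of the vacancy rates cited in the report (vacant, total FTEs).
offices = {
    "Resources and Appropriations": (5, 13),
    "Regional and Global Affairs": (10, 41),
}
for office, (vacant, total) in offices.items():
    print(f"{office}: {vacant}/{total} = {vacant/total:.0%}")

combined_vacant = sum(v for v, _ in offices.values())
combined_total = sum(t for _, t in offices.values())
print(f"combined: {combined_vacant}/{combined_total} = "
      f"{combined_vacant/combined_total:.0%}")
```

This reproduces the 38 percent and 24 percent rates for the two offices and the 15 of 54 positions (about 28 percent) vacant across both.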
As previously shown in figure 3, most of the delays in the fiscal year 2018 process occurred during the allocation negotiation, review, and agreement phase—which relies heavily on officials from the offices experiencing staffing gaps. According to State officials, the staff shortfall affecting the development of the fiscal year 2018 Section 653(a) report was due to the State hiring freeze that affected the entire agency, as well as vacancies among USAID personnel assigned to State’s Office of U.S. Foreign Assistance Resources. State’s hiring freeze took effect in January 2017 and was lifted in May 2018. In addition, State officials said that USAID has not filled USAID-funded vacancies within State’s Office of U.S. Foreign Assistance Resources. In fiscal year 2018, nine of the 13 full-time equivalent positions in State’s Resources and Appropriations office were funded by USAID, of which four were vacant, and 21 of the 41 full-time equivalent positions in the Regional and Global Affairs office were funded by USAID, of which six were vacant. Our 2019 High-Risk Series report calls for agencies to design and implement action plans for closing skills gaps, which can arise when an agency has an insufficient number of people to complete its work. The report states that the action plan should define the root cause of all skills gaps within an agency and provide suggested corrective measures, including steps necessary to implement solutions. State officials said that they have received permission to fill the vacant State positions, and USAID has given permission to advertise two vacant USAID positions within State’s Office of U.S. Foreign Assistance Resources. The office is also requesting additional State full-time equivalent positions. Despite these efforts, State and USAID officials said that they do not have an action plan to address the vacancies.
Without a plan to fill these vacancies, the lack of staff resources will likely continue to affect the timeliness of the Section 653(a) reports. Conclusions Congress appropriates tens of billions of dollars for foreign assistance annually and mandates the President to report to Congress on how the U.S. government will allocate funds for foreign countries, by category of assistance, within 30 days of the enactment of the annual appropriations act. State and USAID have developed a complex process to balance how their allocations will meet the detailed requirements and directives within the annual appropriations acts, the administration’s priorities, and country-specific foreign assistance needs. However, State has been unable to meet the mandated time frame for submitting the Section 653(a) report for various reasons. Most importantly, State’s process for completing the various phases of the Section 653(a) process is not designed to meet the mandated 30-day deadline. Moreover, State officials have not systematically reviewed their process since it changed in fiscal year 2016 to identify areas that can be streamlined or eliminated to expedite the completion and submission of the report. Additionally, State’s system for collecting input on foreign assistance allocations from its various offices, bureaus, and overseas posts is prone to data entry errors that take extra time to correct, contributing to delays in submitting the Section 653(a) report. Further, State’s two offices primarily responsible for managing the Section 653(a) process had a substantial number of positions vacant in 2018 but did not have a formal plan to address the resulting skills gaps. Unless these challenges are addressed, State and USAID will likely continue to be in violation of their legal mandate for submitting Section 653(a) reports to Congress within 30 days after the annual appropriations act is enacted.
Recommendations for Executive Action We are making a total of three recommendations to State. The Secretary of State should ensure that the Director of State’s Office of U.S. Foreign Assistance Resources conducts a review of the Section 653(a) process to identify process steps that can be streamlined or eliminated and determine the time frame needed to prepare the annual Section 653(a) report. If State determines that the time frame exceeds 30 days, the office should coordinate with other appropriate officials to submit a legislative proposal to Congress to extend the mandated time frame for submitting Section 653(a) reports. (Recommendation 1) The Secretary of State should ensure that the Director of State’s Office of U.S. Foreign Assistance Resources improves the data collection from the many sources contributing to the Section 653(a) reports, such as by enhancing their data information systems. (Recommendation 2) The Secretary of State should develop a plan to address vacancies within State’s Office of U.S. Foreign Assistance Resources, consulting with the USAID Administrator as appropriate. (Recommendation 3) Agency Comments We provided a draft of this report to State, USAID, and OMB for review and comment. State and USAID provided written comments about the draft, which are reprinted in appendix II and appendix III, respectively. State also provided technical comments about the draft report, which we incorporated as appropriate. OMB did not provide comments on the draft report. State concurred with our three recommendations. USAID concurred with our first two recommendations; however, USAID’s written comments indicate that they do not believe staffing shortages at State were responsible for the chronic delays in the submission of the Section 653(a) report. While we do not report that staffing gaps were the primary reason for State not meeting reporting deadlines, we did find them to be a contributing factor to the delays in fiscal year 2018. 
State officials indicated that staffing gaps in their Office of U.S. Foreign Assistance Resources affected the development of the Section 653(a) report, contributing to delays. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Administrator of USAID, the Acting Director of OMB, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or bairj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) the extent to which the Department of State (State) met the data notification and timeliness mandates under Section 653(a) of the Foreign Assistance Act of 1961 (Foreign Assistance Act) for fiscal years 2015 through 2018, and (2) the factors that affected State’s ability to address Section 653(a) mandates for fiscal years 2015 through 2018. To examine the extent to which State has met the data notification and timeliness mandates under Section 653(a) of the Foreign Assistance Act, we reviewed State’s Section 653(a) reports for fiscal years 2015 through 2018 to assess whether they documented the amounts of U.S. foreign assistance to be provided to each foreign country and international organization, as well as the amounts provided by category of assistance. 
To determine the timeliness associated with the development and submission of State’s Section 653(a) reports for fiscal years 2015 through 2018, we also reviewed documentation to identify when the Department of State, Foreign Operations, and Related Programs Appropriations Acts were enacted and the mandated submission dates, and we compared those dates with the dates on which State submitted the reports to Congress. We used this information to generate a figure that shows the actual submission time frames for Section 653(a) reports during those years compared with the 30-day reporting mandate. We also interviewed officials from State, the U.S. Agency for International Development (USAID), and the Office of Management and Budget (OMB) to better understand how the agencies address the Section 653(a) mandates. To examine the factors that affected State’s ability to address Section 653(a) mandates for fiscal years 2015 through 2018, we reviewed State and USAID documents. We also interviewed State, USAID, and OMB officials to get their views on what factors, if any, affected the timeliness of the Section 653(a) reports. For those factors that we identified, we requested and analyzed additional information as described below. In reviewing State’s Section 653(a) process, we analyzed State and USAID guidance documents and reports developed to address Section 653(a) mandates. We reviewed State’s analyses that identified the requirements and directives in the annual appropriations acts and joint explanatory statements for fiscal years 2015 through 2018. These requirements and directives outline how the agencies should allocate the funding for programs and for countries and international organizations. In addition, we reviewed State’s and USAID’s guidance documents that outlined the Section 653(a) process.
Based on this information, we summarized State’s process and developed a figure that shows the major steps of State’s process, as well as the amount of time that each step lasted during the development of the fiscal year 2018 Section 653(a) report. We assessed State’s process against federal standards for internal control, which state that management should set objectives to meet the requirements of applicable laws and regulations. We also assessed the process against GAO’s guidance on business process reengineering, which outlines best practices on how agencies should model their processes. To examine the quality of the data State collects during development of the Section 653(a) reports, we reviewed whether State’s analyses followed State’s Foreign Affairs Manual requirement that State must maintain effective systems of management control designed to provide reasonable assurance regarding the prevention of or prompt detection of errors and irregularities. We analyzed State’s reports on the requirements and directives in the annual appropriations acts for fiscal years 2015 through 2018. In addition, we validated a judgmental sample of the requirements and directives that State identified to ensure that they were in the applicable appropriations act, joint explanatory statement, and reports from the appropriations committees in the Senate and House of Representatives. Although we identified an error in the appropriated amount recorded for the fiscal year 2016 International Narcotics Control and Law Enforcement account, we did not find errors specific to the requirements and directives State identified. Therefore, we concluded that the analyses were sufficiently reliable for our purpose, and we used State’s analyses to determine the total number of requirements and directives. Moreover, we reviewed the fiscal year 2015 through 2018 appropriations acts to identify the amounts appropriated for the accounts included in the corresponding Section 653(a) reports.
We also identified the purpose and the time frame during which the appropriations for each account were available for obligation in the fiscal year 2018 appropriations act. To examine the issue of staff vacancies in fiscal year 2018, we received data from State and USAID on staff vacancies in key offices involved in the development and submission of the Section 653(a) report. In addition, we interviewed State and USAID officials about the vacancies and whether they had developed plans to address them. We assessed whether State had designed and implemented action plans for closing skills gaps, which could include gaps caused by having an insufficient number of people to complete its work—as described in our 2019 High-Risk Series report. We conducted this performance audit from December 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of State Appendix III: Comments from the U.S. Agency for International Development Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Thomas Costa (Assistant Director), Mason Thorpe Calhoun (Analyst in Charge), Katya E. Rodriguez, Ashley Alley, Faisal Amin, David Dayton, Neil Doherty, Justin Fisher, and Melissa Wolf made key contributions to this report.
Why GAO Did This Study State and USAID were responsible for managing $33.7 billion in foreign assistance funds in fiscal year 2018. Section 653(a) of the Foreign Assistance Act of 1961 mandates the President to report to Congress, on an annual basis, funding allocations by foreign country and category of assistance within 30 days of Congress appropriating certain funds. State, in coordination with USAID, makes decisions on how to allocate the funds, taking into consideration congressional instructions, the administration's priorities, and country-specific foreign assistance needs. GAO was asked to review State and USAID's process to respond to Section 653(a). This report examines (1) the extent to which State met the mandates under Section 653(a) for fiscal years 2015 through 2018 and (2) factors that affected State's ability to address the mandates. GAO reviewed annual appropriations acts and Section 653(a) reports submitted during fiscal years 2015–2018, and met with State, USAID, and Office of Management and Budget officials in Washington, D.C. What GAO Found The Department of State (State), through its Section 653(a) report, has provided Congress with information on the allocation of U.S. foreign assistance funds to foreign countries and international organizations by category of assistance as mandated, but the reports were not submitted within the mandated time frame. Specifically, in fiscal years 2015 through 2018, State submitted Section 653(a) reports from 80 to 230 days past the 30-day mandate, as shown in the figure. Multiple factors contributed to delays in submitting the Section 653(a) report. First, State has developed a multistep process for responding to hundreds of congressional instructions each year, while also reflecting administration priorities, which is not designed to meet the mandated time frame. This process involves coordination with the U.S. 
Agency for International Development (USAID), about 200 bureaus and overseas posts, and the Office of Management and Budget. Even though State’s process is complex and does not meet the mandated time frame, State has not systematically reviewed the process since revising it in fiscal year 2016. State officials said that the process is necessary to address congressional instructions and administration priorities and because they use the allocations in the report as a basis for spend plans required to obligate funds. Second, a key part of State’s process, involving data collection, has weaknesses that lead to discrepancies and hinder efficiency. According to federal internal control standards, agency data systems should provide quality data that is free from errors. However, State’s mechanism for collecting information is a spreadsheet-based system susceptible to human error, and State does not have appropriate controls in place to ensure data consistency. Third, in fiscal year 2018, staffing gaps also affected the development of the Section 653(a) report. State’s two offices primarily responsible for managing the Section 653(a) process had 15 of 54 full-time equivalent positions vacant, which contributed to delays in submitting the Section 653(a) report, according to State officials. GAO has identified the filling of staffing gaps as a high-risk area that agencies should address. Unless State and USAID take steps to address these factors, they will continue to face challenges meeting their Section 653(a) requirements within the currently mandated time frame.
What GAO Recommends GAO is making three recommendations to State: (1) conduct a systematic review of the Section 653(a) process to identify inefficiencies and determine the amount of time needed to prepare the Section 653(a) report, and if it exceeds 30 days, request that Congress extend the mandated time frame; (2) improve data collection; and (3) develop a plan to address staff vacancies, in consultation with USAID as appropriate. State concurred with these recommendations.
gao_GAO-20-384
Background Executive Order 13548 committed the federal government to goals similar to those stated in an executive order issued a decade earlier and required federal agencies to take additional actions. Specifically, the earlier Executive Order 13163 called for an increase in the hiring of individuals with disabilities across the federal government and for agencies to develop plans for increasing employment opportunities for individuals with disabilities. The additional actions stated in Executive Order 13548 specified that federal agencies were to implement strategies for retaining federal workers with disabilities in federal employment, make increased use of the Schedule A excepted hiring authority for persons with disabilities, and designate a senior-level official to be accountable for meeting the goals of the order and developing and implementing the agency’s plan. In January 2017, EEOC issued a final rule amending the regulations requiring federal agencies to engage in affirmative action for individuals with disabilities. The rule codified many of the requirements placed on agencies by management directives and past executive orders, among other things. Agencies were to begin following the rule in January 2018. The revised regulation requires that agencies take specific steps until they meet specific employment goals set by EEOC for individuals with disabilities and targeted disabilities, provide personal assistance services to certain employees who need them because of a targeted disability, and meet a number of other requirements designed to improve employment opportunities for individuals with disabilities in the federal workforce. OPM, EEOC, and Labor each have roles in advancing the hiring and retention of persons with disabilities in the federal government. OPM is responsible for executing, administering, and enforcing the civil service laws, rules, and regulations.
This includes ensuring compliance with merit system principles that prohibit discrimination—including on the basis of disability—in all aspects of personnel management, among other things. Additionally, OPM is responsible for monitoring federal agencies’ implementation of affirmative action programs for disabled veterans, including providing technical assistance and reporting on progress made by agencies. EEOC, in the federal sector, is responsible for enforcing the employment discrimination prohibitions of anti-discrimination laws, including the Rehabilitation Act, which prohibits discrimination on the basis of disability. EEOC is responsible for the annual review and approval of agencies’ affirmative action program plans for the hiring, placement, and advancement of individuals with disabilities. It is also responsible for establishing procedures for handling federal employees’ allegations of discrimination and for providing for the adjudication of complaints and hearing of appeals. Labor’s Office of Disability Employment Policy (ODEP) is to provide national leadership in developing policy to eliminate barriers to the employment of individuals with disabilities in the public and private sectors. ODEP works in collaboration with federal, state, and local agencies, private sector employers, and employer associations to develop and disseminate evidence-based policy strategies and effective practices. The office also assists agencies and employers with adopting such policies and practices. Additionally, Labor administers the Federal Employees’ Compensation Act, which provides workers’ compensation coverage to federal employees for employment-related injuries and occupational diseases. Under Executive Order 13548, each of these agencies was assigned roles and responsibilities, often to be carried out in consultation with the others.
For example, OPM, in consultation with Labor and EEOC, was tasked with identifying and assisting agencies in implementing strategies for retaining federal employees with disabilities. OPM was also to consult with Labor, EEOC, and OMB in designing model recruitment and hiring strategies for agencies and developing mandatory training on employment of people with disabilities. Labor was to consult with OPM in pursuing innovative re-employment strategies and developing policies that foster improved return-to-work outcomes for employees injured on the job. OMB’s initial role was to convene federal agencies and assist their start-up efforts to implement the Executive Order, according to staff in OMB’s Office of Performance and Personnel Management. OMB staff told us the agency helped to establish a framework for coordination and collaboration among the key leadership agencies focused on making the federal government a model employer for persons with disabilities and to provide support for regulatory and policy initiatives related to disability employment. In 2015, in furtherance of an executive order focused on improving diversity and inclusion in the federal workforce, OMB joined OPM and EEOC in issuing a memorandum to all heads of executive departments and agencies announcing the establishment of the Diversity and Inclusion in Government Council. The council initially operated under the direction of OPM, OMB, and EEOC and was formed to provide a forum for improving senior leadership engagement and collaboration on strategic and operational diversity and inclusion priorities. OMB’s role has since diminished as it delegated much of the leadership responsibility to the other key leadership agencies. For reporting purposes, the federal government distinguishes between two major categories of disabilities: targeted disabilities and other disabilities.
Targeted disabilities, generally considered to be more severe, include traumatic brain injuries, deafness, blindness, partial or complete paralysis, significant mobility impairments, and psychiatric disabilities, among others. Other disabilities include such conditions as gastrointestinal disorders, cardiovascular or heart disease, autoimmune disorders, pulmonary or respiratory conditions, and learning disabilities. Federal statutes and regulations provide special hiring authorities for people with disabilities. These include the Schedule A excepted service hiring authority—which permits the noncompetitive appointment of qualified individuals with intellectual, severe physical, or psychiatric disabilities—and appointments and noncompetitive conversion for veterans who are 30 percent or more disabled. To qualify for a Schedule A appointment, an applicant must generally provide proof of disability. Proof of disability can come from a number of sources, including a licensed medical professional or a state agency that issues or provides disability benefits. The federal government gathers data on the number of individuals with disabilities in the workforce through OPM’s Standard Form 256, Self-Identification of Disability (SF-256). Federal employees voluntarily complete this form to disclose their disability status, as defined by the Rehabilitation Act. Our past work highlighted concerns about the accuracy of data captured in the SF-256. For example, we reported that agency officials and advocates for people with disabilities believe there is an undercount of employees with disabilities because some individuals may not disclose their disability status out of concern that they will be discriminated against or precluded from advancement. In addition, employees may develop a disability during federal employment and may not know how to or why they should update their status. Disability status information is confidential and cannot be used to affect an employee in any way.
Given our previously reported concerns, we recommended that OPM assess the extent to which the SF-256 accurately measures progress toward the goal of Executive Order 13548 and explore options for improving the accuracy of SF-256 reporting. To address our recommendation, OPM updated its 2012 Employee Feedback Survey to allow federal employees to confidentially self-disclose a disability and serve as a source of comparison through which OPM could assess the accuracy of the SF-256.

Federal Agencies Exceeded the Hiring Goal Set Forth in the Executive Order, but OPM Does Not Track or Report Retention Data

An Additional 143,000 Persons with Disabilities Were Hired Government-wide between 2011 and 2015

Federal agencies exceeded the government-wide goal to hire an additional 100,000 persons with disabilities in the federal government by 2015, according to our analysis of OPM’s EHRI data across the 24 CFO Act agencies. During fiscal years 2011 through 2015, a total of approximately 143,600 persons with disabilities were hired across all positions, which include full-time permanent positions and part-time or temporary positions. Of those hires, approximately 87,000—61 percent—were hired into full-time permanent positions. Hiring continued to increase in 2016 and 2017, as the federal government hired approximately an additional 79,600 persons with disabilities during those 2 years across all positions, of which approximately 49,200—62 percent—were full-time permanent positions. Figures 1 and 2 show the total government-wide number of persons with disabilities and targeted disabilities hired in fiscal years 2011 through 2017. Our determinations of the number of new hires each year were consistently lower than the numbers OPM included in its executive branch reporting. The discrepancy between our numbers and OPM’s reported counts is largely attributable to our exclusion of agency-to-agency transfers in our analysis.
For the purpose of our analysis of government-wide hiring, we excluded transfers because we did not consider them to be new hires, since those individuals remained employed in the federal government. Figure 3 shows the total government-wide number of persons without disabilities hired during the same time period. According to our analysis, a total of approximately 903,000 persons without disabilities were hired across all positions from 2011 through 2015. Of those hires, approximately 403,900—45 percent—were hired into full-time permanent positions. Hiring continued to increase, with an additional 377,150 hires in 2016 and 2017 combined across all positions, of which approximately 189,200—50 percent—were full-time permanent positions. The data shown in figures 1 and 3, and summarized in table 1, show that from 2011 through 2017, the percentage of hires with disabilities steadily increased from 11 percent to almost 20 percent. Our analysis at the agency level shows that all agencies increased the percentage of persons with disabilities hired from 2011 through 2017, and almost all agencies increased the percentage of persons with targeted disabilities hired over the same period. Table 2 shows this information by agency for fiscal years 2011, 2015, and 2017. We chose to present these years of data to mark the first and last years of the 5-year period specified in Executive Order 13548 and to also show the most recent data available at the time of our review.

Analyses of Retention Data Show Varied Results

As part of our analyses of individuals hired during the 2011 through 2017 time period, we analyzed employee retention in terms of the number of years an individual hired during that time period remained employed.
Across the federal government, of the more than 223,000 persons with disabilities hired during the 2011 through 2017 time period, approximately 39 percent stayed in the federal government for less than 1 year and approximately 60 percent stayed for less than 2 years, as shown in figure 4. These percentages are slightly better than the percentages of employees without disabilities who left within the same amount of time, as shown in figure 5. Across the federal government, of the more than 1.28 million persons without disabilities hired during the 2011 through 2017 time period, approximately 43 percent stayed in the federal government for less than 1 year and approximately 60 percent stayed for less than 2 years. The data shown in figures 4 and 5, taken together, provide an aggregate overview of government-wide hiring and retention trends of individuals with disabilities in comparison to those of individuals without disabilities. We found the trends to be generally consistent between the two employee groups during this time period, with the largest percentage of hires staying less than 1 year. These departures may be explained, in part, by the proportion of employees hired into temporary positions, who therefore were not necessarily expected to stay on the job for a longer duration, or by employees who did not meet performance standards. To pinpoint the root causes behind these departure rates and to determine where improvements and potential solutions may be warranted, targeted data collection, tracking, and analysis are needed. Moreover, the loss of such a substantial percentage of new hires within their first 2 years of employment provides an opportunity for the federal government to examine why this occurs, identify any lessons learned, and better target its retention efforts as appropriate to potentially reduce such early departures.
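The tenure calculation underlying these retention figures can be sketched as follows. The record format, field names, and cutoff date are illustrative assumptions for this sketch, not OPM’s actual EHRI schema.

```python
from datetime import date

def retention_shares(hires, cutoff=date(2017, 9, 30)):
    """Share of hires who stayed less than 1 year and less than 2 years.

    Each record is a (hire_date, separation_date) pair, with
    separation_date set to None for employees still on board at the
    cutoff (they are counted as retained through the cutoff).
    """
    under_1 = under_2 = 0
    for hired, separated in hires:
        # Tenure runs from hire to separation, or to the cutoff
        # for employees who never separated.
        tenure_days = ((separated or cutoff) - hired).days
        if tenure_days < 365:
            under_1 += 1
        if tenure_days < 730:
            under_2 += 1
    n = len(hires)
    return under_1 / n, under_2 / n
```

A hire cohort would be passed in once per employee group (with disabilities, targeted disabilities, without disabilities) to produce the kind of shares shown in figures 4 and 5.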
Further, these retention trends have implications for agencies’ ability to meet and sustain progress toward the federal goals of ensuring that at least 12 percent of their workforces consist of employees with disabilities, including 2 percent with targeted disabilities. In addition, we analyzed the number of persons with disabilities hired into each occupational category as identified in OPM’s EHRI database for fiscal years 2011 through 2015. The categories are administrative, blue collar, clerical, professional, technical, and other. Within each category, we identified the number of employees who remained in those positions for at least 2 years. Our analysis, summarized in table 3, shows that the highest retention rates for employees with disabilities and employees with targeted disabilities occurred in three categories: administrative, blue collar, and professional. For example, in the professional occupational category, the retention rates were approximately 48 and 43 percent for employees with disabilities and targeted disabilities, respectively—the highest levels of retention for persons with disabilities and targeted disabilities in any occupational category. However, the number of persons with disabilities hired into this category is considerably lower than that of non-disabled hires into the same category. Specifically, approximately 13 percent of persons with disabilities and approximately 11 percent of persons with targeted disabilities were hired into the professional occupational category. In contrast, as shown in table 3, 23 percent of persons with no disability were hired into this same occupational category and retained at a similar rate. Our analysis by GS level in table 4 shows that retention rates increase with GS level, regardless of disability, with retention rates being slightly higher for persons without disabilities at the top three GS levels.
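The 2-year retention rates by occupational category described above amount to a grouped tally. A minimal sketch, again assuming an illustrative record format rather than OPM’s actual EHRI schema:

```python
def retention_rate_by_category(hires, min_days=730):
    """Share of hires in each occupational category who stayed at
    least `min_days` (about 2 years).

    Each record is a (category, tenure_days) pair; category labels
    are illustrative (e.g., 'professional', 'clerical').
    """
    totals = {}    # hires per category
    retained = {}  # hires per category who met the tenure threshold
    for category, tenure_days in hires:
        totals[category] = totals.get(category, 0) + 1
        if tenure_days >= min_days:
            retained[category] = retained.get(category, 0) + 1
    return {c: retained.get(c, 0) / n for c, n in totals.items()}
```

Running this separately for each employee group would yield the category-level comparison summarized in table 3.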
Moreover, persons with disabilities and targeted disabilities were more likely to be hired at the lowest three GS levels, with one exception: persons with disabilities fared equally or relatively well at GS-11 and above compared with persons without disabilities or with targeted disabilities.

OPM Does Not Track Retention Data on Employees with Disabilities

OPM does not routinely track or report retention data on employees with disabilities, which could help inform both agency-specific and government-wide assessments of how the federal government is performing in retaining the employees it hires. OPM officials said OPM has the ability to track the retention of all employees in the federal government and can do so for any specific category of employees on an as-needed basis or upon request. For example, in 2015, OPM started reporting new hire retention data on employees who are veterans by including this information in its annual report on the employment of veterans in the federal government. This report also includes hiring data on disabled veterans. However, there is no similar OPM tracking or reporting of retention data for all individuals with disabilities, including targeted disabilities. The federal regulations, executive order, and management directive discussed earlier in this report all include statements about the importance of retaining individuals with disabilities in the federal government. For example, Executive Order 13548 stated that agencies must improve their efforts to employ workers with disabilities through increased recruitment, hiring, and retention of these individuals. Further, it stated that OPM, in consultation with Labor and EEOC, shall identify and assist agencies in implementing strategies for retaining federal workers with disabilities in federal employment. Federal regulations state that agencies shall give full consideration to the retention of qualified individuals with disabilities in the federal workforce.
EEOC’s MD 715 requires agencies to conduct an internal review and analysis of the effects of their current and proposed policies, practices, procedures, and conditions that relate to the employment—including retention—of individuals with disabilities. Making use of the agency-specific data OPM already gathers in its EHRI database, complemented with the retention information agencies report in their annual MD 715 submissions, would help facilitate more comprehensive analyses of the retention of employees with disabilities across the federal government. Such analyses could provide a fuller picture of how the federal government is performing in retaining the employees it hires, help identify common agency experiences—both successes and challenges—and assist in pinpointing the root causes that contribute to retention rates of employees with disabilities in the federal workforce. Making retention data available to federal agencies for such use is also consistent with a federal internal control standard stating that management is to obtain relevant data from reliable internal and external sources in a timely manner so that the data can be used for effective monitoring. Without routinely tracking and analyzing data on how long employees with disabilities remain employed in their agencies, federal managers are limited in their ability to assess the performance and effectiveness of the hiring and retention efforts put in place at their agencies. In addition, agencies are missing opportunities to leverage such information to help inform their own internal reviews and analyses of progress in meeting the goals, included in federal regulations, that at least 12 percent of their workforces consist of employees with disabilities, including 2 percent with targeted disabilities.
Selected Agencies Used Various Practices to Increase Hiring but Opportunities Exist to Examine the Impact of Schedule A Hiring Authority and Enhance Reasonable Accommodation Programs

The three agencies we selected as case illustrations generally experienced increases in the percentage of employees hired with disabilities and targeted disabilities. Table 5 shows the percentage of employees hired by each agency in fiscal years 2011, 2015, and 2017. We chose to present these years of data to mark the first and last years of the 5-year period specified in Executive Order 13548 and to also show the most recent data available at the time of our review. For our analysis of individual agency-level hiring data, we included transfers in cases where employees transferred into an agency because we considered that to be a new hire at the individual agency level. Similar to the government-wide retention analysis described earlier, we also examined retention data at DOJ, SBA, and SSA. Of the employees with disabilities hired at DOJ and SSA from 2011 through 2017, approximately 31 percent and 33 percent, respectively, stayed in the federal government for less than 1 year, and approximately 53 percent and 51 percent, respectively, stayed for less than 2 years. These retention rates were slightly better than government-wide rates. In contrast, approximately 65 percent of employees with disabilities hired at SBA during that time period stayed for less than 1 year, and an additional 9 percent stayed for less than 2 years. These departures may be explained, in part, by the proportion of employees hired into temporary positions, who therefore were not necessarily expected to stay on the job for a longer duration. For example, SBA staff said that, on average, 45 percent of SBA’s workforce consists of temporary employees hired by the agency’s Office of Disaster Assistance during a disaster.
As such, SBA expects turnover among those hires, including employees with disabilities. Similar to our analysis of government-wide retention rates by GS level and by occupational category, we identified the number of individuals hired at each of the three selected agencies during fiscal years 2011 through 2015 who stayed for at least 2 years. We found that, generally across the three agencies, employees with disabilities were retained longer at the higher GS levels. As the GS levels increased, individuals without disabilities retained their jobs at a slightly higher rate than individuals with disabilities. Our analysis of occupational categories found that, in general, the three agencies each retained people with disabilities at lower rates than people without disabilities. More detailed hiring and retention data for each of the three agencies are included in appendix I.

Selected Agencies Collaborated and Shared Information to Aid Recruitment and Hiring of Individuals with Disabilities

To aid recruitment and employment opportunities for individuals with disabilities, the three agencies we interviewed reported using (1) collaboration with other federal agencies for knowledge and information sharing and (2) coordination with employee resource and advisory groups. The following examples are illustrations of practices that the selected agencies implemented. We did not assess the effectiveness or attempt to quantify the costs or benefits of these practices. Two agencies provided examples of their collaboration with other federal agencies for knowledge and information sharing. For example, DOJ officials told us that staff from their agency’s Criminal Division participated in an OPM effort using a “Resume Mining” feature in the USAJOBS Agency Talent Portal, in which the division’s human resources specialists searched through active resumes and filtered the searches based on candidates who were eligible to be hired noncompetitively under the Schedule A hiring authority.
According to SBA officials, they used the Workforce Recruitment Program—a resource managed through Labor to help federal hiring managers connect with qualified candidates with disabilities for all jobs. SBA also retains a repository of resumes of individuals with disabilities to share with hiring managers. In 2015, to assist hearing-impaired candidates and in a joint effort with the Federal Communications Commission, SBA hired staff fluent in American Sign Language (ASL) to provide video relay services directly to the deaf and hard-of-hearing communities. As a result, SBA officials told us, SBA’s ASL customer support staff is able to communicate with and assist hearing-impaired job candidates. SBA also developed a National Strategic Recruitment Plan, which highlights Labor’s Workforce Recruitment Program for College Students with Disabilities. SBA officials said this plan has served as a successful tool for recruitment and hiring managers within their agency. Two of the three selected agencies we reviewed, DOJ and SSA, have disability employee resource or advisory groups made up of employees and management. These groups generally include a variety of representatives from across the agency, including human resources professionals, hiring managers, recruitment coordinators, and employees with disabilities. The purpose of these groups includes helping to identify policies and procedures that support a positive work environment for people with disabilities. For example, DOJ’s Attorney General’s Advisory Committee for People with Disabilities (AGCPD) meets quarterly and works with DOJ management on disability employment issues. AGCPD members told us one of their most significant contributions has been assisting with the development of an agency-wide policy that helped increase the use of the Schedule A hiring authority between 2010 and 2012. As a result, the number of individuals with disabilities hired at DOJ increased, according to AGCPD members.
However, they said the agency has been unable to sustain those numbers in recent years. DOJ staff said this may also be attributed, in part, to a hiring freeze across DOJ at the time that affected all hires. AGCPD members also told us they routinely review DOJ’s disability hiring and retention percentages to monitor agency progress on this issue. According to SSA officials, SSA’s employee advisory group, the National Advisory Council of Employees with Disabilities (NACED), advises the agency regarding reasonable accommodations, recruiting, and creating pathways for promotions and retention of employees with disabilities. SSA’s management was involved in establishing guidelines for the advisory group to operate within the agency. NACED has a senior executive service member who serves as the council’s liaison with SSA senior management. NACED assisted in the creation of mandatory agency training for managers and employees at SSA on disability awareness and sensitivity. The group also assisted the agency in producing a video that features SSA employees with disabilities and is available on SSA’s intranet website. The advisory group also assisted the agency in ensuring SSA’s systems are compliant with assistive technology. In addition, according to SSA officials, the agency has placed designated Selective Placement Program Coordinator (SPPC) points of contact in each of its regional offices to support disability recruitment and hiring efforts. SSA officials told us the role of their SPPCs has been instrumental in building coalitions and networks with internal and external stakeholders, including connecting SSA’s human resources, equal employment opportunity (EEO), and employee affinity groups. SSA officials said these connections enable their agency to acquire the information needed to make informed disability employment and general EEO program and policy decisions.
Selected Agencies Provided Schedule A Training but Do Not Measure Its Impact

As noted earlier, federal statutes and regulations provide special hiring authorities for people with disabilities, including Schedule A hiring authority. Agencies are not required to use Schedule A authority and can choose to use the traditional competitive process to fill job vacancies. However, Executive Order 13548 called for increased utilization of the federal government’s Schedule A excepted service hiring authority for persons with disabilities, as appropriate. Consistent with federal emphasis on the use of Schedule A, all three selected agencies reported to us that they provide training on Schedule A hiring authority to their hiring managers and human resources professionals. For example:

According to SBA officials, the agency provides supervisory training to all hiring managers and supervisors to emphasize Schedule A hiring authority, among other hiring flexibilities.

SSA officials told us their agency holds annual mandatory training for managers and human resource specialists on special hiring authorities that apply to individuals with disabilities, including Schedule A, and on reasonable accommodations. SSA also provides a manual to its managers focused specifically on recruitment, interviewing, and hiring related to Schedule A authority.

DOJ officials told us their agency participated in ongoing training and other initiatives designed to increase the use and understanding of Schedule A.

Nevertheless, the agencies we spoke with reported that some hiring managers and human resources staff are unfamiliar with or unsure of how to use the Schedule A hiring authority. Consequently, the agencies have found that there is a continual need to increase hiring managers’ awareness of Schedule A and to educate both managers and human resource personnel on the use of the hiring authority.
For example:

SBA officials said their managers often have questions about what Schedule A is and how to use it in the hiring process.

SSA officials said they continue to receive questions about the hiring authority from their newer managers, which they address on a case-by-case basis.

Similarly, the key leadership agencies underscored this as an issue they have seen government-wide in their experience. For example, EEOC staff said that because hiring managers change frequently, the use of the Schedule A hiring authority may be a topic that was not part of their previous work experiences or portfolios. EEOC officials said that all managers could benefit from more training to understand how and when it is permissible to use the special authority to hire individuals with disabilities. To help address issues around the use of Schedule A, officials from the key leadership agencies emphasized the importance of federal agencies having designated staff familiar with disability issues, such as an SPPC, whose job responsibilities include helping to educate and train the workforce on disability issues such as the use and benefits of the Schedule A hiring authority. Consistent with this guidance, two of the three agencies use SPPCs to provide guidance and, in one case, training. For example:

SBA’s SPPCs frequently provide guidance on the option to utilize the Schedule A hiring authority prior to opening a competitive job announcement on USAJOBS.

SSA has designated SPPCs in each of its regional offices. The SPPCs provided guidance and training to managers on the appointment of individuals with disabilities using the Schedule A appointment authority. As a result, in fiscal year 2019, SSA officials said these efforts contributed to their agency filling more than 250 positions using the Schedule A hiring authority.

Additional opportunities exist to further address issues around the use of Schedule A.
We have previously reported that training at all staff levels, in particular training on hiring, reasonable accommodations, and diversity awareness, can help disseminate leading practices throughout an agency and communicate expectations for implementation of policies and procedures related to improving employment of people with disabilities. In addition, our past work has underscored the importance of assessing and measuring the impact of training to determine how it contributes to the accomplishment of agency goals and objectives. Moreover, a leading training investment practice is to evaluate the benefits achieved through training, such as having a formal process for evaluating improvement in performance and tracking the impact of training on the agency’s performance goals. While assessing training is important, the three selected agencies said they do not assess the impact of their training related to Schedule A. For example, according to SBA officials, their training covers a range of hiring flexibilities beyond Schedule A. As such, SBA officials said they are unable to evaluate the effect of the training to specifically measure an increased level of hiring managers’ and human resources professionals’ understanding of how and when to use Schedule A authority. SSA officials told us that while their agency does not evaluate its training, the agency is developing an evaluation module to allow employees and managers to provide feedback on the effectiveness of their Schedule A training. However, SSA did not provide a time frame for completing the module. DOJ staff said training is provided by its various component agencies and is updated when appropriate. However, DOJ did not provide any further details about the frequency or content of this training or the results of any evaluations.
Without evaluating the impact of training to ensure that hiring managers understand how and when to use the Schedule A hiring authority, agencies may be missing opportunities to enhance awareness of and sensitivity to disability issues and to increase the number of employees with disabilities across the federal workforce.

Reasonable Accommodations Were Often Low Cost; Feedback on Accommodations Is Not Always Collected

Federal agencies are required to provide reasonable accommodation to qualified employees or applicants with disabilities, unless doing so would cause undue hardship. In general, a reasonable accommodation is a change in the work environment or in the way things are customarily done that enables an individual with a disability to apply for a job, perform the duties of a job, or enjoy the benefits and privileges of employment. Officials from the three selected agencies indicated that many reasonable accommodations involve low or no cost to their agencies, often involving minor changes to an employee’s workspace or work schedule, or modifications to work-related technologies. For example, the most common reasonable accommodation requests cited by each of the agencies included: providing ergonomic adjustments or modifications to the layout of workspaces; adjusting work schedules to allow employees with chronic medical conditions to attend medical appointments and complete their work at alternate times or locations; providing sign language interpreters or closed captioning at meetings; and making materials available in braille or large print.
In addition, according to information posted on the website of Labor’s Office of Disability Employment Policy, examples of other job accommodations that are low cost and often involve minor changes to a person’s work environment include: physical changes, such as installing a ramp or modifying restrooms; accessible and assistive technologies, such as providing screen reader software or using videophones to communicate with employees who have impaired hearing; and policy enhancements, such as allowing service animals in the workplace. Federal agencies are required to post on their websites, and make available to all applicants and employees in written and accessible formats, their procedures for reasonable accommodation. Agencies are also required to collect specific information about each reasonable accommodation request, including whether the accommodation was granted and the basis for any denial. All three of the selected agencies indicated in their 2018 MD 715 reports to EEOC that they have these procedures in place and are in compliance with EEOC regulations and guidance. While the three selected agencies reported they have processes in place for receiving reasonable accommodation requests, only SSA has procedures for obtaining feedback from employees after an accommodation is provided. According to SSA officials, the agency offers employees who have requested job accommodations various opportunities to provide feedback to agency management about their reasonable accommodation experience. For example, SSA officials said their agency uses a dedicated email inbox and telephone number to receive inquiries and feedback from reasonable accommodation customers and stakeholders. Both are monitored daily by the agency’s Center for Accommodations and Disability Services (CADS) to ensure emails and calls are logged and tracked.
Additionally, according to agency officials, if an employee prefers to contact the reasonable accommodations office anonymously, the employee can complete the anonymous Process Improvement Comments Survey to submit concerns, comments, or recommendations for improving the reasonable accommodations process. To address issues and concerns received through any of these means, CADS staff reach out to the relevant managers, as appropriate, and share information only on a need-to-know basis, or as otherwise required by applicable law. According to SSA officials, SSA’s policy also requires that managers or CADS staff confirm with the employee that a job accommodation was received and is effective prior to closing the request in the agency database. Finally, SSA’s policy requires supervisors to continually engage in this interactive process to ensure the continued effectiveness of job accommodations. In contrast, DOJ and SBA officials reported that their agencies do not have any specific procedures in place to solicit ongoing feedback from employees who request reasonable accommodations. Staff from both agencies said that communication between the supervisor and the individual needing a reasonable accommodation is encouraged. In general, if an afforded accommodation is ineffective or needs modification, the employee and supervisor are responsible for contacting the appropriate disability employment program manager to address the issue. Federal agencies are not explicitly required to obtain feedback from employees about the effectiveness of their job accommodation experience. However, EEOC policy guidance states that agencies should keep cumulative records for at least 3 years to track their performance with regard to providing reasonable accommodations to employees. Tracking performance over a 3-year period is critical to an agency’s ability to assess whether it has adequately processed and provided reasonable accommodations, according to EEOC guidance.
Agencies are encouraged to use this tracking information to evaluate whether and where they need to improve their handling of reasonable accommodation requests. In addition, this type of monitoring is consistent with federal internal control standards. Specifically, the standard calls for ongoing monitoring to be built into the entity’s operations, performed continually, and responsive to change. Without periodically soliciting, obtaining, and documenting employee feedback on agencies’ reasonable accommodation efforts, management is missing opportunities to evaluate the effectiveness of these programs, identify potential risks, and identify any improvements that may be warranted. For example, such information could provide valuable insights about the timeliness of processing and fulfilling employees’ requests and the ongoing effectiveness of an accommodation. In some cases, an accommodation may no longer be effective for various reasons, such as when the employee’s limitations change, workplace equipment changes, job responsibilities change, or the accommodation involves equipment or software that requires maintenance or updates.

EEOC, OPM, and Labor Have Coordinated Roles to Assist Agencies

EEOC, OPM, and Labor took various actions during the 5-year period specified under the executive order for meeting the government-wide hiring goal and have continued their efforts. For example, the agencies began to meet quarterly immediately after the executive order was signed to establish collaborative actions they could take to increase disability hiring and retention and to discuss best practices focused on hiring and retaining individuals with disabilities. Officials from OPM, EEOC, and Labor continue to meet quarterly as participants in an interagency working group called the Federal Exchange on Employment and Disability (FEED).
FEED meetings cover a broad range of federal disability topics, including sharing best practices and establishing collaborative partnerships designed to make the federal government a model employer of people with disabilities. For example, at one FEED meeting, OPM announced a new resource to help address some common questions OPM receives about Schedule A. At another FEED meeting, OPM and EEOC officials discussed possible strategies agencies can consider when they are planning to re-survey their agencies through the Standard Form 256, Self-Identification of Disability (SF-256), such as initiating the re-survey campaign during Disability Awareness Month when there is increased attention on disability issues. OPM assisted agencies with disability hiring plans and authorities and compiled government-wide data. Under EO 13548, OPM was required to implement a system for reporting regularly to the President, heads of agencies, and the public on agencies’ progress in implementing their disability hiring plans and meeting the objectives of the executive order. In May 2012, we reported on OPM’s progress in reviewing agencies’ hiring plans and found that many plans had deficiencies that needed to be addressed. For example, not all plans identified a senior-level official responsible for development and implementation of the plan. We recommended that OPM incorporate information about such deficiencies in its external reporting. OPM did so, and also worked with agencies to correct any plan deficiencies by November 2012. In 2016, OPM issued its capping report announcing the success of the government’s effort, which included a summary of the initiatives taken to improve agency coordination, education, and training accompanied by a series of tables showing the composition of disability hires across the federal workforce. 
OPM also continues to collect government-wide disability data, which is available to agencies through the MAX.gov web portal, and provides assistance to agencies upon request. In October 2018, the Director of OPM issued a joint memorandum with the Chair of EEOC to the Chief Human Capital Officers Council regarding updates to the SF-256 to reflect changes to terms used to describe targeted disabilities, serious health conditions, and other disabilities. As discussed in an earlier section of this report, individuals use this form to voluntarily self-identify a disability, and OPM uses the information provided through this form for data collection purposes only. The revised form includes simplified condition descriptions and provides respondents with the option of identifying if they have a targeted disability, disability, or serious condition without specifying a diagnosis. SF-256 continues to be the primary tool for measuring the workforce participation of persons with disabilities in the federal government. The joint memorandum reminded agencies that OPM and EEOC are available to assist agencies in their efforts to help employees self-identify as people with disabilities and people with targeted disabilities, as appropriate. EEOC collects information through MD 715, issues regulations, and provides technical assistance. EEOC’s ongoing data and information collection efforts under MD 715 require agencies to report annually on the status of their equal employment opportunity programs. This includes agency-specific self-assessments of the extent to which they are meeting their responsibilities to provide employment opportunities for qualified applicants and employees with disabilities and targeted disabilities. If agencies identify any barriers to the equal employment of persons with disabilities, they must work to eliminate the barrier. 
Part J of EEOC’s MD 715 annual reporting requirement captures agencies’ descriptions of how their affirmative action efforts improve the recruitment, hiring, advancement, and retention of applicants and employees with disabilities. According to EEOC’s guidance to agencies, Part J is to assist agencies in meeting the requirements for an affirmative action plan. Specifically, Part J requires agencies to examine employment trends and participation rates of persons with reported and targeted disabilities in agency programs. In 2017, Part J was revised and now solicits agency information about voluntary and involuntary separations of employees with disabilities. For example, agencies are to confirm whether voluntary and involuntary separations occurred at a rate exceeding that of employees without disabilities. Agencies are required to complete Part J and, for transparency purposes, post their affirmative action plans on their external websites. The importance of this type of information is underscored by the analysis summarized in an earlier section of this report showing that approximately 60 percent of persons with disabilities hired into the federal government during 2011 through 2017 stayed for less than 2 years of service. Also as noted earlier, opportunities exist to enhance the collection and analysis of retention data and to learn about what factors contribute to retention rates of employees with disabilities in the federal government. EEOC provides various types of support to agencies to help them implement requirements of the revised regulations on affirmative action for individuals with disabilities. For example, EEOC officials said they visited all agencies to provide guidance and technical assistance with their hiring plans. EEOC continues to provide ongoing feedback to agencies, both formally and informally, and visits agencies on a 3-year rotation cycle. 
As part of EEOC’s outreach, agency representatives provide presentations to, and participate in meetings with, federal employees and employers. The agency’s website also includes a list of outreach coordinator contacts for each of its field offices. EEOC’s Training Institute provides a variety of training programs specialized for the federal sector, including courses on disability issues and MD 715 barrier analysis, as well as customized training throughout the year to meet particular agencies’ needs. EEOC’s federal training courses can be delivered on site or virtually. Labor provides tools, resources, education, and training to agency managers. Labor has implemented and supported a number of initiatives aimed at enhancing the federal sector’s performance on disability employment. For example, Labor’s Office of Disability Employment Policy supports the Employer Assistance and Resource Network on Disability Inclusion (EARN), which is a federal resource that provides education, training, tools, and resources for managers on the hiring, retention, and advancement of persons with disabilities. In 2018, EARN issued a federal framework—in partnership with EEOC and OPM—which outlined various employment strategies and practices for agencies to consider and incorporate into their own efforts related to disability inclusion in the workforce. In addition, Labor leads an interagency working group known as the Federal Exchange on Employment and Disability, which is comprised of federal staff across government with roles in developing, implementing and managing disability employment programs to foster cross-agency collaboration and share best practices. The agency also developed a toolkit for Federal Agencies on Hiring People with Disabilities outlining a five-step process and related resources to assist federal agencies in their efforts to increase the employment of people with disabilities. 
Another effort supported by Labor provides more targeted technical assistance and free consulting services on workplace accommodations through the Job Accommodations Network. To increase the recruitment of persons with disabilities, Labor also plays a lead role in the Workforce Recruitment Program for College Students with Disabilities, which is a recruitment and referral program that connects federal and private sector employers nationwide with college students and recent graduates with disabilities for summer or permanent employment. Labor has also developed and provided assistance on various trainings for federal hiring managers and human resources professionals, including an OPM course titled, “A Roadmap to Success: Hiring, Retaining and Including People with Disabilities.” Conclusions In its effort to become a model employer, the federal government increased employment opportunities for persons with disabilities; provided specific direction and guidance to agencies through various executive orders, management directives, and regulations; and exceeded its goal to hire an additional 100,000 individuals with disabilities. However, OPM does not routinely track or report retention data, which could help pinpoint the root causes behind disabled employee departure rates. Making use of the agency-specific data OPM already gathers in its EHRI database, complemented with the retention information agencies report to EEOC, would allow for more comprehensive retention analyses of employees with disabilities across the federal government. Such analyses would provide a fuller picture of how the federal government is performing in retaining the employees it hires and help to identify common agency experiences, both successes and challenges. Without comprehensive analyses of retention data, the federal government is limited in its ability to assess the performance and results of the hiring and retention efforts for this segment of the workforce. 
Selected agencies implemented a number of practices that helped bolster their recruitment and hiring of persons with disabilities, including collaborating with other federal agencies for knowledge and information sharing, coordinating efforts with employee resource or advisory groups, and providing additional training for hiring managers and human resources staff on using Schedule A hiring authority—one of the commonly used hiring flexibilities available to agencies to onboard qualified individuals with disabilities. However, the selected agencies do not assess or measure the impact of their Schedule A training to determine how it contributes to the accomplishment of federal goals to increase the number of employees with disabilities across the federal workforce. In addition, opportunities exist to enhance the effectiveness of selected agencies’ reasonable accommodations programs by obtaining feedback from employees about their job accommodations experience. OPM, EEOC, and Labor have worked collaboratively to assist agencies with enhancing their recruitment and hiring efforts. They compiled government-wide data, issued guidance and regulations to clarify agencies’ responsibilities and obligations to strengthen employment opportunities for disabled persons, and provided various resources, education, and training. Recommendations for Executive Action We are making the following recommendation to OPM: The Director of OPM should routinely track and report retention data for employees with disabilities and make such data available to federal agencies, including EEOC, through a centralized web portal—such as MAX.gov. For example, OPM could track and report such data by General Schedule level pay groupings, which could help pinpoint root causes that contribute to retention rates, inform assessments of government-wide progress on employee retention, and identify needed improvements. 
(Recommendation 1) We are making the following recommendations to DOJ: The Attorney General of the United States should develop and implement policies and procedures for assessing the impact of training provided to agency hiring managers and human resources staff on Schedule A hiring authority. This includes assessing the impact of its training on agency performance goals related to increased hiring of individuals with disabilities and targeted disabilities. (Recommendation 2) The Attorney General of the United States should develop and implement policies and procedures for obtaining employee feedback about the agency’s reasonable accommodations efforts and use such information to evaluate the ongoing effectiveness of the program. This may include identifying any effects on employee retention, identifying potential risks, and determining any improvements that may be warranted. (Recommendation 3) We are making the following recommendations to SBA: The Administrator of SBA should develop and implement policies and procedures for assessing and tracking the impact of training provided to agency hiring managers and human resources staff on Schedule A hiring authority. This includes assessing the impact of its training on agency performance goals related to increased hiring of individuals with disabilities and targeted disabilities. (Recommendation 4) The Administrator of SBA should develop and implement policies and procedures for obtaining employee feedback about the agency’s reasonable accommodations efforts and use such information to evaluate the ongoing effectiveness of the program. This may include identifying any effects on employee retention, identifying potential risks, and determining any improvements that may be warranted. 
(Recommendation 5) We are making the following recommendation to SSA: The Commissioner of SSA should develop and implement policies and procedures for assessing and tracking the impact of training provided to agency hiring managers and human resources staff on Schedule A hiring authority. This includes assessing the impact of its training on agency performance goals related to increased hiring of individuals with disabilities and targeted disabilities. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of the report to OPM, EEOC, Labor, OMB, DOJ, SBA, and SSA for review and comment. We received written comments from three agencies—OPM, SBA, and SSA—that are reprinted in appendices II through IV and summarized below. EEOC informed us that it had no comments. Labor and DOJ provided technical comments, which we incorporated as appropriate. OMB did not provide comments on the draft. OPM concurred with our recommendation to routinely track and report retention data for employees with disabilities and make such data available to federal agencies. OPM stated that it already routinely tracks retention data for persons with disabilities by agency. In addition, OPM responded that retention data for employees with disabilities by agency and GS level pay groupings for fiscal years 2017 and 2018 can be obtained by federal agencies through the MAX.gov website. However, OPM did not provide any supporting documentary evidence or further details to explain its tracking efforts or which data are available to federal agencies. SBA disagreed with the retention data we present in figure 8, showing that approximately 65 percent of employees with disabilities hired at SBA from 2011 through 2017 stayed less than 1 year. In its written comments, SBA stated that under hiring authorities it uses in responding to disasters, appointments are generally not to exceed one year. 
As indicated in our report, we acknowledge that each of our retention analyses includes full-time permanent hires and part-time or temporary hires. We also include a specific statement regarding temporary hires at SBA’s Office of Disaster Assistance. SBA concurred with our recommendation to assess and track the impact of training provided to agency hiring managers and human resources staff on Schedule A hiring authority. SBA responded that it will formally evaluate the impact of training to ensure hiring managers understand the use of Schedule A hiring authority and assess hiring trends and retention. SBA partially concurred with our recommendation to obtain employee feedback about its reasonable accommodation efforts. SBA stated that its procedures require supervisors to contact the Disability Employment Program Manager with concerns about the effectiveness of a provided accommodation and work together to make any necessary adjustment. SBA further stated that the procedures have been revised and will include a requirement for completing a feedback survey aimed at determining the effectiveness of the reasonable accommodation program and making any adjustments required. SBA stated that it also established an internal mailbox for reasonable accommodation communications that is monitored daily. Effective implementation of SBA’s plans, including administering a survey, would meet the intent of the recommendation. SSA concurred with our recommendation to assess and track the impact of training provided to agency hiring managers and human resources staff on Schedule A hiring authority. SSA stated that it is revising its framework to include outcome-based evaluations for training related to the employment and support of individuals with disabilities, including Schedule A hiring. DOJ did not agree or disagree with the recommendations. 
We are sending copies of this report to the appropriate congressional committees, the Director of OPM, the Chair of EEOC, the Secretary of Labor, the Director of OMB, the Attorney General of DOJ, the Administrator of SBA, and the Commissioner of SSA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or JonesY@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Hiring and Retention Data of Selected Agencies As part of our review, we selected three agencies as case illustrations to examine practices they have adopted to increase hiring and retention of individuals with disabilities. The three selected agencies are the Department of Justice (DOJ), the Social Security Administration (SSA), and the Small Business Administration (SBA). Our selection was based on various factors including the agency’s size in terms of total full-time employees and average percentage of total employees with reported disabilities or targeted disabilities during 2011 through 2017. For each of the three agencies, we analyzed personnel data captured in the Office of Personnel Management’s (OPM) Enterprise Human Resources Integration (EHRI) database including the General Schedule (GS) levels in which individuals with disabilities were placed and their position classifications. The following figures and tables summarize our analyses of hiring and retention rates of individuals with and without disabilities in the three selected agencies during fiscal years 2011 through 2017. 
These analyses provide an aggregate overview of hiring and retention trends of individuals with disabilities at the three selected agencies as compared to hiring and retention trends of individuals without disabilities at these agencies. We found the trends to be generally consistent between the employee groups. Department of Justice During the 2011 through 2017 time period we examined, 31 percent of the total number of persons with disabilities hired at DOJ during that time stayed in the federal government for less than 1 year and nearly 54 percent of them stayed for less than 2 years, as shown in figure 6. During that same time period, approximately 24 percent of the total number of persons without disabilities who were hired stayed for less than 1 year of service while approximately 46 percent of hires stayed for less than 2 years of service, as shown in figure 7. The data shown in figures 6 and 7, taken together, indicate that retention at DOJ during this time period was generally consistent for persons both with and without disabilities. These departures may be explained, in part, by the proportion of employees hired into temporary positions who therefore were not necessarily expected to stay on the job for a longer duration, or by employees who did not meet performance standards. Tables 6 and 7 show the results of our analysis of employee retention at DOJ by occupational category and GS level for individuals hired in fiscal years 2011 through 2015 who stayed for at least 2 years. Small Business Administration During the 2011 through 2017 time period we examined, approximately 65 percent of the total number of persons with disabilities hired at SBA during that time stayed in the federal government for less than 1 year, as shown in figure 8. During that same time period, approximately 55 percent of the total number of persons without disabilities who were hired at SBA stayed for less than 1 year of service, as shown in figure 9. 
The data shown in figures 8 and 9, taken together, indicate that retention at SBA during this time period was generally consistent for persons both with and without disabilities. These departures may be explained, in part, by the proportion of employees hired into temporary positions who therefore were not necessarily expected to stay on the job for a longer duration, or by employees who did not meet performance standards. For example, SBA staff said that on average, 45 percent of SBA’s workforce is comprised of temporary employees hired by the agency’s Office of Disaster Assistance during a disaster. As such, SBA expects turnover among those hires, including employees with disabilities. Tables 8 and 9 show the results of our analysis of employee retention at SBA by occupational category and GS level for individuals hired in fiscal years 2011 through 2015 who stayed for at least 2 years. Social Security Administration During the 2011 through 2017 time period we examined, approximately 33 percent of the total number of persons with disabilities hired at SSA during that time stayed in the federal government for less than 1 year, as shown in figure 10. During that same time period, approximately 25 percent of the total number of persons without disabilities who were hired at SSA stayed for less than 1 year of service, as shown in figure 11. The data shown in figures 10 and 11, taken together, indicate that retention at SSA during this time period was generally consistent for persons both with and without disabilities. These departures may be explained, in part, by the proportion of employees hired into temporary positions who therefore were not necessarily expected to stay on the job for a longer duration, or by employees who did not meet performance standards. 
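The retention shares reported in this appendix come from analyzing hire and separation records. As a rough illustration of how such shares can be computed, the following sketch uses hypothetical records (not actual EHRI data) and assumes tenure runs from hire date to separation date, with still-employed hires measured through a fixed as-of date:

```python
from datetime import date

# Hypothetical hire records: (hire_date, separation_date or None if still employed)
hires = [
    (date(2012, 3, 1), date(2012, 9, 15)),   # stayed < 1 year
    (date(2013, 5, 1), date(2014, 10, 1)),   # stayed >= 1 but < 2 years
    (date(2014, 7, 1), None),                # still employed as of the cutoff
    (date(2015, 1, 10), date(2015, 6, 30)),  # stayed < 1 year
]

def tenure_days(hired, separated, as_of=date(2017, 9, 30)):
    """Days of service; still-employed records are measured through as_of."""
    end = separated if separated is not None else as_of
    return (end - hired).days

under_1yr = sum(tenure_days(h, s) < 365 for h, s in hires)
under_2yr = sum(tenure_days(h, s) < 730 for h, s in hires)

print(f"Stayed < 1 year: {under_1yr / len(hires):.0%}")
print(f"Stayed < 2 years: {under_2yr / len(hires):.0%}")
```

In practice, such an analysis would also need to distinguish permanent from temporary appointments, since, as noted above, temporary hires are not expected to stay as long.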
Tables 10 and 11 show the results of our analysis of employee retention at SSA by occupational category and GS level for individuals hired in fiscal years 2011 through 2015 who stayed for at least 2 years. Appendix II: Comments from the Office of Personnel Management Appendix III: Comments from the Small Business Administration Appendix IV: Comments from the Social Security Administration Appendix V: GAO Staff Contact and Staff Acknowledgments GAO Contact Yvonne D. Jones at (202) 512-6806 or JonesY@gao.gov. Staff Acknowledgments In addition to the contact named above, Leah Querimit Nash (Assistant Director), Arpita Chattopadhyay, Anthony Patterson, and Erik Shive made key contributions to this report. In addition, Michael Bechetti, Elizabeth Curda, Karin Fangman, Rob Gebhart, Michele Grgich, Amalia Konstas, Serena Lo, Art Merriam, and Sharon Miller made contributions to this report.
Why GAO Did This Study Federal agencies are required to provide equal opportunity to qualified individuals with disabilities in all aspects of federal employment. GAO was asked to examine agencies' efforts to increase the employment of individuals with disabilities. Among other objectives, this report examines: (1) the extent to which agencies met the 2010 federal goal to hire an additional 100,000 individuals with disabilities by 2015, and the retention rates of those employees between 2011 and 2017; and (2) practices selected agencies used to increase hiring and retention of individuals with disabilities. GAO analyzed data and documents from OPM and interviewed agency officials. GAO interviewed officials from DOJ, SBA, and SSA about their efforts to enhance employment opportunities for disabled persons. GAO selected these three agencies because they represent a range of agency size and relatively high or low percentages of total employees with disabilities. What GAO Found Approximately 143,600 persons with disabilities were hired during 2011 through 2015—plus an additional 79,600 hires in 2016 and 2017—across the 24 Chief Financial Officers Act agencies, exceeding the stated goal of 100,000 by 2015. About 39 percent of individuals with disabilities hired during 2011 through 2017 stayed less than 1 year and approximately 60 percent stayed less than 2 years. Of the total individuals without disabilities hired during that same time period, approximately 43 percent stayed less than 1 year and approximately 60 percent stayed less than 2 years. Although targeted data tracking and analyses could help pinpoint root causes contributing to departure rates, the Office of Personnel Management (OPM) does not track or report retention data on disabled employees. Doing so, and making such data available to agencies would facilitate more comprehensive analyses of the retention of employees with disabilities and identify needed improvements. 
Officials at three agencies GAO examined—Department of Justice (DOJ), Small Business Administration (SBA), and Social Security Administration (SSA)—used various practices to increase hiring, such as training staff on Schedule A—a commonly used hiring authority to employ individuals with disabilities. However, the agencies neither assess the impact of the training nor how it contributes to performance goals for increasing the number of disabled hires. Agencies are expected to track performance related to providing reasonable accommodations. The selected agencies reported having processes in place for receiving reasonable accommodations requests, but only SSA has procedures for obtaining feedback from employees after an accommodation is provided. Without such feedback, DOJ and SBA are limited in their ability to assess the continued effectiveness of reasonable accommodations provided to employees. What GAO Recommends GAO is making 6 recommendations: OPM should track and report retention data; DOJ, SBA, and SSA should assess training impacts; and DOJ and SBA should obtain employee feedback on reasonable accommodations. OPM and SSA concurred with GAO's recommendations; SBA concurred with one and partially concurred with one recommendation; DOJ did not agree or disagree with the recommendations. GAO continues to believe all recommendations are warranted.
gao_GAO-19-414
Background U.S. international trade agreements that cover USG procurement include the GPA and bilateral and regional FTAs. The revised GPA has 20 parties (including the EU) covering 48 WTO member countries (including the 28 EU member countries). Another 33 WTO members are observers; of these, 10 are in the process of acceding to the agreement. In addition to the GPA, the United States has 14 FTAs with 20 countries, four of which (Canada, Israel, Singapore, and South Korea) are also parties to the GPA. Almost all of the FTAs to which the United States is a party include provisions covering government procurement. The GPA aims to mutually open government procurement markets for goods, services, and construction services among its parties, according to the WTO. Under the GPA, foreign suppliers are able to compete alongside U.S. suppliers for USG contracts covered by the agreement, and U.S. suppliers are able to compete for covered foreign government contracts in accordance with the framework established by the GPA. According to the office of the United States Trade Representative (USTR), to implement U.S. obligations under the international agreements that cover government procurement, the United States generally (though not always) waives preferential purchasing requirements for goods and suppliers from other countries that are parties to the agreements in covered procurements over a certain threshold. For example, USTR has waived the Buy American Act and other preferential provisions for eligible products in acquisitions covered by various trade agreements. However, Commerce officials noted that small business set-aside requirements are not waived nor are the provisions of the Berry Amendment. Government Procurement Markets and the Procurement Opportunities That Parties to the GPA and FTAs Have Reported Opening to Foreign Firms As part of our body of work on international government procurement, we have previously reported the following: The U.S. 
and EU government procurement markets are comparable in size, and each is larger than those of all other GPA and U.S. FTA partner countries combined. Some other parties to the agreements also have large government procurement markets, including Japan, South Korea, Canada, Mexico, and Norway. The government procurement chapters of the GPA and selected U.S. FTAs that we reviewed generally have similarities in text and commitments, possibly because key parties negotiated multiple agreements concurrently. However, the revised GPA generally provides more comprehensive market access than the selected FTAs we reviewed. The United States reported opening more procurement opportunities covered by the GPA to foreign firms than had other parties to the agreement. Data for 2010 showed that the United States reported $837 billion in GPA covered procurement. This amount is about twice as large as the approximately $381 billion reported by the next five largest GPA parties combined—the EU, Japan, South Korea, Norway, and Canada—even though total U.S. procurement is less than that of the other five parties combined. Previously, we reported on the opportunities available to U.S. and foreign firms seeking to compete for covered government procurement contracts in the countries that are parties to the agreements. In the current report, we analyze the value and number of actual contract awards, reported in procurement databases, including contracts covered under the GPA and NAFTA and those not covered. Covered contracts can be awarded to domestic firms, to firms from countries that are parties to the GPA and U.S. FTAs, or to other non-U.S. firms. Additionally, the Buy American Act does not apply to products that are purchased for use outside the United States, nor to the acquisition of services. Therefore, such contracts can be awarded without the application of Buy American Act domestic preference conditions to bids from any firm, including firms from non-GPA and non-FTA countries. 
Two Types of Data Sources Used to Estimate Foreign Sourcing in Government Procurement To estimate foreign source procurement, we looked for information about where the goods and services that governments purchase are produced and the characteristics of the firms supplying those goods and services. We identified two types of primary data sources that could be analyzed to estimate foreign sourcing in government procurement: (1) government procurement databases to estimate direct cross-border central government procurement and (2) input-output tables merged with international trade data to estimate total procurement by all levels of government and the portion comprising imported goods and services. Data from Government Procurement Databases on Contracts Awarded by Central Governments Government procurement databases collect information on contracts awarded by government entities to firms supplying goods and services. Except for Japan, all the countries in our analysis maintain online government procurement databases that can serve as a primary data source to generate statistics on their foreign source central government procurement. The USG and the other six main parties to the GPA and NAFTA use these databases to report to the WTO their required procurement statistics under the GPA. While Japan does not have a government procurement database, Japan’s central government collects procurement data from various ministry sources and reports the aggregated data to the WTO. As table 1 shows, the U.S. Federal Procurement Data System-Next Generation (FPDS-NG) provides more data fields that can be used as proxies for measuring foreign source procurement than the non-U.S. databases provide. FPDS-NG contains data on four potential proxy measures of foreign sourcing—firm location, firm ownership, product and service origin, and place of performance. The database for the EU and Norway and the databases for Canada and Mexico all contain contract award data related to firm location. 
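Because FPDS-NG records both where the supplying firm is located and where the product or service originates, the two proxy measures can classify the same award differently. A minimal sketch of this distinction, using a hypothetical record (field names and values are illustrative, not actual FPDS-NG data element names):

```python
# Hypothetical contract-award record illustrating two of the proxy
# measures discussed above.
award = {
    "vendor_country": "CA",          # firm location proxy
    "product_origin_country": "US",  # product/service origin proxy
}

def is_foreign_sourced(record, proxy):
    """Classify an award as foreign-sourced under the chosen proxy field."""
    return record[proxy] != "US"

# The two proxies can disagree for the same award, e.g., a Canadian-located
# firm supplying U.S.-made products:
by_location = is_foreign_sourced(award, "vendor_country")
by_origin = is_foreign_sourced(award, "product_origin_country")
print(by_location, by_origin)
```

The divergence between the two measures is one reason no single field yields a definitive foreign-sourcing statistic across databases.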
South Korea’s database and Japan’s WTO submission on its 2013 procurement contain data on source country of goods and services. Therefore, two data fields, one reflecting firm location and the other reflecting country of product and service origin, appear to provide reasonable proxy measures of foreign source procurement, although neither is available across all data sources. (For more information on the characteristics of each government procurement database, see app. II.) Information about how much a country imports of the goods and services it uses provides the basis for another approach to estimating what portion of all government procurement in that country is imported. The WIOD provides such information, giving us a second type of data and an alternative analytical approach for estimating foreign source government procurement. The WIOD links data on an economy’s supply chain interdependencies to data on its import and export flows, thus providing a proxy estimate of the share of imports in procurement by all levels of government. We based our method for analyzing linked input-output tables on an approach used by the European Commission that examines import penetration of government procurement within Europe. Unlike the contract data we analyzed from government procurement databases, the WIOD data capture procurement by all levels of government. However, the input-output tables are organized by industry, which requires a decision as to which industries make up the government sector in any given country’s economy. Some industries, like “public administration”, can safely be assumed to be part of the governmental sector in every country. Other industries, like education or health care, vary across countries in the degree to which they are part of the government sector, if at all.
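The import-penetration calculation that underlies this input-output approach can be sketched in a few lines. The figures and field layout below are purely illustrative, not actual WIOD data: each industry's economy-wide import share is applied to the government's purchases from that industry.

```python
# Illustrative sketch of the input-output approach to estimating the
# import share of government procurement. All numbers are made up.

# industry -> (total purchases of that industry's output in the economy,
#              imported portion of those purchases), in billions
economy_use = {
    "machinery":    (500.0, 150.0),
    "services":     (900.0,  90.0),
    "construction": (300.0,  30.0),
}

# hypothetical government purchases from each industry, in billions
gov_purchases = {"machinery": 40.0, "services": 120.0, "construction": 60.0}

def government_import_share(economy_use, gov_purchases):
    """Estimate the import share of government procurement, assuming the
    government imports at the same rate as the rest of the economy."""
    imported = 0.0
    total = 0.0
    for industry, spend in gov_purchases.items():
        total_use, imports = economy_use[industry]
        import_share = imports / total_use   # economy-wide import share
        imported += spend * import_share     # the proportionality assumption
        total += spend
    return imported / total

share = government_import_share(economy_use, gov_purchases)
print(f"Estimated import share of government procurement: {share:.1%}")
```

With these illustrative figures, 30 of the 220 billion in government purchases is attributed to imports, an estimated share of about 13.6 percent. The function name and data layout are assumptions for this sketch, not part of the WIOD or the Commission's method as published.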
While this analytical method based on input-output tables can provide broad estimates of how much governments are purchasing imported goods and services, it relies on some important assumptions that may affect the reliability of the results. For example, it assumes that the goods and services purchased by all levels of government are imported to the same extent as they are when purchased by other industries in the same country. Because of this assumption, known as the “proportionality assumption”, results from this method may overestimate the share of imports in government procurement to the extent that the analysis does not capture attempts by the government sector to limit foreign sourcing in its procurement. On the other hand, other aspects of this method may underestimate the share of imports in government procurement. For example, the input-output data include intermediate inputs but do not include purchases for investment, such as some government assets, because, according to the authors of the European Commission study, input-output tables do not have the data needed to distinguish between investments by the private and public sectors. Thus, the input-output data could exclude investment made through construction services, like those purchased to build highways, schools, or other assets that have long-term use, services that are included in covered procurement under both the GPA and NAFTA.

The USG Likely Procured More Than Twice as Much from the Other Six Main Parties to the GPA and NAFTA as Vice Versa, but Exact Comparisons Are Not Possible

The value of U.S. government (USG) contracts awarded to firms located in the other six main parties to the GPA and NAFTA likely exceeds twice the estimated value of contracts from those parties to U.S. firms, but exact comparisons are not possible. The USG awarded contracts valued at about $12 billion to foreign-located firms in fiscal year 2015, of which less than half went to firms located in the other six main parties.
Conversely, the government procurement data we analyzed indicated the central governments of these parties awarded almost $7 billion to foreign sources, of which less than a third was awarded to firms located in the United States or for goods or services from the United States. Over three-quarters of these U.S.-sourced contracts were awarded by Canada and Mexico. Only the USG’s procurement database contains data on firm ownership. Analyzing these data, we found that the USG awarded more, by reported contract value, to foreign-owned firms located abroad than it awarded to U.S.-based subsidiaries of foreign-owned firms. Most of this procurement consisted of U.S. Department of Defense (DOD) contracts in support of the U.S. military presence in the countries where those firms are located. Overall, while available contract data enable broad cross-country comparisons, these data allow only limited insight into the effects on the U.S. economy of foreign sourcing of USG procurement. This is principally because the contract data do not capture the economic roles of firms awarded contracts and thus do not allow for a definitive assessment of the economic implications of foreign sourcing, as we discuss later in this report.

USG Contracts Valued at About $5 Billion Went to Firms Located in the Six Main Parties, out of About $12 Billion Awarded to All Foreign-Located Firms

In 2015 the USG awarded about 511,000 contracts valued at about $290.9 billion. Out of this total, about 47,000 contracts valued at about $12.1 billion were awarded to firms located outside the United States (as shown in the data by firm location). Similarly, the USG awarded about 50,000 contracts valued at about $16.5 billion for foreign goods and services (as shown by country of product and service origin). See table 2. Of the USG foreign source procurement awarded to firms in the other six main parties to the GPA and NAFTA, firms located in the EU received more than half in terms of contract value and slightly less than half by number.
In 2015 the USG awarded about 10,000 contracts valued at about $5.3 billion to firms located in the other six main parties to the GPA and NAFTA (see table 2 above). This $5.3 billion is about 44 percent of the total value of USG contracts awarded to foreign-located firms. Firms located in the EU received almost 5,000 USG contracts valued at $2.8 billion. Firms located in Japan, South Korea, and Canada were awarded most of the remaining aggregate USG contract value ($1.1, $0.8, and $0.6 billion, respectively) and number of contracts (about 1,500, 600, and 2,900, respectively) awarded to firms in the other six main parties to the GPA and NAFTA. Firms located in Mexico and Norway received less than 1 percent of the aggregate USG contract value and number of contracts awarded to firms in the other six main parties. However, as table 2 also shows, the majority of foreign-sourced USG procurement, in terms of both value and number of contracts, went to firms located in countries that are not among the other six main parties to the GPA and NAFTA. Germany, Japan, and South Korea are among the top five countries whose firms received the most USG contract value in fiscal year 2015. However, countries in the Middle East, including Afghanistan, the United Arab Emirates, and Saudi Arabia, were also among the countries whose firms were main recipients of USG procurement in terms of aggregate contract value (see app. III for additional information on USG foreign source procurement by country). Finally, table 2 shows that FPDS-NG data are similar when we use, instead of firm location, the alternative measure of foreign sourcing based on country of product and service origin. For example, the aggregate value of contracts awarded by the USG for goods and services originating in countries of the other six main parties was about 43 percent of the overall value of USG foreign source procurement—similar to the proportion we found when using firm location as a proxy measure of foreign sourcing.
In addition, as with the results based on firm location, most of the USG’s foreign source procurement as measured by country of product and service origin went to countries outside the other six main parties to international procurement agreements.

USG Awarded Less by Contract Value to U.S.-Based Subsidiaries of Foreign-Owned Firms Than to Foreign-Owned, Foreign-Located Firms, Which Mainly Support DOD Operations Abroad

Foreign-located firms can be either foreign-owned or U.S.-owned, just as U.S.-located firms can be either foreign-owned or U.S.-owned. Among the government procurement databases we used, only the FPDS-NG includes data on firm ownership. Some research on foreign sourcing in government procurement differentiates between direct and indirect cross-border procurement based on knowledge about both the location and ownership of the successful bidder: In direct cross-border procurement, the successful bidder is both foreign-owned and foreign-located. In indirect cross-border procurement, the successful bidder is a U.S.-based domestic subsidiary of a foreign-owned firm. According to a recent EU Commission study, between 2009 and 2015, the EU’s indirect cross-border government procurement was more than 5 times greater in terms of both value and number of contract awards than its direct cross-border government procurement. The study notes that indirect cross-border procurement is often high when direct cross-border procurement is low and suggests that this may reflect actual or perceived barriers to cross-border bidding, which lead firms to rely on their locally based subsidiaries for cross-border sales.
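The direct/indirect distinction rests on just two fields, firm location and firm ownership, compared against the buying government's country. A minimal sketch (the country codes and field names are hypothetical, not FPDS-NG's actual schema):

```python
# Classify a contract award by comparing two vendor fields to the buyer's
# country. Field names and codes are illustrative only.

def classify(buyer_country, vendor_location, vendor_ownership):
    """Return the cross-border category of a contract award."""
    if vendor_ownership == buyer_country and vendor_location == buyer_country:
        return "domestic"
    if vendor_ownership != buyer_country and vendor_location != buyer_country:
        return "direct cross-border"    # foreign-owned, foreign-located
    if vendor_ownership != buyer_country and vendor_location == buyer_country:
        return "indirect cross-border"  # foreign-owned, locally located
    return "other"  # e.g., domestically owned firm located abroad

print(classify("US", "DE", "DE"))  # direct cross-border
print(classify("US", "US", "DE"))  # indirect cross-border
```

Summing award values within each category would reproduce the kind of comparison made in the study, under the assumption that both fields are populated for every award.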
The study reported that indirect cross-border government procurement (foreign-owned, domestically located vendor) accounted for 21.9 percent of the number and 20.4 percent of the value of certain contract awards in the EU’s 28 countries, while direct cross-border government procurement (foreign-owned, foreign-located vendor) accounted for 1.7 percent of the number of contracts and 3 percent of contract value. In contrast to the findings of that EU Commission study, our analysis of FPDS-NG data shows that indirect cross-border procurement by the USG was smaller in terms of total award value and number of contracts than direct cross-border procurement. This suggests that foreign firms selling to the USG generally do not establish a local presence in the United States. Specifically, foreign-owned firms located in the United States (indirect cross-border procurement) received contracts valued at about $3.6 billion, or less than 1 percent of the value of all USG contracts. By contrast, firms that were both foreign-owned and foreign-located (direct cross-border procurement) received contracts valued at about $11.8 billion, or about 4 percent of the value of all USG contracts ($290.9 billion). Therefore, USG direct cross-border procurement was about three times greater than indirect cross-border procurement for contracts awarded in fiscal year 2015. A possible explanation for this finding is that coverage under international procurement agreements favors foreign-owned, foreign-located firms: they may bid for large-value GPA-covered USG contracts at a higher rate than their U.S.-located counterparts, or they may generally be more competitive for such contracts.
However, for contracts not covered under the GPA and NAFTA, the relative difference between the two groups of foreign-owned firms in terms of aggregate contract value becomes smaller. Therefore, the difference between direct and indirect cross-border procurement is likely not due to agreement coverage, as one might otherwise expect. To better understand why the USG’s direct cross-border procurement was larger than its indirect cross-border procurement, we further analyzed the FPDS-NG data on firm location, firm ownership, and place of performance—where the services were performed or where the goods were produced. Based on firm location, as stated earlier, foreign-located firms were awarded about $12.1 billion in USG contracts. Measured by aggregate contract value, almost all of the USG contracts awarded to those firms were performed abroad (i.e., outside the United States)—$11.9 billion out of $12.1 billion, or 98 percent. USG contracts performed abroad are commonly awarded to U.S.-located as well as to foreign-located firms. In 2015, the USG awarded contracts performed abroad valued at about $23.3 billion, of which about half was awarded to U.S.-located firms. In particular, as figure 1 suggests, while U.S.-located firms received contracts performed abroad valued at $11.4 billion, foreign-located firms were awarded USG contracts valued at $11.9 billion. Almost all of those USG contracts—$11.7 billion out of $11.9 billion, or 98 percent—were awarded to firms that were foreign-owned as well as foreign-located (i.e., direct cross-border government procurement). The vast majority of the value of these USG contracts to foreign-owned, foreign-located firms was for DOD contracts performed abroad. In particular, DOD contracts accounted for about 84 percent of the value ($9.8 billion out of $11.7 billion) of USG contracts that were performed abroad and awarded to foreign-owned, foreign-located firms. The vast majority of those contracts ($7.5 billion, or 77 percent) were covered under the GPA and NAFTA. (See app.
III for a breakdown by agency of all USG contracts performed abroad and awarded in fiscal year 2015 to foreign-owned, foreign-located firms.) Foreign-owned firms located in six countries received the majority (57 percent) of DOD’s $9.8 billion in aggregate award value of contracts performed abroad. Specifically, firms located in three countries in the Middle East—Afghanistan, Saudi Arabia, and the United Arab Emirates—together received 28 percent of that award value; firms in Japan and South Korea together received 18 percent; and firms in Germany received 11 percent. About a quarter of DOD’s $9.8 billion in aggregate award value was for purchases of fuel, oil, lubricant, and wax. About 9 percent was for education and training services, and about 7 to 8 percent each was for construction of buildings and for housekeeping services. For example, fuel was the main product procured by DOD in the United Arab Emirates, while in Saudi Arabia most DOD procurement was for education and training services. (See app. III for a breakdown of DOD contracts performed abroad and awarded to foreign-owned and foreign-located firms, by country.)

Central Governments of the Other Six Main Parties Awarded Almost $2 Billion to U.S.-Located Firms or for U.S.-Made Products out of About $6.5 Billion in Foreign-Awarded Contracts

Our analysis of available procurement contract data from 2015 shows that the central governments of the other six main parties to the GPA and NAFTA, apart from the USG, awarded contracts valued at about $170.5 billion. About 4,000 of these 245,000 contracts, with an estimated total value of about $6.5 billion, were awarded to foreign sources, that is, to foreign-located firms or for imported products and services. Some of these contracts awarded by the other six main parties were covered by the GPA and NAFTA, while others were not.
Furthermore, the central governments of the other six main parties awarded about 2,000 U.S.-sourced contracts worth about $1.8 billion (see fig. 2). U.S.-sourced contracts are contracts awarded to U.S.-located firms or for products made in the United States. Canada and Mexico awarded most of the U.S.-sourced contracts. Specifically, central government contracts awarded to U.S.-located firms by Canada and Mexico accounted for almost 80 percent of the value and number of all U.S.-sourced contracts. Over 60 percent of the value and number of U.S.-sourced contracts awarded by the central governments of the other six main parties were for the procurement of goods. In particular, Canada awarded more than 20 times as much contract value for purchases of goods as for purchases of services from U.S.-located firms. However, for contracts covered under trade agreements, the other six main parties collectively awarded more U.S.-sourced contracts for services than for goods; these contracts were awarded primarily by the EU and Mexico. U.S.-located firms were awarded virtually no construction services contracts. This result is consistent with our findings for procurement flows among all countries of the other six main parties to the GPA and NAFTA and may be explained by the proxy measure used—firm location, which accounts only for direct cross-border procurement. For example, the EU Commission paper cited previously finds that, for construction works, the share of direct cross-border procurement in the total value of awards was 1.7 percent, compared with 12.3 percent for indirect cross-border procurement.
While Available Contract Data Enable Broad Cross-Country Comparisons of Foreign Sourcing by Central Governments, They Allow Limited Assessment of Economic Implications

Select Data Elements Available in Government Procurement Databases Allow for Broad Cross-Country Comparisons, but Not Precise Estimates

The data available from the government procurement databases we analyzed provide relevant and useful information for assessing foreign sourcing in government procurement, but these data do not allow for precise cross-country comparisons based on the GPA provisions on rules of origin. Data and reporting on country of origin for goods and services are limited for a number of reasons. Most of the databases we analyzed contain fields on contract award value and type of contract, as well as fields on firm location or country of product or service origin—proxy measures of foreign sourcing that, as we have found, allow for broad cross-country comparisons. However, precise estimates from the available data are not possible because no single internationally accepted definition exists to distinguish procured goods and services that are “foreign” from those that are “domestic”, and the information in government procurement databases is not uniform. There is no agreed-upon definition of the country of origin for goods and services for statistical reporting purposes in the GPA, even though a similar term—country of production—is used in the 1994 GPA’s general principles on nondiscrimination. Instead, the GPA generally provides that a party shall apply the rules of origin that it applies in the normal course of trade when determining the country of origin for goods and services in covered procurement. Another factor that limits cross-country comparisons of country of origin data by parties to the GPA is the recent revision to the GPA itself, which no longer requires the parties to provide country of origin statistics, as we previously reported.
According to the 1994 GPA, parties were to provide statistics on the country of origin for products and services purchased by their entities, to the extent that such information was available. However, the revised GPA, which went into effect in 2014, does not require parties to report available information on the country of origin of purchased products or services. While all the GPA members included in our scope reported the amount of covered procurement to the WTO, only Japan (until 2013) reported statistics on the “nationality of the winning tenderer”. The WTO Committee on Government Procurement’s Work Programme on the Collection and Reporting of Statistical Data is currently examining the issues surrounding how countries define country of origin for the procurement of goods and services. Finally, while the United States collects a variety of relevant data on foreign sourcing, those data have certain limitations for cross-country comparisons since the data are collected for different purposes. While U.S. agencies collect country data on successful bidders and the country of origin of goods and services in response to the Buy American Act and report these in FPDS-NG, the agencies do not collect data on country of origin determinations in response to relevant provisions of the GPA or NAFTA. For example, the U.S. Federal Acquisition Regulation (FAR), in implementing statutes including the Buy American Act, applies different tests to determine the country of origin of an end product and defines end products to include “domestic”, “foreign”, or “U.S.-made”. The test to determine country of origin for an end product under the Buy American Act is different from the test to determine country of origin in the procurement of an end product under trade agreements.
According to the FAR, for manufactured products, the Buy American Act uses a two-part test to define a domestic end product: (1) the article must be manufactured in the United States, and (2) the cost of domestic components must exceed 50 percent of the cost of all the components. According to the FAR, for procurement under trade agreements, the test to determine “country of origin” is “substantial transformation” (i.e., transforming an article into a new and different article of commerce, with a name, character, or use distinct from the original article). The substantial transformation test can also be used to determine whether a product is a U.S.-made end product. The FAR also defines a foreign end product as an end product other than a domestic end product. Therefore, under the FAR, contracting officers use different tests and different descriptors to designate country of origin. Since corresponding data fields for these descriptors are not available in FPDS-NG, the data do not allow for exact cross-country comparisons of foreign sourcing under the GPA and NAFTA.

Available Procurement Contract Data Allow Limited Assessment of the Economic Implications of Foreign Sourcing

In all countries included in this report, available contract data do not allow for a definitive assessment of the economic implications of foreign sourcing in government procurement, such as impacts on wages and profits. As figure 3 shows, using the United States for illustrative purposes, foreign versus domestic sourcing in government procurement could be viewed in four different ways—firm location, firm ownership, product and service origin, and place of contract performance. For example, FPDS-NG data show that a task order under a DOD contract for facilities support performed in Iraq reports the United Arab Emirates as the country of product and service origin for safety and rescue equipment, while also reporting the firm location and ownership as the United States.
FPDS-NG data showed that another task order under the same contract, for housekeeping services, reports the place of performance as Kuwait but reports the United States as the country of product and service origin, the firm location, and the country of firm ownership. As another example from FPDS-NG data, a contract awarded by the U.S. Agency for International Development for internet services performed in Malawi and awarded to a foreign-owned business reports the United States as the country of service origin but the United Kingdom as the firm location. Each of these four dimensions of the sourcing of USG contracts can be viewed on a continuum based on the extent of foreign involvement in the production and service delivery processes. Country of firm location. As found in the procurement databases, suppliers can be located, for example, domestically in the United States or abroad. However, the economic effects related to the country of firm location depend on what is produced in the country relative to what is produced elsewhere. For example, the supplier may be an end product manufacturer doing less skill-intensive assembly and packaging, a high-technology and skill-intensive manufacturing firm that substantially transforms a product that is subsequently used as an input in the production process, or a broker providing unskilled labor for product distribution. In each of these examples, the country of firm location could experience different economic effects from the awarded contract. Country of firm ownership. Suppliers could be domestically or foreign owned, and who owns the firm determines who accrues the firm’s profits. However, determining ownership is challenging because a supplier awarded a contract may have various ownership structures. For example, the supplier may be a sole proprietor or a corporation with shareholdings, subsidiaries, ultimate owners, or may be a participant in a corporate group.
The supplier may have established a presence in the United States through a foreign-owned subsidiary or may participate in a partnership such as a joint venture with a U.S. firm. Country of product or service origin. Goods and services purchased under government procurement contracts may be domestically produced or imported. In this case, the effects can be analyzed in the same way as trade flows in general. However, the country of product or service origin is more challenging to determine for government procurement contracts compared with general trade in goods and services, since government contracts typically cover more than one good or service. Therefore, the country of origin for certain goods included in a contract may be different from the country of origin for other goods under the same contract. Country of contract performance. USG contracts can be executed within the United States or outside the United States. For example, the country of contract performance may determine where the service is delivered as opposed to the location or ownership of the firm that delivers the service. The place of performance may lead to benefits and costs accruing to the location where the contract is performed. For example, if a service is delivered or the products are produced outside the United States, the contract likely employs local labor and therefore benefits the local labor market. Because available data in government procurement databases do not specify the supplier firm’s economic role, the economic effects of the awarded contract remain uncertain. The potential effects of the awarded contract on other firms, workers, the government, or consumers in the domestic and foreign economies may vary depending on the supplier firm’s economic role. 
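The four dimensions discussed above can be represented as a simple contract record, with each field answering a different sourcing question. This is an illustrative data structure, not FPDS-NG's actual schema; the country codes are hypothetical.

```python
# Sketch of a contract record carrying the four sourcing dimensions.
from dataclasses import dataclass

@dataclass
class ContractAward:
    firm_location: str          # country where the vendor is located
    firm_ownership: str         # country of the vendor's owners
    product_origin: str         # country of product/service origin
    place_of_performance: str   # country where the contract is performed

def foreign_dimensions(award: ContractAward, home: str = "US"):
    """List which of the four dimensions point outside the home country."""
    return [name for name, value in vars(award).items() if value != home]

# Modeled on the DOD facilities-support task order described above:
# U.S.-located, U.S.-owned firm, UAE-origin equipment, performed in Iraq.
task_order = ContractAward("US", "US", "AE", "IQ")
print(foreign_dimensions(task_order))  # ['product_origin', 'place_of_performance']
```

A record like this makes the report's point concrete: a single award can be "domestic" on two dimensions and "foreign" on the other two, so any single-field measure of foreign sourcing is necessarily a proxy.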
Foreign Sourcing Is a Minor Share of Government Procurement, and Our Analysis Did Not Find a Consistent Relationship with Coverage under the GPA and NAFTA

We estimate that foreign sourcing is generally a small share of government procurement for the United States and the other six main parties to the GPA and NAFTA. Foreign sourcing by the USG and the other parties’ central governments, estimated from government procurement databases, varied in value from about 2 to 19 percent of overall central government procurement. Foreign sourcing by all levels of government, estimated from data on trade and public sector purchases by the United States and the other six main parties, shows that government imports ranged from about 7 to 18 percent of the goods and services purchased by these countries’ governments. In addition, our analysis of central government contract data found that foreign sourcing is sometimes but not always greater, in terms of value and number of contracts, for contracts covered by procurement agreements than for contracts not covered by those agreements.

Foreign Source Procurement by Central Governments Estimated from Country Databases Varied in Value from 2 to 19 Percent of Overall Central Government Procurement

Our analysis of available data on firm location from government procurement databases shows that foreign sourcing in 2015 ranged in value from 2 to 19 percent of overall central government procurement (see fig. 4). The central governments of the EU, Mexico, and the United States awarded less than 5 percent of the aggregate value of their procurement contracts to foreign-located firms. The proportions for Canada and Norway were about 11 and 19 percent, respectively. Both Canada and Norway can be characterized as small, open economies bordering much larger, open trading partners, which may contribute to their relatively larger shares of foreign sourcing in central government procurement.
Canada’s central government awarded about 10 percent of the value of all its contracts to firms located in the United States. Similarly, Norway’s central government awarded about 19 percent of the value of all its contracts to firms located in the EU. Our analysis of available data on country of product and service origin shows that Japan procured less from foreign sources (2 percent) than both the United States (6 percent) and South Korea (3 percent). See figure 5. We obtained similar results in terms of the number of foreign-sourced contracts. Less than 5 percent of the number of central government contracts was sourced from abroad in the EU, Japan, South Korea, and Mexico. For the United States, Norway, and Canada, the percentages of foreign-sourced contracts by number, based on firm location, were higher (9, 8, and 13 percent, respectively). Firms located in the United States received about 9 percent of the total number of contracts awarded by Canada’s central government. Similarly, firms located in the EU received about 7 percent of the total number of contracts awarded by Norway’s central government. Except for the United States, most of the central governments of the other six main parties to the GPA and NAFTA awarded few construction services contracts to foreign-located firms. One possible explanation is that, given the higher dollar value threshold of contracts in this sector, foreign-owned firms may have a greater incentive to establish a local presence through subsidiaries in the host countries. The data in the non-U.S. databases do not provide enough information to explore that hypothesis. However, FPDS-NG data show that construction services contracts are the main contract type awarded to foreign-located firms by the USG, which awarded about 3,090 construction services contracts worth $1.8 billion (about 20 percent of the number and 8 percent of the value of all its construction services contracts) to foreign-located firms.
Less than 1 percent of these contracts’ award value was for contracts performed in the United States, and over 70 percent was for contracts covered by the GPA and NAFTA. In addition, the USG awarded a roughly equal share (about 4 percent of all contracts in terms of value) of goods and services contracts to the other six main parties to the GPA and NAFTA. Canada, on the other hand, awarded a relatively large percentage of the value of all goods contracts (30 percent) to firms located abroad.

Foreign Source Procurement Estimated by an Alternative Method Shows Import Percentages by All Levels of Government Range from 7 to 18 Percent of All Government Purchases

We also assessed the degree of foreign sourcing in terms of government import percentages to identify patterns in government procurement that may differ from those based on the location of the supplier and the origin of goods and services. Using linked input-output tables and an alternative analytical approach, we were able to broadly estimate the domestic and foreign sources of inputs to the government sector for the United States and the other six main parties to the GPA and NAFTA. This alternative approach to estimating foreign source government procurement is based on macroeconomic data on trade flows of goods and services between countries and the types of goods and services purchased by the public sector. Unlike the approach above based on government procurement contract data, this approach allows us to calculate broad estimates of domestic and foreign sourcing in procurement by all levels of government—central, state, and local. Table 3 shows our broad estimates based on a narrow definition of the government sector, which includes only “public administration”. In the table, the columns are the purchasing countries or the EU. The rows indicate the countries from which the goods or services are purchased.
As the table shows, for all the countries and the EU, foreign sourcing generally accounts for a small portion of all governmental purchases. For example:

- Out of the estimated $1.2 trillion that the central, state, and local governments in the United States purchased, $100 billion was imported from outside the United States—a total foreign source percentage of about 9 percent, including $26 billion (2 percent) from the EU.

- Out of the $460 billion that the EU governments at every level purchased, $36 billion was imported from outside the EU—a total foreign source percentage of about 8 percent, including $10 billion (2 percent) from the United States.

- Out of the $178 billion that governments in Japan purchased, $12 billion was imported from outside Japan—a total foreign source percentage of about 7 percent.

In general, the smaller economies in terms of government purchases—Canada, South Korea, Mexico, and Norway—imported a relatively larger percentage of such purchases than the United States, EU, and Japan. Specifically, Canada, South Korea, and Norway imported about 9 to 13 percent of their governments’ purchases. Mexico imported a notably large share of about 18 percent. Of the estimated $24 billion in purchases by Mexico’s government sector, about 6 percent was from the United States and about 3 percent from the EU. This inverse relationship between the size of an economy and the relative percentage import share of government purchases has been noted by others that have used the input-output methodology.

Basing estimates of foreign source government procurement on the narrow definition of the government sector may not be as appropriate in countries where the government plays a large role in various additional sectors. Figure 6 shows the size of the government sector under the narrow definition as well as under two broader definitions, which add additional industries. The “typical definition,” as defined in the EU study, also includes the education and health care sectors.
The “broad definition” also includes a portion of the energy and telecommunications sectors. The relative sizes of the parties change under the different definitions, as shown in the figure. For example, while the EU government sector is less than half the size of the U.S. government sector under the narrow definition ($460 billion for the EU compared with $1,159 billion for the United States), under the broad definition they are comparable in size ($2.4 trillion for the EU and $2.6 trillion for the United States).

Figure 7 shows the estimated percentages of each country’s and the EU’s government sector purchases that are imported under the narrow, typical, and broad definitions described above. Under all three definitions, the United States and the EU have some of the smallest percentages of imported government purchases, between 8 and 10 percent. Mexico has one of the largest percentages, between 17 and 22 percent. Canada and Norway are in the middle, from about 12 to 16 percent. For South Korea and Japan, the estimated percentages of government sector purchases that are imported increased under the broad definition—from 7 percent to 17 percent for Japan, and from 9 percent to 22 percent for South Korea.

Available Contract Data Indicate Foreign Sourcing Is Not Always Greater for Contracts Covered by the GPA and NAFTA Than for Other Contracts

Our analysis of 2015 data from central government procurement databases finds evidence that foreign sourcing was sometimes, but not always, greater for contracts covered by the GPA and NAFTA than for contracts not covered by those agreements. Given the goals promoted by the GPA and NAFTA, one might expect that procurement covered by such agreements would likely result in a higher number or larger aggregate value of contracts awarded to foreign-located firms or for the purchase of foreign goods and services.
For the United States and two of the other six main parties to the GPA and NAFTA—Mexico and South Korea—the results bore out that expectation: for all three, more central government foreign sourcing in terms of contract value occurred when procurement was covered by the agreements. However, our analysis also shows that for two other parties, Canada and Norway, the opposite was true; for the remaining two parties, the EU and Japan, we found little difference or could not calculate an estimate. Our previous work showed that only about a third of the estimated average annual government procurement at all levels of government from 2008 through 2012 was covered by the GPA and NAFTA ($1.5 trillion out of $4.4 trillion).

The available data from the government procurement databases that we analyzed show that the USG and the central governments of Mexico and South Korea awarded at least twice as much to foreign sources for contracts covered by international agreements—ranging from 2 to 6 percent of the value of covered contracts, compared with less than 1 to 2 percent for non-covered contracts (see table 4). In particular, for contracts awarded by the USG, foreign-located firms received more than twice the value of covered contracts compared with non-covered contracts—about $8.8 billion compared with $3.4 billion. Results for the USG are similar when looking at the amount of foreign source procurement based on product and service origin. Conversely, U.S.-located firms were awarded a higher aggregate value of non-covered contracts from the USG than of covered contracts. (See table 4.)

For Canada and Norway, more central government foreign sourcing in terms of contract value occurred when procurement was not covered by trade agreements than when it was. For covered contracts, Canada’s central government awarded 1 percent of the value of all contracts to foreign-located firms, compared with 10 percent of the value for non-covered contracts.
Similarly, Norway awarded foreign-located firms more than five times as much in non-covered as in covered contracts, as measured by aggregate contract value. For the EU and Japan, data on the value of foreign-sourced contracts and their agreement coverage are either not available or incomplete. The available EU data have a significant number of contracts with unclassified foreign status and do not include contracts below the GPA threshold values, which limits the reliability of any comparison of covered versus non-covered contracts. In addition, Japan’s 2015 GPA submission of 2013 procurement data did not report on the amount of foreign source procurement broken out by covered and non-covered contracts because, according to Japanese officials, this is not a GPA statistical reporting requirement. Therefore, we could not calculate a similar comparison of the value of covered versus non-covered procurement for Japan.

Finally, with regard to the number of contracts awarded, our analysis of available data from country databases does not show a consistent relationship between coverage by international procurement agreements and awards to foreign-located firms or for foreign-sourced goods or services. In South Korea and the United States, the number of contracts not covered by trade agreements and awarded for foreign-sourced products was greater than the number of covered contracts. Conversely, in Canada, the EU, Mexico, and Norway, the number and share of contracts covered by trade agreements and awarded to foreign-located firms was greater than for non-covered contracts. In percentage terms, foreign-located firms received the same share (9 percent) of covered and non-covered contracts awarded by the USG.

Agency Comments

We provided a draft of this product to USTR, Commerce, OMB, and GSA for comment. Commerce provided technical comments on this report, which we incorporated as appropriate. USTR, OMB, and GSA did not comment on our draft report.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the U.S. Trade Representative, the Secretary of Commerce, the Director of the Office of Management and Budget, the Administrator of the General Services Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope and Methodology

This report examines the extent of foreign sourcing in government procurement by the United States and the other six main parties to selected international procurement agreements. Under the World Trade Organization (WTO) Agreement on Government Procurement (GPA), the other main parties, besides the United States, are the European Union (EU), Japan, Canada, South Korea, and Norway. Under the North American Free Trade Agreement (NAFTA), the other main parties are Mexico and Canada. The report (1) provides alternative broad estimates of foreign sourcing by the U.S. government (USG) and the central governments of the other six main parties to the GPA and NAFTA, and (2) assesses foreign sourcing as a share of estimated central government procurement and of estimated procurement by all levels of government, and the extent to which central government contracts that are covered under the GPA and NAFTA are foreign-sourced.
We analyzed data from two types of sources: (1) government procurement databases in Canada, the EU, South Korea, Mexico, Norway, and the United States, for 2015, and (2) 2014 trade data merged with data on the types of goods and services purchased by the public sector. Since Japan does not have a national procurement database, data for Japan were based on its WTO GPA submission for 2013, which is the last submission that contains information on its foreign sourcing in government procurement. We also interviewed cognizant government officials in Washington, D.C.; Ottawa, Canada; Mexico City, Mexico; Seoul, South Korea; and Tokyo, Japan, and reviewed available research literature to identify potential methods, sources, and data limitations. We also interviewed government officials at the EU mission in Washington, D.C., and exchanged information with officials knowledgeable about the EU government procurement database.

Analysis of Data on Contracts Awarded from Government Procurement Databases

We collected and analyzed data from the following five databases:

- for the United States, the Federal Procurement Data System-Next Generation (FPDS-NG);
- for the EU and Norway, Tenders Electronic Daily (TED);
- for Canada, Contract History;
- for Mexico, the Government of Mexico e-Procurement System CompraNet; and
- for South Korea, the South Korea ON-line E-Procurement System (KONEPS).

Several of these government procurement databases included data on procurement at all levels of government—national, state, and local—while others did not. Therefore, we limited our analysis to data on central government procurement. For a detailed discussion of the characteristics of each database, see appendix II.
To identify data fields that could be reasonably compared across databases, we followed a number of methodological steps. First, we looked for fields that capture the total award value of the contract at the time of award (2015); the type of contract in terms of goods, services, and construction services; the contract award date; the contract duration; and the type of tendering procedure. We took into account the following considerations:

Units of analysis. We established appropriate units of analysis across databases. Several databases contained a number of fields that were potentially relevant to our work. Specifically, in FPDS-NG the unit of analysis is the contract award. The database contains data at the contract action level (contracts, task orders, and their modifications). We used contract awards for the number of reported contracts, but for certain data on indefinite delivery vehicles (IDVs), such as government-wide acquisition contracts, indefinite delivery contracts, and blanket purchase agreements, we relied on data for task orders awarded from fiscal year 2015 through July 2018 (see the discussion of contract valuation and multiple-year, multiple-award contracts below) because they contained information on place of performance, country of product and service origin, and place of manufacture, which were the relevant fields for foreign sourcing. The TED database contains information on contract notices, contract award notices, and contract awards above certain thresholds set by relevant EU legislation. While the EU and Norway use contract award notices to estimate the value of covered procurement in their GPA statistical notifications, we used contract awards because they allowed us to estimate actual foreign sourcing. The databases for Canada, Mexico, and South Korea contain a contract identifier, which is the sole and unique unit of analysis that is available.

Contract valuation.
We established comparable fields across databases that represented the estimated maximum total value of a procurement awarded in 2015 over its entire duration. For FPDS-NG, we developed a methodology that is consistent with the methodology laid out in the revised GPA and avoids the inconsistencies of the revised U.S. methodology, which we have previously reported on. In particular, in October 2015, the Office of the U.S. Trade Representative (USTR) notified the WTO that the United States had revised its methodology for preparing GPA statistical reports on U.S. federal procurement. To more precisely reflect the value of the federal procurement market at the time of each report, the revised methodology presented the total amounts obligated under GPA covered contracts over a 6-year period—that is, the year the contract was awarded plus 5 years after the award. As we previously reported, the revised methodology has both advantages and disadvantages. It improves the accuracy of reporting but introduces a 6-year delay, whereas the revised GPA requires reporting within 2 years of the end of the reporting period. In addition, the revised valuation methodology is not consistent with the one used by other countries and creates an internal inconsistency:

- In measuring actual obligations for procurement contracts rather than the value at the time of award, the revised U.S. methodology is inconsistent with the methodology used by other large GPA members, such as the EU, Norway, Canada, and Mexico, which report contract values at the time of award rather than actual obligations or expenditures.

- The United States continues to report the number of covered contracts to the WTO based on their award value, which leads to an inconsistency between the reported numbers and values of reported U.S. government procurement contracts.
The contracts comprising the reported value of covered procurement are determined at a later time under the revised methodology, and a different set of contracts can end up being used to determine the reported value. Our current methodology uses the base and all options value for all contracts awarded in fiscal year 2015 unless the contract was an IDV. For IDVs, we used the base and all options value of task orders awarded under those IDVs from fiscal year 2015 through July 2018 to avoid overestimating the total value. We used the aggregate base and all options value for task orders under those contracts because the alternative—using the base and all options value on the base IDVs—is inflated due to problematic data entries for multiple awards. As a result, our methodology produces an estimate that is consistent with methods used by other parties, internally consistent, and in accordance with the methodology for valuation in the revised GPA. As we noted earlier, the result is close to the obligations value currently reported in the Trade Agreements Report used by USTR to report to the WTO.

In TED, we used the contract award value field, because it captures the appropriate measure and, according to EU documentation, was corrected for errors in the data. For the EU and Norway, we found that approximately 15 percent and 12 percent, respectively, of the contract award values for above-threshold procurement were missing. We took additional steps to address these missing values to generate estimates of the total contract award values. Specifically, we implemented a Predictive Mean Matching (PMM) multiple imputation methodology for the EU and used post-stratification estimation techniques for Norway. (See app. IV for more details on both methods.) However, we excluded the value of below-threshold procurement for the EU and Norway because it is reported on a voluntary basis and suffers from missing and implausible values.
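For illustration, the predictive mean matching step can be sketched in simplified form. The records, the single predictor, and the parameter choices below are invented for illustration; the actual implementation described in appendix IV used the full set of TED covariates and is not reproduced here.

```python
import random
import statistics

# Illustrative records of (predictor x, award value y); None marks a missing
# award value. In the actual analysis the predictors came from other TED fields.
records = [(1, 10.0), (2, 21.0), (3, 29.0), (4, 42.0), (5, None), (6, 58.0)]

def pmm_impute(records, k=3, m=5, seed=1):
    """Predictive mean matching: regress y on x over the complete cases, then
    fill each missing y with the observed y of a donor whose predicted mean is
    among the k closest to the missing case's prediction; repeat m times."""
    rng = random.Random(seed)
    complete = [(x, y) for x, y in records if y is not None]
    xs = [x for x, _ in complete]
    ys = [y for _, y in complete]
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in complete) / sum(
        (x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    imputations = []
    for _ in range(m):
        filled = []
        for x, y in records:
            if y is None:
                pred = a + b * x
                donors = sorted(
                    complete, key=lambda d: abs((a + b * d[0]) - pred))[:k]
                y = rng.choice(donors)[1]  # borrow an observed award value
            filled.append((x, y))
        imputations.append(filled)
    return imputations

# Five completed datasets; each imputed value is an observed donor value.
imps = pmm_impute(records)
```

Because the fill-in values are drawn from observed awards, PMM avoids implausible imputations such as negative contract values, while the variation across the m completed datasets reflects imputation uncertainty.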
In particular, for the EU, about 42 percent of the contract award values below threshold are missing, and another 10 percent are below €1,000. For Norway, 80 percent of the contract award values below threshold are missing. Nevertheless, as a robustness check of the results from our analysis, we applied our imputation methodology discussed in appendix IV to the entire TED dataset and found that once those values are estimated, the amount of procurement awarded by the EU to U.S.-located firms increases by less than 10 percent. However, we do not consider the estimate sufficiently reliable to be included in our aggregate analysis.

In Contract History, we used the contract value field because, according to Canadian officials, it includes the original total value of the contract at the time of the award. In addition, those officials noted that this field was used by Canada in its reporting of covered procurement for its WTO statistical notifications. In CompraNet, we used the contract amount field since, according to Mexican officials, this field reflects the total value of the contract award. In KONEPS, we used the total awarded value field, since it was the only field available for our analysis and contained the value awarded for a given year (see the adjustments we made for multiple-year contracts below).

Currency denomination. We converted contract values reported in different currencies in the databases into dollars using the period average exchange rate for 2015 as provided by the International Monetary Fund’s International Financial Statistics.

Contract modifications or amendments. Since we defined the value of the award at the time of award, we selected contracts awarded in 2015 and excluded any subsequent modifications or amendments in all the databases.

Contract types. We used the product and service classifications that each database used to group contracts by type.
Different databases used different classification schemes, and we did not independently reclassify any contracts to a uniform classification system, since such a system does not exist and a concordance among all schemes is not possible. In FPDS-NG, we used the U.S. product and service codes to classify federal government contracts into product groups and categorized reported procurement as goods, services, or construction services. In TED, we used the type of contract field, which categorized reported procurement as supplies, services, and works based on the EU common procurement vocabulary in TED. In Contract History, we used the grouping of goods, services, and construction, which Canadian officials provided to us based on the global shipment identification number codes and descriptions in the database. In CompraNet, we used the type of contract field, which indicates whether the contract is for goods, services, or public works. In KONEPS, the data on foreign procurement included goods only, and no classification scheme was available for foreign procurement contracts.

Multiple-year, multiple-award contracts. Some countries’ procurement practices include contracts awarded for multiple years, and we accounted for the valuation of those contracts by estimating their total cumulative value over multiple years at the time of award in 2015. In FPDS-NG, we accounted for the value of multiple-award contracts by using the base and all options value of task orders awarded in fiscal years 2015 through 2018 for IDVs initially awarded in fiscal year 2015. In TED, available documents noted that member states can use alternative multiple-year tools such as framework agreements and dynamic purchasing systems for a certain time period or for repeat purchases, respectively.
While the indicator field for these data in TED was not sufficiently populated for further analysis of those types of contracts, the contract valuation field we used had already accounted for the total value of the contract, and thus no further adjustment was warranted. Officials in Canada provided data on multiple-year contracts, including call-ups and standing offers. However, since the contract value field we used accounted for the total value of the contract, no further adjustment was needed. For Mexico, CompraNet contains information on framework agreements and multiple-year contracts, but since the contract value field indicated the total value of the contract award, no adjustment was needed. South Korea also uses multiple-year contracts, and we made several adjustments to estimate South Korea’s total value of 2015 awards. We identified multiple-year contracts in KONEPS in 2015; based on solicitation numbers, we then removed the value of contracts originally awarded in prior years, while adding the value of multiple-year contracts with solicitations in 2015 and awards in 2016 and 2017.

Type of tendering procedure. In all databases, we included in our analysis contracts under open and limited tendering procedures.

Second, we identified data fields among the five databases that could potentially be used as proxy measures of foreign sourcing in government procurement:

- contractor data related to firm location
- contractor data related to firm ownership
- data on country of product and service origin

However, we did not identify a data field common to all five databases that could be used as a proxy measure of foreign sourcing. FPDS-NG contained data on all of the measures listed above. TED, CompraNet, and Contract History contained contractor data related to firm location. KONEPS and Japan’s WTO submission on its 2013 procurement contained data on country of product and service origin.
Therefore, two data fields—firm location and country of product and service origin—were available in two groups of countries as reasonable proxy measures of foreign source procurement.

Finally, we analyzed the contract data from the government procurement databases by GPA coverage. Some databases contain a field for GPA coverage, and we deemed those data to be reliable for our purposes; for the databases that did not, we developed a proxy measure for GPA coverage. FPDS-NG contains a field on trade agreement coverage, but we found it to be unreliable, as reported in previous work; therefore, we constructed a method to identify GPA covered procurement using an approach that USTR confirmed is consistent with the steps applied by the USG in developing its GPA statistical notifications. TED contains an identifier for GPA covered procurement, and we used this field to estimate GPA covered procurement for Norway and the EU after taking steps to address missing values for this identifier using other information in the dataset. Contract History contains a field that lists all internal and international agreements applicable to a contract in Canada. Therefore, covered procurement includes all contracts covered under the GPA, NAFTA, and other Canadian international procurement agreements. For Mexico, CompraNet contains a data field on type of procedure, which indicates the eligible firms that can bid on a contract. The data in this field indicate that the contract is (1) open to national firms only; (2) international procurement under trade agreements, that is, open to both national (Mexican) firms and foreign firms from FTA partner countries; or (3) international procurement open to national firms, foreign firms from FTA partners, and all other foreign bidders. We treated international procurement in CompraNet as a proxy for GPA covered procurement.
We grouped all contracts awarded in 2015 into two categories: non-covered procurement, which includes contracts open to national firms only, and covered procurement, which includes contracts open to foreign bidders (i.e., all contracts in categories 2 and 3 described above). KONEPS does not have a data field that specifically identifies covered procurement. Therefore, we defined a proxy for covered procurement as procurement above the revised GPA thresholds by covered entities. However, we were unable to make an adjustment for goods and services excluded from the agreement, since KONEPS does not classify foreign procurement by product service codes.

To analyze the extent to which central government contracts that are covered under the GPA and NAFTA are foreign-sourced, we compared the proportion of award value that was foreign-sourced for contracts covered under the GPA and NAFTA with the same proportion for contracts not covered by those agreements. Our analysis describing the relationships between trade agreement coverage and procurement award values did not account for additional factors and was limited by the data available. As we previously reported, the countries within our scope represent over 90 percent of the GPA countries’ total government procurement. Moreover, we previously performed consistency checks across time periods for these countries and determined that covered procurement as a share of total central government procurement appeared relatively stable over time. However, a more robust test of the relationship between foreign sourcing and coverage by the selected international agreements would use a larger cross-section of data over time and control for factors such as the types of goods and services procured, the size of the economy, the type of tendering procedure, and other specific details of each agreement, among others.
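The covered versus non-covered comparison reduces to computing, within each coverage group, the foreign-sourced share of total award value. A minimal sketch with hypothetical contract records (all fields and figures below are invented for illustration, not values from the databases):

```python
# Hypothetical contract awards:
# (covered by GPA/NAFTA?, foreign-sourced?, award value in $ millions)
contracts = [
    (True, True, 80), (True, False, 1920),    # covered awards
    (False, True, 30), (False, False, 2970),  # non-covered awards
]

def foreign_share(contracts, covered):
    """Foreign-sourced share of total award value within one coverage group."""
    group = [(foreign, value) for cov, foreign, value in contracts
             if cov == covered]
    total = sum(value for _, value in group)
    foreign_total = sum(value for foreign, value in group if foreign)
    return foreign_total / total

covered_share = foreign_share(contracts, covered=True)       # 80 / 2,000
non_covered_share = foreign_share(contracts, covered=False)  # 30 / 3,000
```

In this toy example the covered group shows a 4 percent foreign share against 1 percent for the non-covered group, the direction observed for the United States, Mexico, and South Korea; for Canada and Norway the data ran the other way.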
To determine whether the procurement contract data from the five databases were reliable for our purposes, we identified the data sources that each country and the EU used to prepare their submissions of statistical notifications to the WTO and other government procurement reports. To ensure consistency between our methods for estimating foreign sourcing and the methods used by the countries and the EU in their estimates of covered procurement for their GPA statistical notifications, we discussed with government officials in Canada, Japan, Mexico, and South Korea the process and data they used to create their statistical notifications and other WTO reports, and we took steps to replicate existing report totals of EU covered procurement. We performed a sensitivity check for the U.S. data in FPDS-NG, where more than one relevant data field was available, to determine whether the definitional differences in the data fields were likely to materially affect our results about foreign sourcing. The results were similar across all six fields that could be used as alternative proxy measures of foreign source procurement in FPDS-NG data (see app. III, tables 11 and 12). In addition, we conducted electronic tests of all five procurement databases to identify whether the data were complete and internally consistent. We determined that the country procurement databases were sufficiently complete and internally consistent after taking the additional steps for the EU and Norway, described earlier, related to missing contract award values (see app. IV). We also shared our analyses of the data with cognizant officials from the corresponding countries, who were willing to verify our methodology and replicate our analysis.
Procurement and trade officials and researchers in Canada, Mexico, South Korea, and Japan answered our questions relevant to data quality, including data collection, cross-checks of data entries, access controls, internal reviews, primary users, completeness of and updates to the data, missing values, reporting mistakes, electronic safeguards, and procedures for follow-up if errors are found. In Canada and Mexico, officials replicated and confirmed our methodology and results. Results for South Korea and Japan were consistent with alternative available official publications. The various limitations in the procurement contract data that we identified and addressed, to the extent possible, affected our ability to obtain precise estimates of foreign sourcing in government procurement, but they were not an impediment to using the data for broad comparisons of orders of magnitude. Such comparisons include the amount of foreign sourcing, measured using firm location and country of product and service origin, by the USG and the central governments of the other six main parties to the GPA and NAFTA. The data also allowed broad comparisons of bilateral procurement flows among the parties, as well as comparisons by type of contract and agreement coverage, as available, for the seven parties to the GPA and NAFTA within our scope.

Analysis of Trade Data Linked to Data on the Goods and Services Purchased by the Public Sector

To obtain information on the aggregate levels and percentages of procurement by all levels of government that are imported, we relied on input-output tables from the World Input Output Database (WIOD) for 2014. The input-output tables have an industry-by-industry format, with each country’s industries listed separately. The data in each table are derived from publicly available data from both national statistics agencies and international organizations such as the United Nations and the Organisation for Economic Co-operation and Development.
We relied on the WIOD to ensure that the combined data from different countries were collected on a consistent basis. These data do not allow for distinctions between different levels of government. To assess the reliability of estimates based on the WIOD data, we first reviewed available documentation for the database. In cases where we had questions, we received written responses from WIOD officials. In addition, we compared estimates based on the WIOD to estimates based on other databases and found similar results. In general, we found that the data were sufficiently reliable for our purposes.

To estimate the level and percentage of imported procurement from the database, we took the following steps. First, we identified the industries associated with the governmental sector. Then, for that industry (or combination of industries), we obtained both the total level of purchases (or inputs) and the inputs that came from within that country or from other countries of interest. To obtain an estimate for the EU, we combined the purchases over the 28 member countries then in the EU. In general, we followed a procedure outlined in a 2017 paper produced by the European Commission. In this paper, the authors describe how input-output tables can be used to measure cross-border penetration in public sector procurement.

An essential step in our method is defining which industries make up the government sector. Moreover, because the composition of the government sector and the patterns of government purchases vary by country, different measures of the government sector are more appropriate for different countries—since what goods and services the government provides or performs affects what it procures from the private sector. For example, for the EU, the government funds the majority of services in the area of public administration, defense, social security, education, and health care. In contrast, the USG funds a smaller share of health care services.
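The purchase-aggregation steps described above can be sketched with a toy calculation; the source labels and dollar figures below are invented for illustration and are not WIOD values:

```python
# Toy inputs purchased by the government sector of country A, grouped by the
# country supplying them, in $ billions (illustrative figures, not WIOD data).
inputs_to_government = {
    "A (domestic)": 1100,
    "B": 60,
    "C": 26,
    "rest of world": 14,
}

total = sum(inputs_to_government.values())               # all government purchases
imported = total - inputs_to_government["A (domestic)"]  # purchases sourced abroad

import_share = imported / total                   # overall foreign source share
share_from_c = inputs_to_government["C"] / total  # bilateral share from country C
```

For the EU estimate, the same computation is run after summing the government-sector input columns across the member countries.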
We followed the model laid out in the European Commission paper and defined the government sector in three ways:

1. Narrowly Defined
(O84) Public administration and defense; compulsory social security

2. Typically Defined
(O84) Public administration and defense; compulsory social security
(Q) Human health and social work activities

3. Broadly Defined
(O84) Public administration and defense; compulsory social security
(Q) Human health and social work activities
(D35) Electricity, gas, steam and air conditioning supply
(E36) Water collection, treatment and supply
(E37-E39) Sewerage; waste collection, treatment and disposal activities; materials recovery; remediation activities and other waste management services
(1/3) * (H49) Land transport and transport via pipelines
(1/2) * (H53) Postal and courier activities
(1/2) * (J61) Telecommunications

However, our procedure deviated from the European Commission report with regard to an additional category of expenditure in the report, final consumption by government. As in our prior reports, we did not include this category, which covers spending on social benefits, health care, and education, as well as spending on collective items such as defense. We did not include this category in prior reports partly due to data reliability concerns about consistency in the measurement of spending on social benefits across countries. However, if we had included it, our estimates of import penetration would have been smaller, because the WIOD tables do not include any cross-border expenditures for this category. For example, the percentage for the United States would have changed from about 8 percent to about 4 percent. To construct consistent data from different countries over time, the WIOD made certain assumptions. One assumption with important implications for our analysis is known as a "proportionality assumption," which is typical in the construction of input-output tables.
This assumption requires that the percentage of a product that is imported is constant across all industries. In the example provided by the WIOD: "If 20 percent of Czech absorption of electronics is sourced from Germany, then 20 percent of any Czech final or intermediate use of electronics is assumed to originate from Germany." The WIOD has attempted to improve on the proportionality assumption by applying it at a more disaggregated level, but according to the WIOD, the proportionality assumption remains a limitation of the data set and consequently of our analysis. Importantly for our analysis, the proportionality assumption implies that the results we obtained from this method may not capture attempts by the government sector to award a larger share of its procurement to domestic firms relative to other industries. Another important limitation for our analysis is the scope of the industry data reported by the WIOD. Specifically, the input-output data include intermediate inputs but exclude purchases by government for investment. Such purchases could include some government assets that would be considered procurement covered by the GPA and NAFTA. For example, the input-output data could exclude construction services, such as government purchases to build highways or schools that have long-term use, which are procurements potentially covered by the GPA and NAFTA. Finally, while we followed the method described above, which has been used to study procurement, there are alternative methods based on input-output data that could also have been used. For example, according to officials at the U.S. Bureau of Economic Analysis, the "Trade in Value Added" methodology is such a method, and such data are maintained by the Organisation for Economic Co-operation and Development. We conducted this performance audit from March 2017 to May 2019 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Characteristics of Central Government Procurement Databases

The following appendix contains descriptive comparative information about the five databases included in our review: for the United States, the Federal Procurement Data System-Next Generation (FPDS-NG); for the EU and Norway, Tenders Electronic Daily (TED); for South Korea, the South Korea ON-line E-Procurement System (KONEPS); for Mexico, the Government of Mexico e-Procurement System CompraNet; and for Canada, Contract History. For each database, we provide its formal name and function, contract and/or agency coverage, and data field(s) related to firm location, firm ownership, source country of goods or services, location of contract execution, contract valuation, trade agreement coverage, and type of contract in terms of goods, services, or construction services.

Appendix III: Additional Results Related to Foreign Sourcing by the U.S. Federal Government

The following appendix provides supplemental information from our analysis of foreign sourcing by the United States in fiscal year 2015 based on data from the Federal Procurement Data System-Next Generation (FPDS-NG). FPDS-NG contains data on four potential proxy measures of foreign sourcing—firm location, firm ownership, product and service origin, and place of performance. The database contains six fields that correspond to these four proxy measures. See tables 11 and 12. For cross-country comparisons, we use two of the six measures—vendor country code (13QQ) and country of product and service origin (9E).
We disaggregate the data by country and list the top 20 recipient countries of USG contracts based on firm location. See tables 13 and 14. Since about 10 percent of USG contracts are performed outside the United States, we also provide a breakdown by agency of those contracts that are awarded to foreign-owned and -located firms. See table 15. Finally, since most of these contracts by contract value are awarded by the Department of Defense (DOD), we also provide a country breakdown of DOD contracts performed outside the United States and awarded to foreign-owned and -located firms. See table 16.

Appendix IV: Methodology for Addressing Missing Contract Award Values in the Tenders Electronic Daily Database

European Union

To report on European Union (EU) procurement data in the Tenders Electronic Daily (TED) database for 2015, we took steps to address missing contract award values, which amounted to approximately 15.2 percent of the 38,233 in-scope contract awards. To address these missing contract award values, we implemented a multiple imputation methodology that imputes a range of values for each missing contract award value and allows for estimation of the additional uncertainty induced by the imputation methodology. After determining that the data were likely to be conditionally missing at random, we used predictive mean matching (PMM) to address missing values as described below. We determined that using PMM was appropriate because it can provide more robust results when the relevant variable is not normally distributed; PMM, as a form of multiple imputation, allows us to assess the variability introduced through the process of addressing missing data; and PMM, when properly specified, does not distort averages or variance in the underlying data. As we discuss below, the method for addressing missing values used by the EU has none of these features.
In PMM, a regression model is first fit to complete cases in the dataset to predict values for the variable of interest for the entire dataset (i.e., including complete and incomplete cases). These predicted values are used to identify complete observations (“donors”) that are close (a “match”) to a given observation that is missing a value for the variable of interest. The PMM model draws matches using the posterior predicted distribution of the regression model. When PMM is used in conjunction with multiple imputations, this process is repeated multiple (m) times for each missing value. As a result, each of m imputations may match to a different donor. The donor’s observed value for this variable is donated to fill the blank data cell—not the predicted value used to match to this donor. The strength of the predictive model used to identify these matches will affect variation in the set of m imputed values because a better predictive model will identify donors that have observed values more consistently close to their predicted values. In order to specify our PMM model, we first explicitly tested the Ordinary Least Squares regression model used to match donors as part of the process discussed above. We performed standard regression diagnostics, including an examination of the included variables and residuals to avoid overfitting. We found that our model was able to explain 86 percent of variation in contract award values and appeared to have well-behaved (homoscedastic) residuals. We drew m=30 imputed values for each missing observation using the PMM process described above, which allowed us to generate estimates of the total contract value amounts and measure the uncertainty induced in those estimates by the imputation methodology. We used these measures of uncertainty to construct 95 percent confidence intervals and express these values as a percentage relative to the estimate itself. 
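The mechanics described above can be sketched in simplified form. The sketch below is an illustrative toy, not GAO's model: the actual specification used many predictor variables, drew matches from the posterior predictive distribution, and used m=30 draws, whereas this sketch uses a single hypothetical predictor, a fixed donor pool of the k=3 nearest complete cases, and m=5 draws.

```python
# Simplified predictive mean matching (PMM) with multiple imputation.
# All data values are hypothetical.
import random
import statistics

random.seed(1)
# (predictor x, contract award value y); y is None where the award is missing
records = [(1, 10.0), (2, 21.0), (3, 29.0), (4, 41.0), (5, 52.0),
           (6, None), (7, 69.0), (8, None)]

# Fit a simple least-squares line to the complete cases only
complete = [(x, y) for x, y in records if y is not None]
xs = [x for x, _ in complete]
ys = [y for _, y in complete]
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = sum((x - mx) * (y - my) for x, y in complete) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    return intercept + slope * x

def pmm_draw(x_missing, k=3):
    """Match on predicted values; donate a nearby donor's OBSERVED value."""
    donors = sorted(complete, key=lambda d: abs(predict(d[0]) - predict(x_missing)))[:k]
    return random.choice(donors)[1]

m = 5
totals = []
for _ in range(m):  # build m completed datasets and total each one
    filled = [y if y is not None else pmm_draw(x) for x, y in records]
    totals.append(sum(filled))

# Combine across imputations: the overall estimate is the mean of the m
# totals, and the between-imputation variance is the source of the
# uncertainty expressed in the confidence intervals
estimate = statistics.mean(totals)
between = statistics.variance(totals)
print(estimate, between)
```

Because each of the m draws may match a different donor, the spread of the m totals directly measures the uncertainty the imputation introduces, which is what allows the 95 percent confidence intervals described above to be constructed.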
To assess the quality and reliability of the multiple imputations that followed from this predictive model, we performed four main sensitivity checks, which are included in tables 17 and 18.

1. We examined the proportion imputed for each subset of the data that we planned to report. The column headed "Percent imputed contract awards" shows the proportion of the count of contracts in a given data subset that were imputed using the methodology described above. We looked to avoid any individual subset being substantially greater than the overall average of 15 percent imputed. In practice, we individually checked any subset exceeding 30 percent imputed.

2. We evaluated the level of uncertainty induced by the imputation methodology across important subgroups of the data. The column headed "95 percent confidence interval +/-" indicates the percent of the "Contract award value estimate" that, when added to and subtracted from this estimate, forms the 95 percent confidence interval. The level of uncertainty expressed in the relative confidence interval results from between-imputation variance, which could indicate extreme or inconsistent matches. We looked for confidence intervals that were, in our judgment, narrow as a proportion of point estimates. In practice, nearly all of the subgroups we chose to report have confidence intervals smaller than plus or minus 3.5 percent of point estimates.

3. We evaluated the percent of imputed values across important subgroups of the data. "Percent of imputed value duplicates" is a diagnostic column to test for sparseness of imputation matches among the 30 imputed values for each imputed contract award. We determined the number of duplicate imputation draws among the 30 imputed values for each observation, which could indicate sparseness in the number of suitable matches or overfitting of the model.
We intended to inspect any finding with more than about 5 out of 30 (17 percent) duplicated imputation draws; in practice, however, this threshold was not reached for any subsets of the data that we chose to report.

4. We compared estimates resulting from our imputation methodology to published EU reports across important subgroups of the data. "Alternative estimate (EU's missing value methodology)" shows the results of replicating a methodology for correcting missing data described in EU documents and used for some EU reports. The EU methodology is based primarily on the average value of contracts that are present in the dataset. This provides a general point of comparison, allowing us to determine which subsets of the data are likely to be responsible for estimation differences with prior EU publications. This comparison methodology therefore provides a benchmark but not a diagnostic for the imputation models. There are several important differences between our imputation methodology and the EU's methodology.

a. Calculation of confidence intervals: The EU's methodology results in the same value substituted for every contract award of a given type (construction, goods, and services). As such, it is not possible to estimate confidence intervals for a given observation or group of observations using this methodology. In contrast, the multiple imputation models include estimates of uncertainty.

b. Distortion of subgroup averages: The EU's methodology is not sensitive to differences in group averages apart from contract type. As a result, it may distort subgroup averages. For example, if hypothetical Country A has services contracts that average $100 but the overall average for services contracts is $1,000, substituting the overall average into Country A's missing values, as the EU methodology does, would significantly distort Country A's characteristics.
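The distortion in the Country A example can be shown numerically. The figures below are hypothetical, chosen only to mirror the $100-versus-$1,000 example in the text; they are not drawn from the TED data.

```python
# Toy illustration of subgroup distortion under overall-mean substitution.
# Country A's services contracts average $100; other services contracts
# average $1,000. All values are hypothetical.
country_a = [100.0, 100.0, None, None]  # two awards missing
others = [1000.0] * 18

# EU-style fix: substitute the overall average of observed awards
observed = [v for v in country_a + others if v is not None]
overall_avg = sum(observed) / len(observed)
a_filled_eu = [v if v is not None else overall_avg for v in country_a]

# Subgroup-aware fill, approximating what a well-specified imputation
# model would do by drawing on Country A's own observed awards
a_avg = sum(v for v in country_a if v is not None) / 2
a_filled_grouped = [v if v is not None else a_avg for v in country_a]

print(sum(a_filled_eu), sum(a_filled_grouped))
```

With these numbers, overall-mean substitution more than quadruples Country A's apparent total relative to a fill based on Country A's own contract values, which is the distortion the comparison above is designed to detect.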
In contrast, the imputation models we used are designed to be sensitive to all significant reported differences in contract awards because we included all reported variables in our imputation models.

Norway

To report on Norway procurement data in the TED database for 2015, we needed to take steps to address missing contract award values (153 of 1,319 missing, or about 11.5 percent). The scale of the missing values is thus smaller than for the EU data, while the dataset as a whole is too small, in our judgment, to support correction through an imputation model. Our statistical tests found no evidence that contract award values were conditionally missing at random. Thus, we assumed that the data were missing completely at random and corrected the missing data using post-stratification estimation techniques. To do so, we treated the database of contract awards as the full population of such contract awards, which provides the full joint distribution of contract attributes. We treated the complete observations (88.5 percent of the total) as our sample of this population. Post-stratification adjusts the sampling weights for this sample so that the joint distribution of post-stratifying variables, which we selected based on our reporting needs, matches the known population joint distribution. Based on the resulting confidence intervals, we determined that the post-stratification sampling results in data that are sufficiently reliable for subsets defined by foreign status and contract type (see table 20) or by foreign status and GPA coverage (see table 21).

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Kimberly Gianopoulos, (202) 512-8612 or gianopoulosk@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Adam R. Cowles (Assistant Director), Marisela Perez (Analyst-in-Charge), Gergana T. Danailova-Trainor, Ben Bolitzer, Andrew Kurtzman, and Julia Kennon made major contributions to this report.
James Ashley, Peter Choi, David Dayton, Christopher Keblitis, Grace P. Lui, John Yee, and Timothy Young provided technical assistance.
Why GAO Did This Study

Globally, government procurement constitutes about a $4 trillion market for international trade. However, little is known about foreign sourcing in government procurement—how much governments procure from foreign-located suppliers or how much they acquire in foreign-made goods. GAO was asked to review the extent of foreign sourcing in government procurement across countries. GAO focused on the United States and the other six main parties to the GPA and NAFTA, selected international agreements that open procurement markets on a reciprocal basis. This report, the fourth of a related series, (1) provides broad estimates of foreign sourcing by the U.S. government and central governments of the other six main parties, and (2) assesses foreign sourcing as a share of estimated central government procurement and of estimated procurement by all levels of government, and the extent to which central government contracts that are covered under selected international procurement agreements are foreign-sourced. GAO analyzed the most recent comparable data available from two sources: (1) government procurement databases used in Canada, the European Union, South Korea, Mexico, Norway, and the United States, for 2015, and (2) 2014 trade data merged with data on the types of goods and services purchased by the public sector. Since Japan does not have a government procurement database, data for Japan were based on its 2015 GPA submission of 2013 data. GAO also interviewed cognizant government officials in Washington, D.C.; Ottawa, Canada; Mexico City, Mexico; Seoul, South Korea; and Tokyo, Japan.

What GAO Found

The U.S. government awarded contracts valued at about $12 billion to foreign-located firms, of which about $5 billion went to firms with reported locations in the other six main parties to the World Trade Organization Agreement on Government Procurement (GPA) and the North American Free Trade Agreement (NAFTA) (see figure).
Conversely, government procurement databases indicated the central governments of these parties awarded an estimated $7 billion to foreign sources, out of which about $2 billion was U.S.-sourced. Canada and Mexico awarded most of the U.S.-sourced contracts. GAO was able to determine that the U.S. government awarded more, by contract value, to foreign-owned firms located abroad than to foreign-owned, U.S.-located firms. Moreover, more than 80 percent of U.S. government contracts awarded to foreign-owned firms located abroad were Department of Defense contracts performed abroad. Overall, while available contract data enable broad cross-country comparisons, they do not necessarily show where the goods are produced, where the services are delivered, or where the profits go, among other economic effects. Foreign sourcing by the seven GPA and NAFTA parties within the scope of the study, using two alternative methods, is less than 20 percent of overall central government procurement. Foreign sourcing by central governments, estimated from government procurement databases of the United States and the other six main parties, varied in value by party from about 2 to 19 percent of overall central government procurement. Foreign sourcing by all levels of government, estimated from data on trade and public sector purchases, showed that the governments' imports likely ranged from about 7 to 18 percent of the goods and services the governments purchased. In addition, contract data show that U.S., South Korean, and Mexican central government foreign sourcing was greater in value under contracts covered by GPA and NAFTA than under noncovered contracts, but the opposite was true for Canada and Norway. For the European Union and Japan, GAO found little difference or could not calculate an estimate.
Background

Changing Warfare Environment

According to the Summary of the 2018 National Defense Strategy and the Army, the character of warfare is changing. For decades, the United States enjoyed uncontested or dominant superiority in every operating domain—land, air, sea, cyber, and space—but today every domain is likely to be contested by other great-power competitors and potential regional adversaries. Figure 1 below describes these operating domains. Since at least 2012, DOD has been shifting its focus from counterinsurgency operations in Iraq and Afghanistan to adversaries who possess more sophisticated capabilities. For example:

In 2012, DOD issued strategic guidance that cited efforts by Iran and China to pursue cyber and electronic warfare capabilities with the ability to counter U.S. power projection and limit operational access.

The 2014 Quadrennial Defense Review acknowledged the efforts of China and others to counter U.S. strengths using anti-access and area-denial approaches and using new cyber and space control technologies. The 2014 Quadrennial Defense Review also addressed the rapid evolution of modern warfare, including increasingly contested battlespaces in the air, sea, space, and cyber domains.

In 2016, an Army study of Russia's operations and doctrine concluded that Russia employs formations, operational concepts, and capabilities that overmatch U.S. capabilities in range and lethality, thus challenging the Army's ability to conduct operations and win battles.

The 2017 National Security Strategy stated that U.S. advantages are shrinking as rival states modernize their forces. The 2017 National Security Strategy identified many of the challenges that China and Russia pose, including Russia's use of offensive cyber efforts to influence public opinion, and how cyberattacks have become a key feature of modern warfare.

A classified National Defense Strategy followed in January 2018, and the unclassified summary cited challenges to the U.S.
military advantage as a shift in the global environment.

Purpose and Origins of the Multi-Domain Operations Concept

The Army's multi-domain operations concept originates from an Army effort to rethink how it will fight in the new, more complex operating environment. The Army defines multi-domain operations as ways for confronting adversaries in contested environments by presenting them with multiple challenges through the combining of multiple capabilities. This means that ground forces should be able to operate freely in other warfighting domains and, if necessary, be able to overwhelm an adversary's forces by combining capabilities across different domains, such as land, air, sea, cyber, and space simultaneously. According to Army officials, in 2014 the then-Deputy Secretary of Defense tasked the Army to update its warfighting concept to deal with the threats and challenges posed by great-power competitors in the future operating environment. The Army officials added that around the same time, the Army began developing and running a wargame scenario focused on a threat that employed similar doctrine, tactics, and capabilities as those used by Russia in Ukraine. In 2016, the Army also assessed the increasingly sophisticated Russian military capabilities and identified specific multi-domain challenges that the Army would face if it came into conflict with Russia. Army officials said that its analysis highlighted the urgency of updating how it would fight such an adversary. In beginning to develop this concept, the Army reached out to the Marine Corps, as both services face similar problems in ground-combat operations. Since the Army began developing its concept, the Army established a framework for assessing how the adversary operates and the problems the Army needs to resolve as a ground force.
For example, early on the Army developed an expanded battlefield that stretches far beyond the front lines, or "close area," where ground forces face off against each other. Under this expanded battlefield, adversaries can use more sophisticated weapons and cyber capabilities that are based in distant and protected territories, potentially reaching targets that are located well behind the front lines, even within the continental United States. Figure 2 below depicts the Army's new expanded battlefield for multi-domain operations, including a description of each area of the battlefield.

The Army Is Changing Its Doctrine, Organizations, and Training to Execute Multi-Domain Operations

The Army is changing aspects of its doctrine, organizations, and training simultaneously to develop a force that can effectively engage great-power competitors, such as Russia and China, across multiple domains, and expects this process to continue through the 2020s. Army concepts propose new approaches for the Army to develop capabilities against emerging challenges. The new Army Operating Concept built around multi-domain operations is intended to drive capability development, which is addressed through changes to the Army's doctrine, organizations, and training, among other areas. The Army's goal is to field a more lethal and capable force by 2028 that is able to dominate adversaries in a multi-domain environment. Figure 3 below summarizes how the Army uses validated concepts to drive changes in capabilities and the force. Doctrine. Given the Army's attention to multi-domain operations, it has updated or is in the process of updating doctrine that guides how the Army fights. Primary among these efforts is updating the Army's overarching operations field manual, which establishes how the Army conducts large-scale ground combat operations against the threat posed by a great-power competitor, among other things.
In its most recent revision to its doctrine, the Army incorporated several aspects of multi-domain operations, such as the expanded battlefield that includes cyber and the electromagnetic spectrum. TRADOC officials stated that they are also in the process of updating doctrine related to cyber operations and field artillery operations in order to build a force that can integrate both cyber capabilities and long-range fires—such as artillery, rockets, and missiles—for multi-domain operations. The officials added that the Army is developing or is planning to develop specific doctrinal guidance for new Army units that will focus on multi-domain operations in the areas of intelligence, cyber, electronic warfare, and space. Organizations. The Army wants to ensure that its warfighting organizations have the engineering, artillery, air defense, and other enabling capabilities needed to conduct multi-domain operations. For example, the Army believes that formations above the brigade level, such as division headquarters and corps headquarters, must have the ability to conduct electronic warfare and cyber operations. To that end, the Army is creating several new organizations focused on cyber and electronic warfare (discussed later in the report). Additionally, the Army is trying to align its multi-domain operations concept with a complementary concept focused on the roles and responsibilities of these organizations above the brigade level. Expanding the roles and responsibilities of formations above the brigade level signifies a departure from the Army’s modular force, which was implemented beginning in 2004. At that time, the Army embedded “key enablers” such as military intelligence, reconnaissance, and logistics functions, as well as other specialized personnel and equipment, into brigade combat teams to provide them independent capabilities. 
Moving forward, the Army envisions enhancing the capabilities of brigade combat teams for multi-domain operations, as well as providing additional key capabilities to formations above the brigade level. For example:

Brigade combat teams. Brigade combat teams are the Army's primary tactical unit, composed of around 4,400-4,700 soldiers. They are being adjusted to conduct operations in the cyber domain, including new platoons focused on electronic warfare.

Army division headquarters. Army divisions command multiple brigade combat teams. The Army expects division headquarters to manage the electromagnetic spectrum and to be the primary echelon for integrating aviation, fires, and electronic warfare into ground maneuver to defeat enemies in a close fight.

Army corps headquarters. Army corps command multiple divisions. Under the Army's concept, the Army corps headquarters will be the primary echelon for defeating mid- and long-range enemy artillery fires. The Army corps will also integrate artillery rockets and missiles, as well as cyber capabilities in support of division or brigade ground operations.

Field armies. Field armies, which have the ability to command two or more Army corps, are forward-stationed in regions with capable threats posed by great-power competitors. They will conduct campaigns to compete with adversaries short of armed conflict, and manage the transition to armed conflict should it be needed. The field army will also direct deception operations and provide long-range artillery and fires support.

Theater armies. Theater armies are also forward-stationed forces and will be responsible for managing and combining Army capabilities in support of information environment operations and space operations. The theater army must be able to protect joint bases and networks and enable access to the theater.

Training. The Army is also updating its training across a broad range of efforts.
Army training officials stated that there is a need to train units collectively under multi-domain operations conditions against great-power competitors like Russia and China, per guidance from the Chief of Staff of the Army. The commander of Army Forces Command also issued guidance for fiscal year 2019 to help train and prepare soldiers to conduct multi-domain operations. This guidance included increasing the realism and rigor of every unit rotation to one of the Army’s combat training centers, as well as designing warfighter exercises that focus on units conducting operations in contested electronic warfare, cyber, and space environments. Additionally, the training officials stated that in recent years the Army has updated its decisive-action training scenarios to include regional versions for Europe, the Pacific, and Africa that comply with the multi-domain operations concept. The officials added that, in future years, several Army organizations will be collaborating to modernize the Army’s home-station training and combat training centers in support of fielding a force capable of conducting multi-domain operations. All of this builds upon the Army’s earlier efforts to shift its training focus to large-scale combat after a decade of training for counterinsurgency operations, as we testified to Congress in February 2019. The Army is also taking steps to revise the training for cyber and electronic warfare personnel. These steps include revising the U.S. Army Cyberspace Operations Training Strategy so that it accounts for new equipment and doctrine, but also for the new organizations being created and the tasks those units will be expected to perform, according to Army cyber officials. Additionally, the Army Cyber School is revising its cyber and electronic warfare training so that personnel will be able to conduct multi-domain operations. Furthermore, the Army is working on a joint solution for training cyber personnel on behalf of U.S. 
Cyber Command, according to Army Cyber Command officials. The Army's goal is to provide the total cyber force with the ability to conduct joint cyber training, including exercises and mission rehearsals, by developing a virtual training environment that simulates realistic cyber threats. This cyber training solution, called the Persistent Cyber Training Environment, will allow for experimentation, unit certification, and assessment and development of the cyber mission force in a virtual training environment. The Army's goal is that the environment will decrease training time, increase throughput of personnel, and improve training quality. One of the stated operational imperatives of the Persistent Cyber Training Environment is to become integrated with multi-domain exercises.

The Army Is Establishing New Cyber and Electronic Warfare Units, but Units Are Facing Staff, Equipment, and Training Shortfalls in Part Due to Incomplete Risk Assessments

The Army Is Activating Several New Cyber and Electronic Warfare Units at an Accelerated Pace and Is Facing Challenges

The Army is seeking to quickly create or design several new cyber and electronic warfare units in order to execute multi-domain operations; however, Army leadership is activating some units at an accelerated pace due to the sense of urgency imposed by the growing capabilities of potential great-power competitors. Some of these new Army units are more narrowly focused on a particular domain or skill set, such as the recently activated 915th Cyber Warfare Support Battalion based out of Fort Gordon, Georgia, and new Electronic Warfare Companies and platoons. The 915th Cyber Warfare Support Battalion will focus on providing offensive cyber capabilities consistent with its authorities to conduct offensive operations. The battalion is designed to fit with various Army formations—such as corps, divisions, or brigade combat teams—as assigned by the Army.
The Electronic Warfare Companies, which are scheduled to be fielded during fiscal years 2023 through 2025 according to Army officials, will be attached to an Army corps and will be capable of planning and conducting electronic warfare operations. Electronic Warfare platoons, which Army officials said are scheduled to be fielded during fiscal years 2020 through 2022, will provide similar capabilities to brigade combat teams and other Army tactical-level formations. Other units are being designed to plan and conduct operations in and across multiple domains, with specialists in cyber, electronic warfare, space, and intelligence assigned to the same unit. For example, a recently activated Intelligence, Cyber, Electronic Warfare, and Space (ICEWS) unit will be capable of planning and directing operations in any or all of those areas. The ICEWS unit will function as part of a larger Multi-Domain Task Force, which will be capable of expanding those operations into other domains such as land and air. The Army plans to field at least two of these ICEWS units by the end of fiscal year 2020. Additionally, the Army is restructuring or creating Cyber Electromagnetic Activities planning sections in the headquarters of more than 125 Army formations, from special forces units up to theater-level Army headquarters. This restructuring effort will take place during fiscal years 2020 through 2022, according to Army officials. Army guidance states that a unit’s activation date should be identified 1 to 2 years in advance, according to Army officials, in order to provide time to build up trained personnel and equipment in the unit before it is activated and available to be deployed. As a result of accelerating the activation of these units, the Army is facing interrelated challenges in terms of staffing, equipping, and training the units, as discussed below.

Accelerated pace creates challenges filling positions.
The Army has had difficulty filling its ICEWS unit and the 915th Cyber Warfare Support Battalion with personnel to conduct operations. See table 1 below. By accelerating the activation of the ICEWS unit in October 2018 as a pilot, or test, program, the Army activated the unit with only 32 percent of its personnel in place, and Army headquarters officials report that filling the unit with personnel with the right skills has been a slow process. The 915th Cyber Warfare Support Battalion is facing similar staffing challenges. As of the end of March 2019, the unit was understaffed by more than 80 percent, having filled only 30 of 171 authorized positions for fiscal year 2019, according to an Army headquarters official. The official acknowledged that the 915th Cyber Warfare Support Battalion may not meet the authorized staffing levels for fiscal year 2019 if higher priorities arise for the service. Looking ahead, Army officials said that filling all of these new cyber and electronic warfare units could be challenging because cyber personnel are in high demand, with competition for these skilled personnel existing between the Army, other government entities, and the private sector. Army headquarters officials said they are exploring options to address the challenges and have taken steps to retain the personnel that they have, mostly in the form of retention bonuses and incentive pay. Some of those incentives are targeted at the senior enlisted levels, the personnel that Army officials indicated are in the highest demand and shortest supply.

Accelerated pace creates equipping challenges.

Officials with both Army headquarters and the Army Cyber School cited equipment challenges as one of the key issues that must be addressed when activating a unit on an accelerated basis.
For example, in November 2018, an Army headquarters official responsible for building the ICEWS unit stated that the Army was having a difficult time identifying where the unit’s equipment would be coming from. By the end of January 2019, the official said the situation was improving and that 55 percent of the equipment had been identified, but the Army was trying to find a source for the remaining 45 percent. However, most of this is common Army equipment, such as firearms, according to an Army official; those percentages do not include the specialized cyber equipment that the unit will need to perform its missions, such as a communications system designed to transfer data beyond the line of sight during air defense operations. An Army headquarters official stated that the Army is prototyping different types of specialized equipment in order to expedite the acquisition of such capabilities.

Revisions to training not keeping up with activation of units.

Army officials acknowledged the need to update the Army’s cyber training, in part because the doctrine for new units is still being written. Officials with the Army Cyber School and the Army’s Combined Arms Center stated that the current U.S. Army Cyberspace Operations Training Strategy did not foresee all of the new cyber and electronic warfare organizations the Army now intends to create, including the Cyber Electromagnetic Activities sections attached to various formations. Army headquarters officials stated that they are working on a revision to the U.S. Army Cyberspace Operations Training Strategy to address these issues. However, the first ICEWS unit and the 915th Cyber Warfare Support Battalion were activated without this updated training strategy. With other units scheduled to be activated in fiscal year 2020, it is possible others may be activated without the training strategy as well.
Without the updated doctrine and subsequent training strategies that will result from it, officials with the Army’s Training and Doctrine Command (TRADOC) said they would have difficulty designing training for the new units, and soldiers will not have a clear understanding of their tasks and missions. Obtaining equipment also could be a challenge for training servicemembers before they are assigned to cyber or electronic warfare units, according to some Army officials. Officials with the Army Cyber School stated that the school could end up producing a workforce that outpaces the Army’s ability to procure equipment. However, Army headquarters officials stated that equipping operational units is a higher priority than providing equipment to the schools for training, and the Army ensures that those units receiving the equipment get the training they need upon fielding the equipment. If the Army does not acquire new equipment quickly enough, the result could be that soldiers in the Army Cyber School will be trained on outdated equipment, which they will not use when they get to the field.

The Army Assessed Staffing, Training, and Equipping Risks for Certain Cyber and Electronic Warfare Units, but Its Assessments for Units Activated at an Accelerated Pace Are Incomplete

In the process of creating some new units, the Army assessed the risk of whether it can meet the units’ staffing, equipping, and training requirements before the units’ activation date, but it did not do so for those units activated at an accelerated pace. For example, the Army conducted risk assessments for some new Electronic Warfare platoons and Cyber Electromagnetic Activities sections that it plans to begin activating in fiscal year 2020. Those assessments identified issues and mitigation strategies for the Army to consider when making fielding and resource decisions.
For example, the risk of finding a sufficient number of qualified personnel for the Electronic Warfare platoons and Cyber Electromagnetic Activities sections would be mitigated by spreading the activations over a minimum of 3 years. The assessment for the Electronic Warfare platoons also identified some equipping issues that will require either more senior-level input or extended timeframes for completion. In contrast, the Army activated the ICEWS unit and the 915th Cyber Warfare Support Battalion in an accelerated manner because of the urgent need to develop these organizations, given the growing capabilities of potential great-power competitors. However, the Army did so without completely assessing the staffing, equipping, and training risks to those units over the long term. For example:

According to Army officials, the Army did not perform a risk assessment for the ICEWS unit currently assigned to and participating in exercises in the Pacific because the Army initiated the unit as a pilot, or test, program. These officials said a risk assessment was unnecessary prior to activating the unit because the Army expects to refine the unit’s personnel, equipping, and training requirements during the pilot program. However, the ICEWS unit is expected to become part of a larger Multi-Domain Task Force in fiscal year 2020. Until that occurs, the ICEWS unit is attached to another active Army unit and, according to Army officials, eligible to be deployed if needed based on its current capabilities. Unless the Army assesses the staffing, equipping, and training risks of the ICEWS unit, the unit may be unable to provide the expected capabilities, either currently or as part of the larger task force to which it will belong.

The Army performed an initial risk assessment for the 915th Cyber Warfare Support Battalion before the unit was activated in December 2018.
However, Army officials told us that the Army has plans to grow the unit to as many as 627 personnel by 2024, at which point it would be considered fully operational. Unless the Army performs a more complete risk assessment of the 915th Cyber Warfare Support Battalion’s staffing, equipping, and training requirements prior to achieving full operational capability, the Army may be poorly positioned to make decisions about how to use and support the battalion.

Army guidance states that the Army should assess its ability to support a new unit’s staffing, equipping, and training requirements, among other things, so that senior Army leaders can evaluate proposed organizational changes. For example, under a force integration functional area analysis, the Army staff evaluates all proposed organizational changes to ensure that they meet the intent of senior Army leaders, that they have the resources available to accomplish their mission, and that their projected benefits justify increased resources. These assessments analyze the proposed organization in nine areas, such as staffing, structuring, equipping, and training, and are intended to give senior Army leaders an understanding of whether the organizations are affordable, supportable, and sustainable. According to Army officials, the force integration functional area analysis is similar to a risk assessment. In addition, Standards for Internal Control in the Federal Government state that management should identify, analyze, and respond to the risks related to achieving the defined objective—in this case, quickly fielding a cyber force to deal with current threats. Because the Army has not completely assessed the risk of organizing the ICEWS unit and the 915th Cyber Warfare Support Battalion, senior Army leaders may be left with an incomplete picture of the challenges in affording, supporting, and sustaining these units over the long term.
Moreover, senior Army leaders lacked key information needed to understand the capability and capacity of the units at the time they were activated. For example, these units currently do not have what they need in terms of personnel and equipment to conduct their missions successfully. Further, according to some Army officials, without such an assessment, the Army does not know whether accelerated activation was the best course of action; what challenges it may face in staffing, equipping, and training the units; or how to mitigate challenges that may arise in other areas, such as deployment and sustainment. Army officials stated that there is a lot of informal discussion among relevant Army offices to try to identify and deal with challenges for these units. However, they also acknowledged the problems inherent in activating a unit by accelerating timelines. Such risk assessments also could inform future Army decisions as it activates new units for multi-domain operations. Given the Army’s perception of the threat environment, the Army may decide to activate other multi-domain operations units in an accelerated manner. For example, the Army is exploring ideas for creating several new units in future years to enhance its capability in multi-domain operations, such as a Theater Space Warfare Battalion. The Army also has been running wargames to see how it would operate new types of units at the division, corps, and theater levels for commanding and operating long-range missiles and rockets. Army officials stated that as these units grow and evolve, it is uncertain when more comprehensive risk assessments would take place. If the Army does not perform a risk assessment for the activated ICEWS unit before it joins the larger Multi-Domain Task Force, or a more complete risk assessment for the 915th Cyber Warfare Support Battalion as that unit matures, the Army may end up fielding units that are not capable of providing the needed capabilities.
Moreover, these risk assessments could provide vital lessons to inform future Army decisions on the development, activation, and fielding of other units focused on enhancing the Army’s capability to conduct multi-domain operations.

The Army Engaged with the Joint Staff and Other Services and Envisions Opportunities for Further Coordination

The Army engaged with the Joint Staff and other services to develop its Army Operating Concept and envisions opportunities for further coordination in the future. The Army’s overarching objective is to field a multi-domain-capable force by 2028, and it considers further engagement with the Joint Staff and other services as essential to accomplishing that goal. According to Army plans, the Army needs to finalize the next version of its Army Operating Concept by the fall of 2019 in order to incorporate multi-domain operations into all levels of Army leadership, training, and education by 2020. The Army plans indicate that maintaining this schedule is important to have a ready, lethal, and modern force for multi-domain operations by 2028. From the outset, the Army engaged with the Marine Corps to begin its concept development. Together the Army and Marine Corps published a white paper in January 2017 where they unveiled “Multi-Domain Battle” as a new concept for combat operations against a sophisticated great-power competitor. This white paper highlighted the need for ground forces to focus on all five warfighting domains and was intended as a first step toward further multi-domain concept development, wargaming, experimentation, and capability development. Once the white paper was written, the Army engaged with the Joint Staff and the other services in several ways to refine its concept:

Joint Staff collaboration. The Army engaged with the Joint Staff on an Army-led study of recent contingency operations and used the lessons to refine the Army Operating Concept’s description of the emerging operational environment.
Based on that study, the Army also refined some solutions for addressing threats posed by great-power competitors. Joint Staff officials reported that the Army engaged with the Joint Staff through other collaborative events as well, including tabletop exercises that tested and refined multi-domain concept ideas.

Marine Corps collaboration. As the Army moved forward from the white paper, the Marine Corps’ input informed the concept’s development in various ways. This included changing the concept’s title from multi-domain battle to multi-domain operations in April 2018 to better reflect the scope of competition and conflict, as well as the inherent joint nature of modern warfare. The Marine Corps also hosted a multi-domain symposium in April 2018 that was attended by the Army, Air Force, Navy, and Joint Staff.

Air Force collaboration. The Army initially collaborated with the Air Force Air Combat Command to inform concept-development efforts, and more recently began working with the Air Force Warfighting Integration Capability under Air Force headquarters. Also, the Army and Air Force collaborated on tabletop exercises focused on simulating multi-domain operations. Army officials told us that this helped them refine their thinking on how to enhance the maneuverability of the Army’s land forces by combining Army and Air Force capabilities across domains.

Navy collaboration. The Army and Navy principally collaborated by testing multi-domain capabilities during real-world exercises. For example, the Army joined the Navy’s 2018 Rim of the Pacific exercise to demonstrate capabilities for multi-domain operations in a real-world environment.

While the Army took steps to engage with the Joint Staff and the other services, it made the decision to move forward with the latest version of its Army Operating Concept in order to meet its overarching objective to develop a multi-domain operations-capable force by 2028.
Given this urgency, Army officials told us that the Army may have missed opportunities to further refine its Army Operating Concept in 2018 with the perspectives of the Joint Staff and other military services. Joint Staff officials told us that by not fully including the Joint Staff in some tabletop exercises, the Army may have missed the Joint Staff’s perspective on key issues related to multi-domain operations, such as joint command and control. As the Army continues to revise its Army Operating Concept, the Army recognizes the need to continue to engage with the Joint Staff and other services. Joint Staff officials told us that the Joint Staff has initiated its own plans to engage with the services to refine key ideas of multi-domain operations in joint concepts, including logistics, intelligence, and command and control. Army officials told us that they recognize the importance of not getting too far ahead of these efforts, or the efforts of other services related to multi-domain operations. Army officials told us that the mechanisms built into the joint concept-development framework would provide opportunities to engage the services and Joint Staff as the Army revises its own concept. Army officials added that beginning in the fall of 2019 the Army will participate with the Joint Staff in a wargame designed, in part, to analyze how the Army Operating Concept works with the other military service operating concepts. As a result, the current concepts are likely to evolve in the future as the Army synchronizes its efforts with those of the Joint Staff and other services.

Conclusions

Rising threats posed by great-power competitors, particularly China and Russia, prompted the Army to initiate a profound and fundamental transformation to the way it plans to fight. The refinement of the Army’s Operating Concept is beginning to drive changes across the Army.
The Army is making near-term changes by incorporating multi-domain operations into its doctrine, organizations, and training, which includes the accelerated creation of new cyber and electronic warfare units. However, these units are short of both people and equipment. While Army leadership believes that the urgency to confront threats justifies its decision to accelerate the development of those units, the Army did not assess the risks associated with staffing, equipping, and training its existing ICEWS unit prior to activation to determine whether it is affordable, supportable, and sustainable, and officials said it was uncertain when a more comprehensive assessment would take place. The Army plans to incorporate this unit into the first Multi-Domain Task Force by the end of fiscal year 2020, but in the meantime the unit could be deployed if needed. The Army did prepare a preliminary risk assessment for the 915th Cyber Warfare Support Battalion prior to activation, but it is unclear whether the Army will perform a more comprehensive risk assessment as the unit matures and nears full operational capability. For the units already activated, a risk assessment could benefit the Army by providing insights about the ability to deploy and sustain the units. It is important for the Army to assess its efforts before committing resources to activate new units. By formally assessing the risk of all new units activated in an accelerated manner, the Army will have the key information its leaders need for making decisions related to the activation of those units and other related units going forward.

Recommendations for Executive Action

We are making the following three recommendations to the Secretary of the Army. The Secretary of the Army should ensure that the Deputy Chief of Staff, G-3/5/7 assess the risk associated with staffing, equipping, and training the existing ICEWS unit prior to its incorporation into the first Multi-Domain Task Force in fiscal year 2020.
(Recommendation 1)

The Secretary of the Army should ensure that the Deputy Chief of Staff, G-3/5/7 conduct a comprehensive risk assessment associated with staffing, equipping, and training the 915th Cyber Warfare Support Battalion prior to approving the expansion of the unit to its full operational capability. (Recommendation 2)

The Secretary of the Army should ensure that the Deputy Chief of Staff, G-3/5/7 assess the risk associated with staffing, equipping, and training of new units that it plans to activate in an accelerated manner for the purposes of conducting multi-domain operations, taking into consideration the assessments performed on the first activated ICEWS battalion and the 915th Cyber Warfare Support Battalion. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix I, the Army partially concurred with the first two recommendations and concurred with the third recommendation. The Army partially concurred with the first recommendation for it to conduct a risk assessment, such as a force integration functional area analysis, for the first activated ICEWS unit. The Army stated in its comments that it does not perform force integration functional area analyses for experimental or pilot organizations, and that because the first ICEWS was activated as a pilot, no such assessment was performed. The Army added that it would conduct a risk assessment at the conclusion of the pilot if and when the Army decides to establish such a unit. We met with Army officials to discuss their comments, during which they provided additional information and clarification regarding how they were assessing risks for the unit.
Based on this information, we modified the report to reflect the Army’s position that a risk assessment was unnecessary prior to activating the unit because the Army plans on using the pilot period to determine the staffing, equipping, and training requirements for the unit. We also incorporated additional information on the status of the ICEWS unit. As a result, we clarified our recommendation to state that the Army should assess the risk associated with staffing, equipping, and training the existing ICEWS unit prior to its incorporation into the first Multi-Domain Task Force in fiscal year 2020. Army officials generally agreed with the revised recommendation. Moving forward, it will be important for the Army to implement this recommendation to ensure the ICEWS unit, which is active and eligible to be deployed, will be prepared to carry out its mission effectively. The Army partially concurred with the second recommendation for it to conduct a risk assessment, such as a force integration functional area analysis, for the 915th Cyber Warfare Support Battalion. The Army stated in its comments that it does not perform force integration functional area analyses for force generating units such as the 915th Cyber Warfare Support Battalion. Instead, it develops a concept plan, which applies rigor and analysis to determine the most efficient and effective way of fielding a new unit. We met with Army officials to discuss their comments, during which they provided additional information related to assessing risks for the 915th Cyber Warfare Support Battalion. Specifically, Army officials said that prior to activating the battalion, leadership approved the battalion’s concept plan, which included an initial risk assessment. We reviewed the concept plan for the battalion and found that the assessment only addressed the risk of not having the unit’s capabilities activated and in the field for operations. 
We incorporated this additional information on this initial risk assessment for the 915th Cyber Warfare Support Battalion into the report. As a result of this additional information, we clarified our recommendation to state that the Army should conduct a comprehensive risk assessment associated with staffing, equipping, and training the 915th Cyber Warfare Support Battalion prior to approving the expansion of the unit to its full operational capability. Army officials generally agreed with this. It will be important for the Army to implement the revised recommendation to ensure the 915th Cyber Warfare Support Battalion, which is active and performing operations, will be prepared to carry out its mission effectively. The Army concurred with the third recommendation for it to ensure that a risk assessment is conducted before activating any new organizations it plans to field in an accelerated manner for the purposes of conducting multi-domain operations. The Army added that any lessons learned from the activation of the first ICEWS unit and the 915th Cyber Warfare Support Battalion will be taken into consideration when assessing the risk before the activation of these new organizations. It will be important for the Army to implement the recommendation to ensure that any new organizations are prepared to carry out their missions, while potentially avoiding some of the challenges that the ICEWS and 915th Cyber Warfare Support Battalion have experienced. Lastly, the Army also recommended that we change the title of our report; however, we did not accept the title offered by the Army. We believe the title accurately reflects the issues and recommendations highlighted in the report. 
We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense; the Acting Under Secretary of Defense for Personnel and Readiness; the Chairman of the Joint Chiefs of Staff; the Acting Secretaries of the Departments of the Air Force and the Army; the Secretary of the Navy; and the Chief of Staff of the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of the Army

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kevin O’Neill (Assistant Director), Matt Spiers (Analyst-in-Charge), Tracy Barnes, Shannon Finnegan, Christopher Gezon, Ruben Gzirian, J. Kristopher Keener, Alberto Leff, Joshua Leiling, Amie Lesser, Jon Ludwigson, Ned Malone, and Clarice Ransom made key contributions to this report.
Why GAO Did This Study

The rise of great-power competitors, such as China and Russia, prompted the Army to transform the way it plans to fight. The Army is developing a new warfighting concept to guide how its forces will engage jointly with other services in multiple domains, especially in cyber and space. The House Armed Services Committee included a provision in House Report 115-200 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 for GAO to review the Army's implementation of the concept. Among its objectives, this report addresses (1) how the Army is changing its doctrine, organizations, and training in order to execute multi-domain operations; and (2) the extent to which the Army has established new cyber and electronic warfare units, including any challenges faced by these units, and whether the Army assessed risks associated with its plan to establish these units. GAO reviewed Army concepts, doctrine, force design, and training documents concerning multi-domain operations. GAO also interviewed Army and Department of Defense officials.

What GAO Found

The Army is changing aspects of its doctrine, organizations, and training to develop a force that can effectively engage great-power competitors—Russia and China—through multi-domain operations by 2028. Multi-domain operations present adversaries with multiple challenges across multiple domains (land, air, sea, cyber, and space) in contested environments. To this end, the Army is revising its doctrine to guide how the force and specific units will function. The Army is also reorganizing its force by creating new units to conduct missions in multiple domains and by updating the responsibilities of key Army formations, such as Army divisions. Also, the Army is training its combat forces for multi-domain operations in part by increasing the focus on cyber operations.
The Army is establishing new cyber and electronic warfare units for multi-domain operations, but did not fully assess the risk of activating some units at an accelerated pace and is experiencing staffing, equipping, and training challenges. For example, the Army activated a cyber battalion in December 2018, and as of March 2019, this unit was understaffed by more than 80 percent. Army guidance directs the Army staff to conduct assessments on new units to determine whether the Army can staff, equip, and train these organizations. However, Army leadership believed that the threats justified developing these units at an accelerated pace. Consequently, the Army did not assess the staffing, equipping, and training risk before activating one unit, and only conducted an initial risk assessment before activating a second unit. As a result, senior Army leaders may not know what other challenges could arise, such as sustainment, as the units grow in capability. Army officials told GAO that as these units evolve, it is uncertain when more comprehensive risk assessments would take place. The Army has previously accelerated the activations of other units when it saw fit to do so, and is considering creating other new units for multi-domain operations. If the Army does not assess risks for units activated at an accelerated pace, those units may be unable to effectively conduct multi-domain operations.

What GAO Recommends

GAO is making three recommendations, including that the Army comprehensively assess the risk of staffing, equipping, and training the cyber and electronic warfare units that it has activated at an accelerated pace, and to do so for new organizations it plans to activate in an accelerated manner for multi-domain operations. The Army concurred with one recommendation and partially concurred with two recommendations. GAO clarified the recommendations, as discussed in the report.
Status of Major Space Systems

DOD space systems support and provide a wide range of capabilities to a large number of users, including the military services, the intelligence community, civil agencies, and others. These capabilities include positioning, navigation, and timing; meteorology; missile warning; and secure communications, among others. Space systems can take a long time to develop and involve multiple segments, including space, ground control stations, terminals, user equipment, and launch, as figure 1 below shows. DOD satellite systems are also expensive to acquire. Unit costs for current DOD satellites can range from $500 million to over $3 billion. The associated ground systems can cost over $6 billion to develop and maintain and the cost to launch a satellite can climb to well over $100 million. Table 1 provides highlights of the current status of DOD’s major space programs. As the table shows, DOD is also in the beginning phases of acquiring several constellations of new satellites and ground processing capabilities—including for missile warning, protected communications, space-based environmental monitoring, and space command and control. We have work underway to assess the Air Force’s space command and control development efforts and examine DOD’s analysis of alternatives for wideband communication services. For a more complete description of these major space programs, see appendix I. In addition, DOD is exploring alternatives for acquiring wideband satellite communications as well as funding development of new launch vehicles as it pursues a new acquisition strategy for procuring launch services. Our prior work has shown that many major DOD space programs have experienced significant cost increases and schedule delays.
For instance, the total program cost for the Advanced Extremely High Frequency (AEHF) satellite program, a protected satellite communications system, has grown 117 percent since the program’s original cost estimate, and its first satellite was launched more than 3.5 years late. For the Space Based Infrared System (SBIRS), a missile warning satellite program, the program cost grew 265 percent from its original estimate and the launch of the first satellite was delayed roughly 9 years. Both programs have since moved to the production phase, where fewer problems tend to surface and where there is typically less risk of significant cost and schedule growth. A more recent major satellite program, Global Positioning System (GPS) III, has seen an almost 4-year delay due to technical issues and program cost growth of about 32 percent. Cost and schedule growth has also been a challenge for satellite ground systems and user equipment. Ground system delays have been so lengthy that satellites sometimes spend years in orbit before key capabilities can be fully exploited. For example, the command and control system for GPS III satellites, known as the Next Generation Operational Control System, or OCX, is approximately 5 years behind schedule. As a result, the Air Force has had to start two separate back-up efforts to modify the current ground system to ensure the continuity of GPS capabilities and to make anti-jamming capabilities available via Military Code, or M-code, until OCX is delivered. Our ongoing review of GPS includes an assessment of OCX schedule risk and potential impacts on OCX delivery, acceptance, and operation. We expect to issue our report on GPS in spring 2019. Development of GPS user equipment that can utilize the M-code signal has lagged behind the fielding of GPS M-code satellites for more than a decade, due to prolonged development challenges.
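The cost-growth percentages cited above are simple ratios of current estimates to original baselines. A minimal sketch of that arithmetic, using hypothetical baseline figures for illustration rather than actual program data:

```python
def percent_growth(original_estimate, current_estimate):
    """Percent cost growth relative to a program's original baseline estimate."""
    return (current_estimate - original_estimate) / original_estimate * 100

# Hypothetical figures in billions of dollars, for illustration only;
# these are not actual AEHF or SBIRS program estimates.
print(round(percent_growth(4.0, 8.68)))  # a 117 percent increase
print(round(percent_growth(2.0, 7.30)))  # a 265 percent increase
```

The same calculation underlies the roughly 32 percent growth figure for GPS III; only the baselines and current estimates differ.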
In December 2017, we found that while DOD had made some progress on initial testing of the receiver cards needed to utilize the M-code signal, additional development was necessary to make M-code work with the over 700 weapon systems that require it. We also found that DOD had begun initial planning to transition some weapon systems to use M-code receivers, but significantly more work remained to understand the cost and schedule of transitioning to M-code receivers across DOD. Further, in December 2017, we found that multiple entities were separately maturing their own receiver cards. We recommended that DOD assign responsibility to a single organization to collect test data, lessons learned, and design solutions so that common design solutions are employed and DOD could avoid duplication of efforts. DOD concurred with the recommendation, but has not yet taken action on it. We have previously reported that over 90 percent of the capabilities to be provided by Mobile User Objective System communications satellites—currently, five satellites are in orbit, the first of which launched in 2012—are being underutilized because of difficulties with integrating the space, ground, and terminal segments and delays in fielding compatible user terminals. Largely because of technical and management challenges, the Joint Space Operations Center Mission System (JMS) Increment 2 program—intended to replace and improve upon an aging space situational awareness and command and control system—was almost 3 years behind schedule and 42 percent over budget before the Air Force stopped development work last year. Earlier this month, we reported that operational testing in 2018 found that JMS Increment 2 was not operationally effective or suitable due, in part, to missing software requirements, urgent deficiencies that affected system performance, and negative user feedback. 
Cost and schedule growth in DOD’s space programs is sometimes driven by the inherent risks associated with developing complex space technology; however, over the past 10 years we have identified a number of other management and oversight problems that have worsened the situation. These include making overly optimistic cost and schedule estimates, pushing programs forward without sufficient knowledge about technology and design, and experiencing problems in overseeing and managing contractors, among others. We have also noted that some of DOD’s programs with operational satellites, such as SBIRS, were also exceedingly ambitious, which in turn increased technology, design, and engineering risks. While SBIRS and other satellite programs provide users with important and useful capabilities, their cost growth has significantly limited the department’s buying power at a time when more resources may be needed to protect space systems and recapitalize the space portfolio. Challenges Facing Acquisitions of New Space Systems DOD faces significant challenges as it replenishes its satellite constellations. First, DOD is confronted with growing threats in space, which may require very different satellite architectures and acquisition strategies. Second, DOD is in the midst of planning major changes to its leadership for space. While these changes are designed to streamline decision-making and bring together a dispersed space workforce, they could cause some disruption to space system acquisition programs. Third, in fiscal year 2016, Congress gave DOD authority to speed up acquisition timeframes by streamlining acquisition processes and oversight. GAO is examining DOD’s application of streamlining to its weapons programs. For space, challenges with past streamlining efforts may offer some lessons learned. And fourth, DOD may face resource and capacity challenges in taking on multiple space acquisitions at one time. 
For example, our work and other reports point to potential gaps in the space acquisition workforce and ongoing difficulties managing software development. Growing Threats to Satellites Require New Approaches According to Air Force Space Command and others, U.S. space systems face intentional and unintentional threats that have increased rapidly over the past 20 years. These include radio frequency interference (including jamming), laser attacks, kinetic intercept vehicles, and ground system attacks. Additionally, the hazards of the already-harsh space environment (e.g., extreme temperature fluctuations and radiation) have increased, including growing numbers of active and inactive satellites, spent rocket bodies, and other fragments and debris. According to a February 2019 Defense Intelligence Agency report, China and Russia in particular are developing a variety of means to exploit perceived U.S. reliance on space-based systems and challenge the U.S. position in space. The report also states that Iran and North Korea have demonstrated some counterspace capabilities that could pose a threat to militaries using space-based services. In response, recent governmentwide and DOD strategic and policy guidance have stressed the need for U.S. space systems to be survivable or resilient against such threats, and DOD has taken steps to be more resilient in some of its new programs. As we found in October 2014, one way to do this is to build more disaggregated systems, including dispersing sensors onto separate satellites; using multiple domains, including space, air, and ground, to provide full mission capabilities; hosting payloads on other government or commercial spacecraft; or some combination of these.
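A deliberately simplified toy model illustrates why disaggregation can improve survivability. The even-spread assumption here is ours for illustration; real constellations involve redundancy and interdependencies that this ignores:

```python
def fraction_of_capability_lost(constellation_size, satellites_disabled):
    """Toy model: capability is assumed to be spread evenly across the
    constellation, so disabling k of n satellites removes k/n of capability."""
    return satellites_disabled / constellation_size

# Monolithic architecture: one large multifunctional satellite.
print(fraction_of_capability_lost(1, 1))  # 1.0, a single loss removes everything

# Disaggregated architecture: the same missions spread across 8 smaller satellites.
print(fraction_of_capability_lost(8, 1))  # 0.125, one loss removes an eighth
```

Even this crude model shows the core tradeoff: distributing capability raises the number of assets an adversary must attack to deny a full mission, at the price of more satellites to acquire and manage.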
With capabilities distributed across multiple platforms, rather than centralized onto just a few satellites, it may be more difficult for an adversary to target all assets to attack full system capabilities, and if an attack does take place, the loss of one smaller satellite or payload could result in less capability loss than damage to, or loss of, a large multifunctional satellite. In addition to disaggregation, DOD could make satellites more maneuverable and build in defense capabilities to protect themselves as a means to increase survivability. We also found in October 2014 that some of these options could have beneficial impacts on acquisition. For example, acquiring smaller, less complex satellites may require less time and effort to develop and produce. This may be in part due to improved requirements discipline, as more frequent production rates may allow program managers to delay new requirements to the next production cycle instead of incorporating them into ongoing timelines midstream. Building more, less-complex satellites might also provide DOD the opportunity to use commercial products and systems that have already been tested in the market. At the same time, however, addressing the need to make satellites more resilient could introduce complications. For example, DOD may need to acquire higher quantities of satellites, which may make it more difficult to manage acquisition schedules. In addition, potentially more development and production contracts may result in more complexity for program offices to manage, requiring increased oversight of contractors. Adding more satellites and new technologies may also complicate efforts to synchronize satellite, terminal, and ground system schedules, limiting delivery of capabilities to end users. Our work has also found potential barriers to making satellites more resilient. 
For example, in October 2014, we found that disaggregation could require DOD to make significant cultural and process changes in how it acquires space systems—for instance, by relying on new contractors, relinquishing control to providers who host government payloads on commercial satellites, using different contracting methods, and executing smaller but more numerous and faster-paced acquisition programs. It will likely require DOD to be more flexible and agile when it comes to satellite acquisitions, especially with regard to coordinating satellite delivery with interdependent systems, such as user equipment. Yet, as we have previously found, DOD’s culture has generally been resistant to changes in space acquisition approaches, and fragmented responsibilities have made it very difficult to coordinate and deliver interdependent systems. Senior leaders have recognized the need to change the space acquisition culture, and as discussed below, changes are being made to space leadership and acquisition approaches. More recently, in July 2018, we found that two factors have contributed to DOD’s limited use of commercially hosted payloads. First, DOD officials identified logistical challenges to matching government payloads with any given commercial host satellite. For example, most of the offices we spoke with cited size, weight, and power constraints, among others, as barriers to using hosted payloads. Second, while individual DOD offices have realized cost and schedule benefits from using hosted payloads, DOD as a whole has limited information on the costs and benefits of hosted payloads, and the knowledge DOD has obtained is fragmented across the agency, with multiple offices collecting piecemeal information on the use of hosted payloads. This limited and fragmented knowledge has contributed to resistance among space acquisition officials to adopting the hosted payload approach.
We recommended, and DOD concurred, that the department bolster and centralize collection and analysis of cost, technical, and lessons learned data on its use of hosted payloads. Lastly, in October 2018, we found that DOD faced mounting challenges in protecting its weapon systems—satellites and their ground systems included—from increasingly sophisticated cyber threats. We reported that this was due to the computerized nature of weapon systems, DOD’s late start in prioritizing weapon system cybersecurity, and DOD’s nascent understanding of how to develop more secure weapon systems. In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and even discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and operate largely undetected, due in part to basic issues such as poor password management and unencrypted communications. DOD has recently taken several steps to improve weapon system cybersecurity, including issuing and revising policies and guidance to better incorporate cybersecurity considerations. Further, in response to congressional direction, DOD has also begun initiatives to better understand and address cyber vulnerabilities. Space Leadership Changes Are a Positive Step, But Have Some Risk We and others have reported for over two decades that fragmentation and overlap in DOD space acquisition management and oversight have contributed to program delays and cancellations, cost increases, and inefficient operations. For example, in February 2012 we found that fragmented leadership contributed to a 10-year gap between the delivery of GPS satellites and associated user equipment. The cancellations of several large programs over the past 2 decades were in part because of disagreements and conflicts among stakeholders. 
In July 2016, in response to a provision of a Senate Report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2016, we issued a report that reviewed space leadership in more depth and concluded that DOD space leadership was fragmented. We identified approximately 60 stakeholder organizations across DOD, the Executive Office of the President, the Intelligence Community, and civilian agencies. Of these, eight organizations had space acquisition management responsibilities; eleven had oversight responsibilities; and six were involved in setting requirements for defense space programs. At the same time, many experts stated that no one seemed to be in charge of space acquisitions. Our report highlighted the pros and cons of various options to reorganize space functions recommended in prior congressionally-chartered studies. The issue has taken on more importance in recent years, as DOD has realized satellites are highly vulnerable to attacks and needs to make dramatic changes in space system architectures and operations. We have found that leadership has not been focused enough to overcome interagency rivalries and resistance to change, and it has not been able to get concurrence on future architectures. The President’s Administration and DOD have taken significant steps to change space leadership. Most recent is the President’s Space Policy Directive-4, issued on February 19, 2019, and DOD’s subsequent legislative proposal submitted on March 1, 2019, to establish a United States Space Force as a sixth branch of the United States Armed Forces within the Department of the Air Force. 
The Policy Directive states that this is an important step toward a future military department for space and that the Space Force will (1) consolidate existing forces and authorities for military space activities, as appropriate, to minimize duplication of effort and eliminate bureaucratic inefficiencies; and (2) not include the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, the National Reconnaissance Office, or other non-military space organizations or missions of the United States Government. According to the Policy Directive, the Space Force would include the uniformed and civilian personnel conducting and directly supporting space operations from all DOD Armed Forces, assume responsibilities for all major military space acquisition programs, and create the appropriate career tracks for military and civilian space personnel across all relevant specialties. Pertaining to organization and leadership, the Policy Directive creates a civilian Under Secretary of the Air Force for Space, to be known as the Under Secretary for Space, appointed by the President, and establishes a Chief of Staff of the Space Force, who would serve as a member of the Joint Chiefs of Staff. Furthermore, the Policy Directive states that as the Space Force matures, and as national security requires, it will become necessary to create a separate military department, to be known as the Department of the Space Force. This department would take over some or all responsibilities for the Space Force from the Department of the Air Force. The Policy Directive requires the Secretary of Defense to conduct periodic reviews to determine when to recommend that the President seek legislation to establish such a department. Our past work has identified fragmentation in space leadership, but because implementation has not yet occurred, it remains to be seen whether this policy directive and proposed legislation would resolve these issues. 
In implementing these changes there are many complexities to consider. For example, because space capabilities are acquired and used across the military services and defense agencies, it will be important to address many details on how to implement a Space Force among these equities. Our past work suggests that without close attention to the consequences of the compromises that will inevitably have to be made to carve out a new force structure from existing space functions, there is risk of exacerbating the fragmentation and ineffective management and oversight the Space Force is intended to address. For instance, earlier this month, DOD established the Space Development Agency to unify and integrate efforts across DOD to define, develop, and field innovative solutions. But it is unclear how this new organization will mesh with the Air Force Space and Missile Systems Center, which acquires satellites, the Defense Advanced Research Projects Agency, which creates breakthrough technologies and capabilities, and similar organizations. Moreover, even if changes are implemented effectively, they are only a first step toward addressing space acquisition problems. As we discuss below, programs will still need to embrace acquisition best practices, such as using demonstrable knowledge to make decisions. Our prior work has found that they will also need to be open to flexible and innovative approaches, and work effectively with a very wide range of stakeholders, including those that will not be part of the Space Force, such as the intelligence agencies, civilian space agencies, the current military services, as well as entities within the Office of the Secretary of Defense who help oversee and manage acquisitions. 
Senior leaders have acknowledged that additional changes are needed and have taken steps to help bring them about, such as the restructuring of the Air Force’s Space and Missile Systems Center, which is designed to break down stovepipes and streamline acquisition processes. Past Streamlining Efforts Offer Lessons Learned DOD is managing a number of new space acquisition programs using a new authority, established under Section 804 of the National Defense Authorization Act for Fiscal Year 2016, which is to provide a streamlined alternative to the traditional DOD acquisition process. Specifically, the programs—which include follow-on missile warning and protected communications satellites, among others—will be exempted from the acquisition and requirements processes defined by DOD Directive 5000.01 and the Joint Capabilities Integration and Development System. Instead, program managers are encouraged to use a tailored approach to documentation and oversight to enable them to demonstrate new technologies or field new or updated systems within 2 to 5 years. We have ongoing work looking across the military departments at how middle-tier acquisition authority is being implemented, including for the Air Force’s space acquisition programs, and plan to issue a report later this spring. GAO and others have highlighted lessons learned from past efforts to streamline, specifically with an approach adopted for space systems in the 1990s known as Total System Performance Responsibility (TSPR). TSPR was intended to facilitate acquisition reform and enable DOD to streamline its acquisition process and leverage innovation and management expertise from the private sector. Specifically, TSPR gave a contractor total responsibility for the integration of an entire weapon system and for meeting DOD’s requirements. 
We found in May 2009 that because this reform made the contractor responsible for day-to-day program management, DOD did not require formal deliverable documents—such as earned value management reports—to assess the status and performance of the contractor. As a result, DOD’s capability to lead and manage the space acquisition process diminished, which magnified problems related to unstable requirements and poor contractor performance. Further, the reduction in DOD oversight and involvement led to major reductions in various government capabilities, including cost- estimating and systems-engineering staff. This, in turn, led to a lack of technical data needed to develop sound cost estimates. Best practices that we identified in the aftermath of TSPR include retaining strong oversight and insight into programs; using quantifiable data and demonstrable knowledge to make decisions to proceed, not allowing development to proceed until certain thresholds are met, empowering program managers to make decisions on the direction of the program but also holding them accountable for their choices, and canceling unsuccessful programs. Similarly, in its study of TSPR programs, the Defense Science Board/Air Force Scientific Advisory Board Joint Task Force emphasized the importance of managing requirements, sufficiently funding programs, participating in trade-off studies, and assuring that proven engineering practices characterize program implementation, among other actions. See appendix II for a more complete list of the best practices we have identified for developing complex systems. DOD May Face Resource and Capacity Challenges in Taking on Multiple Programs at One Time DOD is simultaneously undertaking new major acquisition efforts to replenish its missile warning, protected communications, GPS, and weather satellites. At the same time, it is boosting efforts to increase space situational awareness and protect space assets. 
It is also helping to fund the development of new launch vehicles, and it is considering additional significant acquisitions in wideband satellite communications and in support of missile defense activities. While there is increased attention within DOD on funding for space and building the Space Force, such widespread acquisition activities could still pose resource challenges. For example: Funding requests for space system modernization have in the past 10 years represented a small percentage (3.9 to 5 percent) of the total weapon system modernization funding DOD requested. Space is competing with ships, aircraft, and the nuclear triad, among other programs, for funding. This can be challenging, because over the past 2 years, DOD has begun more than nine new space acquisition programs to recapitalize current space capabilities and enhance system resiliency. In the past, we have found that it has been difficult for DOD to fund multiple new space programs at one time, particularly when it was concurrently struggling with cost overruns and schedule delays from its legacy programs. For example, OCX system development challenges have resulted in a $2.5 billion cost increase and an approximately 5-year delay to the system becoming operational—using more resources for a longer time—at a cost to other programs. It is unclear whether DOD has a sufficient workforce to manage multiple new space programs. We issued a report this month that found DOD did not routinely monitor the size, mix, and location of its space acquisition workforce. We collected and aggregated data from multiple DOD space acquisition organizations and found that at least 8,000 personnel in multiple locations nationwide were working on space acquisition activities at the end of 2017. Echoing concerns raised in our prior work, we also found that DOD had difficulty attracting and retaining candidates with the requisite technical expertise.
Officials from the Air Force’s Space and Missile Systems Center were concerned that there are not enough experienced mid-level acquisition personnel and also expressed concern that the bulk of military personnel assigned to program management positions were more junior in rank than the Center was authorized to obtain. We recommended that DOD (1) identify the universe of its space acquisition programs and the organizations that support them, and (2) collect and maintain data on the workforce supporting these programs. DOD concurred with our first recommendation but not the second. Software is an increasingly important enabler of DOD space systems. However, DOD has struggled to deliver software-intensive space programs that meet operational requirements within expected time frames. Although user involvement is critical to the success of any software development effort, we found in our report issued earlier this month on DOD software-intensive space programs that key programs that experienced cost or schedule breaches often did not effectively engage users to understand requirements and obtain feedback. Program efforts to involve users and incorporate feedback frequently did not match plans. The lack of user engagement has contributed to systems that were later found to be operationally unsuitable. The programs we reviewed also faced challenges in delivering software in shorter time frames and in using commercial software; in addition, they applied outdated tools and metrics and had limited knowledge and training in newer software development techniques. DOD acknowledged these challenges and is taking steps to address them, including identifying useful software development metrics and ways to include them in new contracts. We recommended, and DOD concurred, that the department ensure its guidance addressing software development provides specific, required direction on the timing, frequency, and documentation of user involvement and feedback.
Moreover, it should be noted that software development has been a struggle for other non-space weapons programs as well. The Defense Innovation Board recently reported that the department’s current approach to software development is broken and is a leading source of risk to DOD—it takes too long, is too expensive, and exposes warfighters to unacceptable risk by delaying their access to the tools they need to assure mission success. Chairman Fischer, Ranking Member Heinrich, and Members of the Subcommittee, this concludes my statement. I am happy to answer any questions that you have. GAO Contact and Staff Acknowledgements If you or your staff have any questions about this statement, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Rich Horiuchi, Assistant Director; Burns C. Eckert; Emily Bond; Claire Buck; Maricela Cherveny; Erin Cohen; Susan Ditto; Laura Hook; and Anne Louise Taylor. Key contributors for the previous work on which this statement is based are listed in the products cited. Appendix I: Status of Major Department of Defense Space Acquisitions Appendix II: Best Practices GAO Has Identified for Space and Weapons Systems Acquisitions Our previous work on weapons acquisitions in general, and space programs in particular, identified best practices for developing complex systems. We summarize these best practices in table 3, below. Related GAO Products DOD Space Acquisitions: Including Users Early and Often in Software Development Could Benefit Programs. GAO-19-136. Washington, D.C.: March 18, 2019. Defense Space Systems: DOD Should Collect and Maintain Data on Its Space Acquisition Workforce. GAO-19-240. Washington, D.C.: March 14, 2019. Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities. GAO-19-128.
Washington, D.C.: October 9, 2018. Military Space Systems: DOD’s Use of Commercial Satellites to Host Defense Payloads Would Benefit from Centralizing Data. GAO-18-493. Washington, D.C.: July 30, 2018. Weapon Systems Annual Assessment: Knowledge Gaps Pose Risks to Sustaining Recent Positive Trends. GAO-18-360SP. Washington, D.C.: April 25, 2018. Global Positioning System: Better Planning and Coordination Needed to Improve Prospects for Fielding Modernized Capability. GAO-18-74. Washington, D.C.: December 12, 2017. Space Launch: Coordination Mechanisms Facilitate Interagency Information Sharing on Acquisitions. GAO-17-646R. Washington, D.C.: August 9, 2017. Satellite Acquisitions: Agencies May Recover a Limited Portion of Contract Value When Satellites Fail. GAO-17-490. Washington, D.C.: June 9, 2017. Space Acquisitions: DOD Continues to Face Challenges of Delayed Delivery of Critical Space Capabilities and Fragmented Leadership. GAO-17-619T. Washington, D.C.: May 17, 2017. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-17-333SP. Washington, D.C.: March 30, 2017. Global Positioning System: Observations on Quarterly Reports from the Air Force. GAO-17-162R. Washington, D.C.: October 17, 2016. Defense Space Acquisitions: Too Early to Determine if Recent Changes Will Resolve Persistent Fragmentation in Management and Oversight. GAO-16-592R. Washington, D.C.: July 27, 2016. Evolved Expendable Launch Vehicle: DOD Is Assessing Data on Worldwide Launch Market to Inform New Acquisition Strategy. GAO-16-661R. Washington, D.C.: July 22, 2016. Defense Weather Satellites: DOD Faces Acquisition Challenges for Addressing Capability Needs. GAO-16-769T. Washington, D.C.: July 7, 2016. Defense Weather Satellites: Analysis of Alternatives is Useful for Certain Capabilities, but Ineffective Coordination Limited Assessment of Two Critical Capabilities. GAO-16-252R. Washington, D.C.: March 10, 2016.
Space Acquisitions: Challenges Facing DOD as it Changes Approaches to Space Acquisitions. GAO-16-471T. Washington, D.C.: March 9, 2016.
Space Acquisitions: GAO Assessment of DOD Responsive Launch Report. GAO-16-156R. Washington, D.C.: October 29, 2015.
Space Situational Awareness: Status of Efforts and Planned Budgets. GAO-16-6R. Washington, D.C.: October 8, 2015.
GPS: Actions Needed to Address Ground System Development Problems and User Equipment Production Readiness. GAO-15-657. Washington, D.C.: September 9, 2015.
Evolved Expendable Launch Vehicle: The Air Force Needs to Adopt an Incremental Approach to Future Acquisition Planning to Enable Incorporation of Lessons Learned. GAO-15-623. Washington, D.C.: August 11, 2015.
Defense Satellite Communications: DOD Needs Additional Information to Improve Procurements. GAO-15-459. Washington, D.C.: July 17, 2015.
Space Acquisitions: Some Programs Have Overcome Past Problems, but Challenges and Uncertainty Remain for the Future. GAO-15-492T. Washington, D.C.: April 29, 2015.
Space Acquisitions: Space Based Infrared System Could Benefit from Technology Insertion Planning. GAO-15-366. Washington, D.C.: April 2, 2015.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-15-342SP. Washington, D.C.: March 12, 2015.
Defense Major Automated Information Systems: Cost and Schedule Commitments Need to Be Established Earlier. GAO-15-282. Washington, D.C.: February 26, 2015.
DOD Space Systems: Additional Knowledge Would Better Support Decisions about Disaggregating Large Satellites. GAO-15-7. Washington, D.C.: October 30, 2014.
U.S. Launch Enterprise: Acquisition Best Practices Can Benefit Future Efforts. GAO-14-776T. Washington, D.C.: July 16, 2014.
2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-14-343SP. Washington, D.C.: April 8, 2014.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-14-340SP. Washington, D.C.: March 31, 2014.
Space Acquisitions: Acquisition Management Continues to Improve but Challenges Persist for Current and Future Programs. GAO-14-382T. Washington, D.C.: March 12, 2014.
Evolved Expendable Launch Vehicle: Introducing Competition into National Security Space Launch Acquisitions. GAO-14-259T. Washington, D.C.: March 5, 2014.
The Air Force’s Evolved Expendable Launch Vehicle Competitive Procurement. GAO-14-377R. Washington, D.C.: March 4, 2014.
Space Acquisitions: Assessment of Overhead Persistent Infrared Technology Report. GAO-14-287R. Washington, D.C.: January 13, 2014.
Space: Defense and Civilian Agencies Request Significant Funding for Launch-Related Activities. GAO-13-802R. Washington, D.C.: September 9, 2013.
Global Positioning System: A Comprehensive Assessment of Potential Options and Related Costs is Needed. GAO-13-729. Washington, D.C.: September 9, 2013.
Space Acquisitions: DOD Is Overcoming Long-Standing Problems, but Faces Challenges to Ensuring Its Investments are Optimized. GAO-13-508T. Washington, D.C.: April 24, 2013.
Satellite Control: Long-Term Planning and Adoption of Commercial Practices Could Improve DOD’s Operations. GAO-13-315. Washington, D.C.: April 18, 2013.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. Washington, D.C.: March 28, 2013.
Launch Services New Entrant Certification Guide. GAO-13-317R. Washington, D.C.: February 7, 2013.
Evolved Expendable Launch Vehicle: DOD Is Addressing Knowledge Gaps in Its New Acquisition Strategy. GAO-12-822. Washington, D.C.: July 26, 2012.
Space Acquisitions: DOD Faces Challenges in Fully Realizing Benefits of Satellite Acquisition Improvements. GAO-12-563T. Washington, D.C.: March 21, 2012.
Space and Missile Defense Acquisitions: Periodic Assessment Needed to Correct Parts Quality Problems in Major Programs. GAO-11-404. Washington, D.C.: June 24, 2011.
Space Acquisitions: Development and Oversight Challenges in Delivering Improved Space Situational Awareness Capabilities. GAO-11-545. Washington, D.C.: May 27, 2011.
Space Acquisitions: DOD Delivering New Generations of Satellites, but Space System Acquisition Challenges Remain. GAO-11-590T. Washington, D.C.: May 11, 2011.
Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. Washington, D.C.: September 15, 2010.
Defense Acquisitions: Challenges in Aligning Space System Components. GAO-10-55. Washington, D.C.: October 29, 2009.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
DOD space systems provide critical capabilities that support military and other government operations. They can also be expensive to acquire and field, costing billions of dollars each year. As DOD seeks to replenish its satellite constellations, it faces a number of challenges to ensuring funds are used effectively. Because space-based capabilities are fundamental to U.S. national security and civilian activities, it is essential that DOD manage its space system acquisitions carefully and avoid repeating past problems. This statement provides an update on DOD's space acquisitions, focusing on challenges facing acquisitions of new space systems. This statement is based on GAO reports issued over the past 10 years on DOD space programs. In addition, it draws on recent work performed in support of GAO's 2019 annual reports on the progress of major defense acquisition programs as well as duplication, overlap, and fragmentation across the federal government, among other sources.
What GAO Found
DOD is simultaneously undertaking new major acquisitions to replenish its missile warning, protected communications, navigation, and weather satellites. At the same time, it is boosting efforts to increase space situational awareness and protect space assets. Such widespread acquisition activities could face a wide range of resource and management challenges that GAO has reported on, including:
Growing threats to satellites. Threats to satellites from both adversaries—such as jamming and cyber attacks—and space debris are increasing. DOD is making changes to how it designs its space systems to increase the resilience and survivability of space capabilities. But it has been challenged in adopting new approaches, such as using commercial satellites to host payloads, and in prioritizing cybersecurity for all of its weapon systems.
For hosted payloads, GAO recommended, and DOD concurred, that the department bolster and centralize collection and analysis of cost, technical, and lessons learned data.
Implementing leadership changes. DOD is planning major changes to leadership for space. It recently proposed legislation to establish a United States Space Force—initially to be housed within the Department of the Air Force—that would, according to the President's Space Policy Directive, consolidate existing military space activities and minimize duplicative efforts across DOD. GAO found in July 2016 that changes are needed to reduce fragmentation that has negatively affected space programs for many years. But open questions remain about governance as new programs get underway and whether the changes themselves may result in further fragmentation. For example, it is unclear at this time how the new Space Development Agency will mesh with organizations currently involved in testing and acquiring new space technologies.
Having the right resources and know-how. While there is increased attention on funding for space and building the Space Force, new programs can still face resource challenges. DOD has begun more than nine new space programs at a time when it is also seeking increased investments in ships, aircraft, and the nuclear triad, among other programs. Moreover, it is unclear whether DOD has a sufficient workforce to manage its new programs. GAO issued a report earlier this month that found DOD does not routinely monitor the size, mix, and location of its space acquisition workforce. Further, DOD has difficulty attracting and retaining candidates with the requisite technical expertise. GAO recommended that DOD collect and maintain data on its space acquisition workforce. DOD did not concur, but GAO maintains that DOD should have better information on such personnel, especially in light of its proposal for establishing the Space Force.
GAO also found in March 2019 that key software-intensive space programs often did not effectively engage users to understand requirements and obtain feedback. GAO recommended, and DOD concurred, that the department ensure its guidance addressing software development provides specific, required direction on the timing, frequency, and documentation of user involvement and feedback.
What GAO Recommends
Past GAO reports have recommended that DOD adopt acquisition best practices to help ensure cost and schedule goals are met. DOD has generally agreed and taken some actions to address these recommendations.
Background
Overview of the National Flood Insurance Program
In 1968, Congress created NFIP, with the passage of the National Flood Insurance Act, to help reduce escalating costs of providing federal flood assistance to repair damaged homes and businesses. According to FEMA, NFIP was designed to address the policy objectives of identifying flood hazards, offering affordable insurance premiums to encourage program participation, and promoting community-based floodplain management. To meet these policy objectives, NFIP has four key elements: identifying and mapping flood hazards, floodplain management, flood insurance, and incentivizing flood-risk reduction through grants and premium discounts. NFIP enables property owners in participating communities to purchase flood insurance and, in exchange, the community agrees to adopt and enforce NFIP minimum floodplain management regulations and applicable building construction standards to help reduce future flood losses. A participating community’s floodplain management regulations must meet or exceed NFIP’s minimum regulatory requirements. Insurance offered through NFIP includes different coverage levels and premium rates, which are determined by factors that include property characteristics, location, and statutory provisions. NFIP coverage limits vary by program (Regular or Emergency) and building occupancy (for example, residential or nonresidential). In NFIP’s Regular Program, the maximum coverage limit for one-to-four family residential policies is $250,000 for buildings and $100,000 for contents. For nonresidential or multifamily policies, the maximum coverage limit is $500,000 per building and $500,000 for the building owner’s contents. Separate coverage is available for contents owned by tenants.
NFIP also offers Increased Cost of Compliance coverage for most policies, which provides up to $30,000 to help cover the cost of mitigation measures following a flood loss when a property is declared to be substantially or repetitively damaged.
Flood Hazard Mapping
Through NFIP, FEMA maps flood hazard zones on a Flood Insurance Rate Map, which participating NFIP communities must adopt. According to FEMA, floodplain management standards are designed to prevent new development from increasing the flood threat and to protect new and existing buildings from anticipated flooding. FEMA has a division responsible for flood mapping activities and policy and guidance, but stakeholders from various levels of government and the private sector participate in the mapping process, as appropriate. A community’s Flood Insurance Rate Map serves several purposes. It provides the basis for setting insurance premium rates and identifying properties whose owners are required to purchase flood insurance. Since the 1970s, homeowners with federally backed mortgages or mortgages held by federally regulated lenders on property in a special flood hazard area have been required to purchase flood insurance. Others may purchase flood insurance voluntarily if they live in a participating community. The map also provides the basis for establishing minimum floodplain management standards that communities must adopt and enforce as part of their NFIP participation. As of May 2020, 22,487 communities across the United States and its territories voluntarily participated in NFIP by adopting and agreeing to enforce flood-related building codes and floodplain management regulations.
Community-Level Flood Hazard Mitigation
FEMA supports a variety of community-level flood mitigation activities that are designed to reduce flood risk (and thus NFIP’s financial exposure).
These activities, which are implemented at the state and local levels, include hazard mitigation planning; adoption and enforcement of floodplain management regulations and building codes; and use of hazard control structures such as levees, dams, and floodwalls or natural protective features such as wetlands and dunes. FEMA provides community-level mitigation funding through its HMA grant programs. In addition, FEMA’s Community Rating System is a voluntary incentive program that recognizes and encourages community floodplain management activities that exceed the minimum NFIP requirements. Flood insurance premium rates are discounted to reflect the reduced flood risk resulting from community actions that meet the three goals of reducing flood damage to insurable property, strengthening and supporting the insurance aspects of NFIP, and encouraging a comprehensive approach to floodplain management.
Property-Level Flood Hazard Mitigation
At the individual property level, mitigation options include property acquisition (or "buyouts"), in which a building is either demolished and the land preserved as green space or relocated to an area of low flood risk; elevation; and floodproofing. Acquisition and demolition (acquisition) is one of the primary methods by which states or localities use FEMA funding to mitigate flood risk. Through this process, a local or state government purchases land and structures that flooded or are at risk from future floods from willing sellers and demolishes the structures. The community restricts future development on the land, which is maintained as open space in perpetuity to restore and conserve the natural floodplain functions. According to FEMA officials, an advantage of property acquisition is that it offers a permanent solution to flood risks, whereas other mitigation methods make properties safer from floods but not immune. Property acquisition and demolition is a voluntary process, and property owners are paid fair market value for their land and structures.
Acquisition is typically done on a community-wide scale, purchasing several or all properties in an at-risk neighborhood. Acquisition projects typically require building consensus from property owners and sustained communication and collaboration between residents and the government executing the project. Acquisition and relocation (relocation) refers to purchasing a structure and moving it to another location instead of demolishing it. Through this process, state or local governments use FEMA funding to help purchase land from willing sellers and assist the property owners with relocating the structure. The structure must be sound and feasible to move outside of flood-prone areas. Relocation is a voluntary process and property owners are paid fair market value for their land. Elevation involves raising a structure so that the lowest occupied floor is at or above the area’s base flood elevation. Structure elevation may be achieved through a variety of methods, including elevating on continuous foundation walls; elevating on open foundations, such as piles, piers, or columns; and elevating on fill. Structures proposed for elevation must be structurally sound and capable of being elevated safely. Further, elevation projects must be designed and adequately anchored to prevent flotation, collapse, and lateral movement of the structure from flooding, waves, and wind. Floodproofing falls into two categories: dry floodproofing and wet floodproofing. Dry floodproofing involves sealing a structure to prevent floodwater from entering. Examples of dry floodproofing measures include using waterproof coatings or coverings to make walls impermeable to water, installing waterproof shields, and installing devices that prevent sewer and drain backup. 
Dry floodproofing is appropriate only where floodwaters do not exceed three feet, the speed of flood waters is low, and the duration of flooding is relatively short because walls and floors may collapse from the pressure of higher water levels. Wet floodproofing involves changing a structure to allow floodwaters to enter and exit with minimal damage. Wet floodproofing is used in parts of a structure that are not used as living space, such as a crawlspace, basement, or garage. Examples of wet floodproofing measures include installing flood openings in the foundation and enclosure walls below the base flood elevation, using flood-resistant building materials and furnishings located below the base flood elevation, and either elevating or floodproofing all utility systems and associated equipment to protect them from damage.
FEMA Mitigation Grant Programs
FEMA administers three HMA grant programs that can be used to fund flood mitigation projects: the Hazard Mitigation Grant Program (HMGP), Pre-Disaster Mitigation (PDM), and Flood Mitigation Assistance (FMA). Eligible HMA applicants include states, territories, and federally recognized tribal governments. Local communities cannot apply directly to FEMA for HMA funding but instead must collaborate as sub-applicants with their state, territory, or tribal government and then receive funding through that entity. Certain nonprofit organizations can act as sub-applicants but only under HMGP. Generally, individuals may not apply for HMA funding, but they may benefit from a community application. Applicants to all three programs must have FEMA-approved hazard mitigation plans. FEMA evaluates HMA applications based on technical feasibility and cost-effectiveness, among other factors. In fiscal year 2019, HMA awarded $859 million in funding. Eligible activities differ for the three programs but must be consistent with FEMA’s National Mitigation Framework.
The Hazard Mitigation Grant Program helps communities implement hazard mitigation measures following a presidential major disaster declaration to improve community resilience to future disasters. HMGP provides funding to protect public or private property through various mitigation measures based on state or tribal priorities. Mitigation project examples include acquisition, relocation, retrofitting structures to minimize damages from various natural hazards, and elevating flood prone structures. HMGP recipients (states, territories, and federally recognized tribal governments) are primarily responsible for prioritizing, selecting, and administering state and local hazard mitigation projects. According to FEMA guidance, although individuals may not apply directly to the state for assistance, local governments engage interested property owners during the application process. A formula based on the size of the presidential disaster declaration determines the amount of money available to HMGP. Pre-Disaster Mitigation seeks to reduce overall risk to the population and structures from future natural hazard events, while also reducing reliance on federal funding in future disasters. PDM grants fund mitigation plans and eligible projects that reduce or eliminate long-term risk to people and property from natural disasters, such as property acquisition, property elevation, earthquake hardening, and construction of tornado and high-wind safe rooms. Generally, local governments (i.e., sub-applicants) submit mitigation planning and project applications to their state, territory, or federally recognized tribal government (i.e., applicants) for review and prioritization. The state, territory, or federally recognized tribal government then submits one PDM grant application to FEMA for consideration. Annual Congressional appropriations fund these grants, and FEMA awards them on a nationally competitive basis. 
In fiscal year 2019, Congress appropriated $250 million to PDM, which was the program’s final year of funding. In 2018, Congress passed the Disaster Recovery Reform Act, which amended PDM; FEMA calls the amended program the Building Resilient Infrastructure and Communities program. According to FEMA officials, this program is replacing PDM in fiscal year 2020 and will be funded through the Disaster Relief Fund as a 6 percent set-aside from the estimated total amount of grants for each major disaster declaration. FEMA has solicited public input on the program and said it expects to release a notice of funding opportunity in summer 2020. Flood Mitigation Assistance is designed to reduce or eliminate flood insurance claims by funding cost-effective flood mitigation projects that reduce or eliminate long-term risk of flood damage to structures insured under NFIP. Typical projects may include acquisition of RL properties, elevation of buildings, and neighborhood-scale flood defense investment. Generally, local communities will sponsor applications on behalf of homeowners and then submit the applications to their state. A state or federally recognized tribal government must submit the grant applications to FEMA. Annual Congressional appropriations fund FMA grants, and FEMA awards them on a nationally competitive basis. FMA appropriations have remained relatively stable at about $175 million for fiscal years 2016 through 2019.
Repetitive Loss Properties
RL properties present a financial challenge for NFIP. FEMA has three definitions for such properties that vary slightly to meet the specific needs of different programs: NFIP Repetitive Loss refers to an NFIP-insured structure that has incurred flood-related damage on two occasions during a 10-year period, each resulting in at least a $1,000 claim payment.
FEMA uses the NFIP RL definition for insurance purposes related to the Community Rating System, for local hazard mitigation plans, and for eligibility determinations for preferred risk policies and individual assistance. FMA Repetitive Loss refers to an NFIP-insured structure that (a) has incurred flood-related damage on two occasions in which the cost of repair, on average, equaled or exceeded 25 percent of the value of the structure at the time of each such flood event; and (b) at the time of the second incidence of flood-related damage, the flood insurance policy contained Increased Cost of Compliance coverage. FEMA uses this definition for FMA purposes, as these properties are eligible for the largest federal cost share for mitigation, up to 90 percent. This is also the same definition NFIP uses to approve an Increased Cost of Compliance payment. Severe Repetitive Loss refers to an NFIP-insured structure that has incurred flood-related damage for which (a) four or more separate claims have been paid that exceeded $5,000 each and cumulatively exceeded $20,000; or (b) at least two separate claim payments have been made under such coverage, with the cumulative amount of such claims exceeding the fair market value of the insured structure. FEMA has two severe RL definitions for mitigation and insurance, which are similar except that the insurance definition includes only residential structures, while the mitigation definition includes all structures. FEMA uses the severe RL definition for grant eligibility and cost share, the Community Rating System, and insurance rate setting.
FEMA Grant Programs Are Key Funding Sources for Property Acquisition
FEMA Funds Acquisitions through Three Grant Programs That Have Varying Characteristics and Funding Levels
HMGP is the largest of FEMA’s three HMA programs and, unlike the others, it is based on the amount of disaster assistance a state or territory receives following a presidential disaster declaration (see table 1).
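The two-pronged severe repetitive loss test described earlier can be expressed as a short rule check over a structure's paid-claim history. The sketch below is purely illustrative: the function name and inputs are invented, it is not FEMA's actual eligibility logic, and it ignores the residential-only distinction between the insurance and mitigation variants of the definition.

```python
# Hypothetical sketch of the severe repetitive loss (SRL) criteria described
# above, for a single NFIP-insured structure. Claim amounts and fair market
# value are illustrative inputs, not real program data.

def is_severe_repetitive_loss(claims, fair_market_value):
    """claims: list of paid flood claim amounts (dollars) for one structure."""
    # Criterion (a): four or more separate claims exceeding $5,000 each,
    # cumulatively exceeding $20,000.
    large_claims = [c for c in claims if c > 5_000]
    if len(large_claims) >= 4 and sum(large_claims) > 20_000:
        return True
    # Criterion (b): at least two claim payments whose cumulative amount
    # exceeds the structure's fair market value.
    if len(claims) >= 2 and sum(claims) > fair_market_value:
        return True
    return False

# Four $6,000 claims satisfy criterion (a).
print(is_severe_repetitive_loss([6_000] * 4, 150_000))     # True
# Two modest claims on a higher-value structure satisfy neither criterion.
print(is_severe_repetitive_loss([8_000, 9_000], 250_000))  # False
```

Either criterion alone is sufficient, so a structure with only two claims can still qualify if those claims cumulatively exceed its fair market value.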
PDM and FMA are smaller grant programs that receive annual appropriations and are not directly tied to an immediately preceding disaster. Because these programs do not require an immediate disaster declaration, FEMA considers them pre-disaster programs, as their intent is to mitigate potential damage before disasters occur. HMGP and PDM can be used for projects that mitigate the risk of many hazards, including flood, wind, fire, earthquake, and drought, but FMA can only be used to mitigate the risk of flood (see table 1). Furthermore, FMA funds can only be used to mitigate properties that are insured by NFIP, but HMGP and PDM funds can be used to mitigate properties without NFIP coverage. Properties mitigated in a special flood hazard area, where the structure remains on the parcel, must maintain a flood insurance policy after project completion. HMA grants fund a variety of methods to mitigate the flood risk of properties, including acquisition, elevation, relocation, and floodproofing. In most cases, HMA grants cover up to 75 percent of the project cost, and the grantee generally must contribute the remainder using nonfederal funds (although there are some exceptions, discussed below). However, PDM will cover up to 90 percent of project costs for communities that meet FEMA’s definition of small and impoverished. Moreover, FMA will cover up to 90 percent for projects that mitigate RL properties and up to 100 percent for severe RL properties. Funding levels for the three programs have varied over time because they have depended on disaster declarations and annual appropriations (see fig. 1). HMGP is the largest of the three programs—adjusted for inflation, annual HMGP grants have reached $2.9 billion, while PDM and FMA have never exceeded $300 million. 
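The cost-share tiers described above reduce to a small calculation. The sketch below uses only the percentages reported in this section; the function and its flags are hypothetical conveniences, not FEMA terminology, and real awards are subject to additional program rules.

```python
# Illustrative federal/nonfederal split for an HMA-funded project, using the
# cost-share percentages described above: 75 percent in most cases, up to 90
# percent for PDM small and impoverished communities, up to 90 percent for
# FMA repetitive loss (RL) properties, and up to 100 percent for FMA severe
# RL properties. Function names and flags are invented for this sketch.

def federal_share_rate(program, small_impoverished=False,
                       repetitive_loss=False, severe_repetitive_loss=False):
    if program == "FMA" and severe_repetitive_loss:
        return 1.00
    if program == "FMA" and repetitive_loss:
        return 0.90
    if program == "PDM" and small_impoverished:
        return 0.90
    return 0.75  # general HMA cost share

def split_project_cost(total_cost, rate):
    federal = total_cost * rate
    return federal, total_cost - federal

# A $200,000 acquisition under HMGP: $150,000 federal, $50,000 nonfederal.
print(split_project_cost(200_000, federal_share_rate("HMGP")))
# (150000.0, 50000.0)
```

Under these tiers, the nonfederal share of an FMA project on a severe RL property would be zero, which is one reason such properties receive priority under FMA.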
According to FEMA officials, the estimated annual funding for the Building Resilient Infrastructure and Communities program, the successor to PDM, will average $300 million to $500 million, as it will be funded by a 6 percent set-aside of annual estimated disaster grant expenditures. HMA funding also varies by state. Louisiana has obligated the most funding. After adjusting for inflation, it has obligated more than $3.1 billion from all three programs since HMGP was created in 1989, followed by California ($2.0 billion), Texas ($1.8 billion), New York ($1.6 billion), and Florida ($1.5 billion), while the bottom 18 states and territories each obligated less than $50 million (see fig. 2). Because HMGP is the largest program and is tied to presidential declarations, these totals reflect, in part, the extent to which states and territories have experienced natural disasters in this time period.
States and Localities Can Use Other Federal Programs to Fund Cost Share Requirements for Acquisitions
Typically, recipients of federal mitigation grants must use nonfederal funds to meet cost share requirements because federal law prohibits the use of more than one source of federal disaster recovery funding for the same purpose. However, according to FEMA, some federal programs are exempt from these requirements due to authorizing statutes and therefore may be used in concert with HMA funds. Department of Housing and Urban Development’s Community Development Block Grant (CDBG) program. The Department of Housing and Urban Development awards CDBG funds to state and local governments to support a variety of community and economic development needs. According to FEMA’s HMA Cost Sharing Guide, HMA applicants may use several categories of CDBG funds as a source of project cost share, as long as the project meets Department of Housing and Urban Development rules. CDBG Disaster Recovery funds are the most frequently used form of HMGP cost share from a federal agency, according to FEMA.
FEMA Increased Cost of Compliance coverage. NFIP offers Increased Cost of Compliance coverage, which provides up to $30,000 for policyholders to fund mitigation efforts on their property if they experience substantial damage or if their structure is an RL property. Between 1997 and 2014, the vast majority (99 percent) of Increased Cost of Compliance claims met the substantially damaged property definition, according to a 2017 report from the University of Pennsylvania. Unlike CDBG, which is awarded to states and local governments, Increased Cost of Compliance is awarded directly to individuals. According to FEMA, it is eligible as an HMA nonfederal cost share because it is considered a direct contract between the insurer and policyholder. FEMA allows recipients to assign their funds to the community as part of a collective mitigation project, and the community is then obligated to provide HMA funding to any property owner who contributed Increased Cost of Compliance dollars toward the nonfederal cost share. As of September 2019, FEMA had closed more than 38,000 Increased Cost of Compliance claims with dates of loss since 1997, totaling more than $877 million. Small Business Administration disaster loans. Small Business Administration disaster loans provide up to $200,000 for repairing or replacing a primary residence and $40,000 for repairing or replacing personal items that have been affected by a disaster. The interest rate cannot exceed 4 percent for applicants unable to access credit elsewhere, and cannot exceed 8 percent for all others. Secondary or vacation homes are not eligible, but qualified rental properties may be eligible under the Small Business Administration’s business disaster loan program, which offers loans of up to $2 million. According to FEMA guidance, these loans can serve as a source of cost share if HMA grants are disbursed early enough; however, the differing award timelines often make these funding sources incompatible. 
Further, disaster loans may not be eligible in conjunction with HMA funds due to duplication of benefits, but general-purpose Small Business Administration loans are not subject to this restriction, according to FEMA.
Other Federal and Nonfederal Programs Fund Acquisitions
In addition to FEMA’s three HMA programs, other federal, state, and local programs have helped acquire properties. Community Development Block Grants. In addition to its use as a cost-share complement to HMA grants, states and communities can use CDBG Disaster Recovery funding as a stand-alone source of property acquisition funds, according to the Department of Housing and Urban Development. Availability of CDBG Disaster Recovery funds is subject to supplemental appropriations following a presidential disaster declaration and must be used in response to that specific disaster. CDBG Disaster Recovery funds are disbursed to state and local governments and not to individuals directly. However, the governmental recipient can award CDBG Disaster Recovery funds to private citizens, nonprofits, economic development organizations, businesses, and other state agencies. The Bipartisan Budget Act of 2018 appropriated funding for CDBG, of which the Department of Housing and Urban Development allocated almost $6.9 billion for CDBG mitigation funds for the first time, as a result of the 2015 to 2017 disasters. Unlike CDBG Disaster Recovery funds, which the recipient must use in response to a specific disaster, recipients may use CDBG Mitigation funds to mitigate risks from future disasters. U.S. Army Corps of Engineers’ National Nonstructural Committee. The Army Corps of Engineers (Corps) conducts a range of mitigation measures through the National Nonstructural Committee, including acquisitions, elevations, relocations, and floodplain mapping. Nonstructural refers to measures that attempt to mitigate the consequences of floods, as opposed to structural measures intended to prevent floods from occurring.
According to the Corps, except for limited research funding, it does not offer grants for flood risk management projects, and large projects generally require specific authorization from Congress. However, the Corps’ Continuing Authority Program allows it to execute smaller projects at its discretion. For example, for one of the programs, the federal government funds 65 percent of a project’s cost, and the project sponsor must provide all land, easement, rights-of-way, relocations, and disposal areas required for the project. The sponsor’s cost share includes credit for provision of the requirements above and pre-approved work-in-kind, but at least five percent must be provided in cash. Department of Agriculture’s Natural Resources Conservation Service Emergency Watershed Protection Program. The Federal Agriculture Improvement and Reform Act of 1996 enables the Emergency Watershed Protection Program to purchase floodplain easements on residential and agricultural land for flood mitigation purposes and to return the land to its natural state. For agricultural and residential land, this program pays up to the entire easement value and also funds property demolition or relocation, according to the Department of Agriculture. Land generally must have flooded in the past year or twice within the previous 10 years to be considered eligible. State and local acquisition programs. While state and local governments are active participants in federal acquisition projects, some have also developed their own acquisition programs. These programs vary on the extent to which they rely on federal funds, if at all. For example: The Harris County Flood Control District, a special purpose district, in Texas acquired about 3,100 properties between 1985 and 2017, according to a 2018 report from Rice University, using a combination of FEMA grants, Corps funds, and local dollars. 
Charlotte-Mecklenburg Storm Water Services, a joint city-county utility in North Carolina, has acquired more than 400 homes since 1999. Initially, it primarily used federal funds, but now it uses almost solely stormwater fees and other local revenue to fund acquisitions. The utility’s Quick Buys program allows it to acquire properties soon after a flood, before homeowners invest in repairs, whereas federal acquisitions often occur after property owners have begun rebuilding, according to FEMA officials.

New Jersey, through its Blue Acres program, plans to acquire up to 1,300 properties damaged by Superstorm Sandy. The program has used state funds, including $36 million in bonds, as well as more than $300 million in federal funding received from multiple agencies.

FEMA Has Funded the Mitigation of Many Properties, but the Number of Repetitive Loss Properties Continues to Rise

Most Flood Mitigation Spending Is Used for Property Acquisitions after Flooding Occurs

Since 1989, the primary means by which FEMA has mitigated flood risk at the property level has been by funding property acquisitions. Acquisitions accounted for about 75 percent of FEMA’s $5.4 billion in flood mitigation spending, adjusted for inflation, from 1989 to 2018 (see fig. 3). Most of the remaining spending was used to elevate properties, with smaller amounts used to floodproof and relocate properties. The average federal cost-per-property was $136,000 for acquisitions and $107,000 for elevations, according to 2008-2014 FEMA data. As seen in figure 4, FEMA-funded property acquisitions have fluctuated over time but have generally increased since FEMA’s HMA programs began. For example, from 1989 through 1992—the first four years of HMGP funding and prior to the creation of PDM and FMA—less than $8 million, adjusted for inflation, was obligated for property acquisitions each year, resulting in fewer than 200 acquisitions each year (see fig. 4).
The highest acquisition funding generally was associated with years that had significant flood events, such as Superstorm Sandy (2012) and Hurricanes Harvey, Irma, and Maria (2017). From fiscal years 1989-2018, approximately $3.3 billion of property acquisition funding, adjusted for inflation, occurred through HMGP, resulting in the acquisition of 41,458 properties (see fig. 5). HMGP represented about 90 percent of all property acquisitions and 82 percent of all acquisition funding, with PDM and FMA representing the remainder. As a result, most FEMA-funded acquisitions occurred following flood events. Most of the funding, adjusted for inflation, for HMGP’s and PDM’s flood mitigation projects has been for property acquisition (83 percent and 89 percent of total funds, respectively), while most FMA funding has been for elevation (49 percent).

Despite Acquisition and Other Mitigation, Nonmitigated Repetitive Loss Properties Have Increased in Number

Although FEMA mitigated more than 57,000 properties for flood risk from 1989 to 2018, including more than 46,000 through acquisition, the number of nonmitigated RL properties increased from 2009 to 2018. Figure 6 shows that this growth in the number of RL properties has outpaced efforts to mitigate their flood risk. From 2009 through 2018, FEMA’s inventory of new RL properties grew by 64,101. During this period, FEMA mitigated 4,436 RL properties through its three HMA programs, and an additional 15,047 were mitigated through other federal or state programs. As a result, the number of nonmitigated RL properties increased by 44,618—more than double the number of RL properties that were mitigated in that time period.

Some States Have Mitigated More Properties than Others Relative to Their Population of Repetitive Loss Properties

States varied in the extent to which they mitigated high-risk properties, including RL properties, between 1989 and 2018.
While FEMA does not require a property to be an RL property to receive flood mitigation funding, the number of properties mitigated by a state relative to its population of RL properties provides context to its flood mitigation progress. For example, some states with large numbers of RL properties, such as Texas, Louisiana, Florida, and New York, mitigated few properties relative to their numbers of RL properties (see table 2). Other states, such as Missouri and North Carolina, have far fewer RL properties but have mitigated more properties relative to their numbers of RL properties. States also varied in their methods for flood mitigation (see table 2). For example, while property acquisition accounted for 81 percent of mitigated properties nationwide, it represented closer to half of mitigated properties in Virginia, New Jersey, and Florida and only 19 percent in Louisiana. According to some FEMA and local officials, high property values in some regions can make acquisitions cost prohibitive and other mitigation methods such as elevation more attractive because they do not incur the cost of purchasing the land. Many other factors could affect mitigation, including homeowners’ preferences. Further, the voluntary nature of FEMA’s HMA programs may limit states’ ability to acquire properties with known flood risk. According to FEMA, acquisition permanently addresses flood risk because, unlike elevation or floodproofing, it moves individuals and structures away from flood risk rather than mitigating a structure in place. In a subsequent report, we plan to explore in more detail the factors, including homeowner demand for acquisition, that have affected the extent to which states have used acquisition to mitigate flood risk. 
While Property Acquisitions Help Reduce Flood Risk for Properties, Insufficient Premium Revenue Perpetuates Fiscal Exposure

NFIP represents a fiscal exposure to the federal government because its premium rates have not kept pace with the flood risk of the properties it insures. Addressing this imbalance would mean reducing the flood risk of the insured properties, increasing premium revenue, or some combination of both. Despite FEMA’s efforts to mitigate its insured properties’ flood risk, premium rates for many properties do not reflect the full estimated risk of loss. As we have reported previously, mitigation alone will not be sufficient to resolve NFIP’s financial challenges; structural reforms to the program’s premium rates will also be necessary.

Recent Catastrophic Flood Events and Projections Indicate Potential Increases in Flood Risk

NFIP’s total annual flood claim payments have grown in recent years, potentially indicating an increase in flood risk. For example, the eight years of the highest annual NFIP claims have all occurred since 2004, with particularly catastrophic flood events accounting for much of these claims:

In 2005, claims reached $17.8 billion ($23.3 billion, adjusted for inflation), largely due to Hurricanes Katrina, Rita, and Wilma.

In 2012, claims reached $9.6 billion ($10.7 billion, adjusted for inflation), largely due to Superstorm Sandy.

In 2017, claims reached $10.5 billion ($11.0 billion, adjusted for inflation), largely due to Hurricanes Harvey, Irma, and Maria.

These severe weather events appear to be contributing to the long-term increases in claims paid by NFIP, as would be expected with infrequent but severe events. As seen in figure 7, the amount of claims paid per policy, adjusted for inflation, does not show a steady increase in claims but rather substantial spikes in certain years associated with catastrophic flooding events.
RL properties have contributed heavily to NFIP’s claims and, as noted earlier, the number of RL properties continues to rise despite FEMA’s mitigation efforts. Of the $69.7 billion in claims NFIP paid out from 1978 to 2019, $22.2 billion (32 percent) was for flood damage sustained by RL properties. The frequency and intensity of extreme weather events, such as floods, are expected to increase in coming years due to climate change, according to the U.S. Global Change Research Program and the National Academies of Sciences. Further, numerous studies have concluded that climate change poses risks to many environmental and economic systems and a significant financial risk to the federal government. For example, according to the November 2018 National Climate Assessment report, the continued increase in the frequency and extent of high-tide flooding due to sea level rise threatens America’s trillion-dollar coastal property market. According to the National Oceanic and Atmospheric Administration, minor flood events (sometimes referred to as nuisance flooding) also are projected to become more frequent and widespread due to climate change.

Several Categories of Premium Rates Do Not Fully Reflect Flood Risk

While the exact extent to which flood risk has changed, and will continue to change, is uncertain, NFIP’s fiscal exposure will persist as long as premium rates do not keep pace with flood risk. As we have been reporting since 1983, NFIP’s premium rates do not reflect the full risk of loss because of various legislative requirements and FEMA practices. To set premium rates, FEMA considers several factors, including location in flood zones, elevation of the property relative to the community’s base flood elevation, and characteristics of the property, such as building type, number of floors, presence of a basement, and year built relative to the year of the community’s original flood map.
Most NFIP policies have premium rates that are deemed by FEMA to be full-risk rates, which FEMA defines as sufficient to pay anticipated losses and expenses. However, FEMA’s overall rate structure may not reflect the full long-term estimated risk of flooding, as discussed below. Subsidized rates. NFIP offers some policyholders subsidized rates—that is, rates that intentionally do not reflect the full risk of flooding. These premium rates are intended to encourage the widespread purchase of flood insurance by property owners and encourage floodplain management by communities. Subsidized rates generally are offered to properties in high-risk locations (special flood hazard areas) that were built before flood maps were created. FEMA staff said they have begun increasing rates for certain subsidized properties as prescribed under the Biggert-Waters Flood Insurance Reform Act of 2012 and the Homeowner Flood Insurance Affordability Act of 2014. In addition, the percentage of subsidized policies is decreasing. According to FEMA data, the percentage of NFIP policies receiving subsidized rates dropped from about 22 percent in July 2013 to about 17 percent in June 2019. In 2013, we recommended that FEMA obtain elevation information to determine full-risk rates for subsidized properties. As of January 2020, FEMA had not fully implemented this recommendation but was in the process of doing so. For example, FEMA had requested proposals from third-party vendors for obtaining the elevation information and was reviewing these proposals. This information remains necessary for FEMA to determine the adequacy of its premium rates and the costs of any subsidization. It will also allow Congress and the public to understand the amount of unfunded subsidization within the program and the federal fiscal exposure it creates. Grandfathered rates. 
FEMA allows some property owners whose properties are remapped into higher-risk flood zones to continue to pay the premium rate from the lower-risk zone. FEMA data show that about 9 percent of NFIP policies were receiving a grandfathered rate as of June 2019. In 2008, we recommended that FEMA collect data to analyze the effect of grandfathered policies on NFIP’s fiscal exposure. As of February 2020, FEMA officials said they had not fully implemented this recommendation but were in the process of doing so. The officials told us they had finished collecting data on grandfathered policies and that they planned to analyze it as they completed efforts to update their premium rate setting approach. Collection and analysis of data on grandfathered policies will help FEMA understand and communicate the extent to which these policies are contributing to NFIP’s fiscal exposure. Rates designated full-risk. As we reported in 2008 and 2016, it is unclear whether premiums FEMA considers to be full-risk actually reflect the full long-term estimated risk of loss. For example, NFIP full-risk premium rates do not fully reflect the risk of catastrophic losses or the expenses associated with managing them. Private insurers typically manage catastrophic risk using capital, reinsurance, and other instruments, such as catastrophe bonds, and include the associated expenses in premium rates. By contrast, FEMA has traditionally managed catastrophic risk by relying on its authority to borrow from Treasury. In January 2017, FEMA began purchasing reinsurance to transfer some of its flood risk exposure to the private reinsurance market. However, FEMA has not accounted for these expenses in setting its NFIP premium rates. Reinsurance could be beneficial because it would allow FEMA to recognize some of its flood risk and the associated costs up front through the premiums it must pay to the reinsurers rather than after the fact in borrowing from Treasury. 
However, because reinsurers must charge FEMA premiums to compensate for the risk they assume, reinsurance’s primary benefit would be to manage risk rather than to reduce NFIP’s expected long-term fiscal exposure.

Insufficient Premium Revenue Contributes to NFIP’s Fiscal Exposure

Congress has directed FEMA to provide discounted premium rates to promote affordability for policyholders but did not provide FEMA with dedicated funds to pay for these subsidies. As a result, premium revenue has been insufficient to pay claims in some years, requiring borrowing from Treasury to make up for the shortfall. While Congress passed reforms to NFIP in 1994 and 2004, neither set of actions sufficiently addressed program revenue. In 2005, Hurricanes Katrina, Rita, and Wilma hit the Gulf Coast and resulted in NFIP borrowing nearly $17 billion from Treasury to pay claims (see fig. 8). In July 2012, Congress passed the Biggert-Waters Flood Insurance Reform Act, which contained significant reforms to NFIP’s premium rates. But a few months later, Superstorm Sandy occurred, pushing NFIP’s debt to $24 billion. Following policyholders’ concerns about the rate increases authorized by the 2012 act, Congress slowed the pace of many of these rate increases in 2014 with the Homeowner Flood Insurance Affordability Act. In the fall of 2017, Hurricanes Harvey, Irma, and Maria occurred, prompting additional borrowing from Treasury and causing NFIP to reach its borrowing limit. In response, Congress canceled $16 billion of NFIP’s debt in October 2017, which allowed NFIP to pay claims from these storms. Since September 2017, NFIP has been operating under a series of short-term authorizations, the most recent of which expires in September 2020. As of March 2020, NFIP’s debt remained at $20.5 billion. To improve NFIP’s solvency and enhance the nation’s resilience to flood risk, we suggested in 2017 that Congress could make comprehensive reforms that include actions in six areas.
We reported that it was unlikely that FEMA would be able to repay its debt and that addressing it would require Congress to either appropriate funds or eliminate the requirement that FEMA repay the accumulated debt. However, eliminating the debt without addressing the underlying cause of the debt—insufficient premium rates—would leave the federal taxpayer exposed to a program requiring repeated borrowing. To address NFIP’s fiscal exposure, there are two general approaches: decrease costs or increase revenue. Decreasing costs to the program in the form of claims involves mitigating insured properties’ flood risks. Mitigation can be very costly, but there will be some properties for which the cost to mitigate will be outweighed by the benefit of reduced flood risk and, ultimately, fiscal exposure. Mitigation may be a cost-effective option for those properties for which full-risk rates would be cost-prohibitive. Increasing revenue would require reforms to NFIP’s premium rates. FEMA has begun increasing rates on subsidized properties. But, as we suggested in 2017, Congress could remove existing legislative barriers to FEMA’s premium rate revisions. Members of Congress and others have raised concerns about such reforms because raising premium rates may make coverage unaffordable for some policyholders. To address these concerns, we suggested that all policies include full-risk premium rates, with targeted, means-based, appropriated subsidies for some policies. This would improve the program’s solvency while also addressing affordability concerns. Assigning full-risk premium rates to all policies would remove subsidies from those who do not need them, helping improve solvency. It would also more accurately signal the true flood risk to property owners and enhance resilience by incentivizing mitigation measures, such as acquisition. 
Means-based subsidies would ensure that property owners who needed help would get it, and an explicit appropriation for the subsidies would make their true cost transparent to taxpayers. We maintain that a comprehensive approach that includes mitigation and rate reform is needed to address NFIP’s fiscal exposure.

Concluding Observations

Because several categories of NFIP premium rates do not reflect the full risk of flood loss, FEMA has had to borrow $36.5 billion from Treasury to pay claims from several catastrophic flood events since 2005. To address this, some have suggested additional funding to mitigate RL properties. While we acknowledge that mitigation is part of the solution, we maintain that a more comprehensive approach is necessary to address the program’s fiscal exposure. We have made two recommendations to FEMA that, if implemented, could help inform Congress’ efforts to reform NFIP. In 2008, we recommended that FEMA collect information on grandfathered properties and analyze their financial effect on NFIP, and in 2013, we recommended that FEMA obtain elevation information on subsidized properties. By implementing these recommendations, FEMA would better understand NFIP’s fiscal exposure and be able to communicate this information to Congress. Further, we suggested in 2017 that Congress take a comprehensive approach to reforming NFIP. One important first step would be to implement full-risk premium rates for all policies, with appropriated means-based subsidies for some policies. Full-risk premium rates would remove subsidies from those who do not need them, helping improve solvency, and also more accurately signal the true flood risk to property owners and incentivize efforts to mitigate flood risk. Further, means-based subsidies would ensure that property owners who need help will get it, and having Congress explicitly appropriate for the subsidies would make the true cost of the subsidy transparent to taxpayers.
While this would be an important step to putting NFIP on a sustainable path, comprehensive reform of the program should also address the other issues we have identified, including mitigating the flood risk of insured properties.

Agency Comments

We provided a draft of this report to the Department of Homeland Security for its review and comment. The agency provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report addresses the Federal Emergency Management Agency’s (FEMA) National Flood Insurance Program (NFIP). Our objectives were to examine (1) funding programs available for property acquisitions, (2) FEMA’s flood mitigation efforts, and (3) factors contributing to NFIP’s fiscal exposure. To describe funding programs available for property acquisitions, we reviewed authorizing legislation, the Code of Federal Regulations, and FEMA guidance and manuals, including the Hazard Mitigation Assistance Guidance and Cost Share Guide, to identify program characteristics, eligibility requirements, and application guidelines. To identify funding for these programs, we analyzed FEMA’s project-level Hazard Mitigation Assistance (HMA) data from its Enterprise Applications Development Integration and Sustainment system, which FEMA uses to track mitigation projects funded through its HMA grant programs.
To summarize Increased Cost of Compliance coverage, which NFIP policyholders can use to fund mitigation efforts, we analyzed FEMA’s NFIP claims database to identify the number and amount of such claims. We also interviewed the FEMA officials responsible for administering these grant programs. Further, we identified other federal agency programs that can fund property acquisitions or meet cost share requirements and reviewed their authorizing legislation and their relevant federal regulations. Finally, to identify examples of state and local programs that have been used to fund property acquisitions, we reviewed academic reports, including from the University of North Carolina and Rice University.

To review FEMA’s flood mitigation efforts, we analyzed FEMA’s project-level HMA data from the “Mitigation Universe” of its Enterprise Applications Development Integration and Sustainment system. We analyzed several variables in this dataset, including number of properties, federal share obligated, mitigation type category, grant program area, grant program fiscal year, and state. For the analyses by mitigation type category, we excluded projects (79 percent of the total records) that did not include a flood mitigation activity (those with values of “Other” or “Pure Retrofit”). Of the remaining records, 98 percent were “Pure,” meaning all properties within each project were of a single mitigation method type (acquisition, elevation, floodproof, or relocation). The remaining 2 percent were “Mixed,” indicating a project contained at least one acquisition and at least one elevation but could also contain other mitigation methods. For analyses by grant program area, we treated projects funded through the Severe Repetitive Loss and Repetitive Flood Claims grant programs as being part of the Flood Mitigation Assistance program and projects funded through the Legislative Pre-Disaster Mitigation program as being part of the Pre-Disaster Mitigation program.
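The mitigation-type filtering described above can be sketched as follows. The field names and sample records here are hypothetical placeholders, not FEMA's actual schema or data; the sketch only illustrates excluding records with no flood mitigation activity.

```python
# Hypothetical sketch of the mitigation-type filtering described above.
# Field names and sample records are illustrative, not FEMA's actual data.
projects = [
    {"mitigation_type": "Pure", "properties": 4},
    {"mitigation_type": "Mixed", "properties": 7},
    {"mitigation_type": "Other", "properties": 2},
    {"mitigation_type": "Pure Retrofit", "properties": 3},
    {"mitigation_type": "Pure", "properties": 1},
]

# Exclude projects with no flood mitigation activity
# (records with values of "Other" or "Pure Retrofit")
EXCLUDED = {"Other", "Pure Retrofit"}
flood_projects = [p for p in projects if p["mitigation_type"] not in EXCLUDED]

print(len(flood_projects))                           # projects retained: 3
print(sum(p["properties"] for p in flood_projects))  # properties retained: 12
```

In this toy sample, three of the five records remain after filtering, covering 12 properties.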
For data on the number of flood mitigated properties, we used the final number of properties mitigated by a project. For data on funding, we used the federal share of the project’s obligated funding. To analyze mitigated and nonmitigated repetitive loss (RL) properties, we summarized FEMA’s RL property mitigation report, which tracked the cumulative number of RL properties by year from June 2009 through June 2018. To describe the number of RL properties by state, we analyzed FEMA’s list of RL properties as of August 31, 2019, which included every property that at any point FEMA had designated as an RL property under any of its three definitions. The list included properties that had since been mitigated, as well as those that are no longer insured by NFIP. To examine factors contributing to NFIP’s fiscal exposure, we analyzed FEMA’s claims dataset as of September 30, 2019. This dataset includes the more than 2 million claims paid to NFIP policyholders since the beginning of the program. We excluded records whose status was “open” or “closed without payment.” Further, we excluded records whose year of loss was before 1978 because FEMA officials told us that that was the first year they considered their claims data to be reliable and complete. To identify factors that contribute to NFIP’s fiscal exposure and illustrate how this fiscal exposure has materialized and changed over time, we reviewed several of our previous reports and the Department of the Treasury’s statements of public debt. Finally, to summarize how flood risk could change in the future, we reviewed our previous reports on climate change. In general, we adjusted for inflation any dollar figures that we compared or aggregated across multiple years and indicated this accordingly. To do this, we used the Bureau of Labor Statistics’ Consumer Price Index for All Urban Consumers. 
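As a simple illustration of the CPI-based adjustment described above, a nominal dollar amount can be scaled by the ratio of annual CPI-U index values. The index values below are approximate placeholders used only for illustration, and the result will differ slightly depending on the base year and exact index values used.

```python
# Illustrative CPI-based inflation adjustment, as described above.
# The CPI-U index values are approximate placeholders, not official BLS figures.

def adjust_for_inflation(amount, from_year, to_year, cpi_by_year):
    """Scale a dollar amount by the ratio of annual CPI-U index values."""
    return amount * cpi_by_year[to_year] / cpi_by_year[from_year]

cpi = {2005: 195.3, 2019: 255.7}  # approximate CPI-U annual averages

# e.g., $17.8 billion in 2005 dollars expressed in 2019 dollars
adjusted = adjust_for_inflation(17.8, 2005, 2019, cpi)
print(round(adjusted, 1))  # roughly 23.3
```

With these placeholder index values, the $17.8 billion in 2005 claims comes out to roughly $23.3 billion, consistent with the inflation-adjusted figure cited earlier in this report.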
To assess the reliability of all of the datasets we analyzed for this report, we requested and reviewed preliminary versions of the data and accompanying data dictionaries. We used the data dictionary to identify potential variables for use in our analyses and output statistics on these variables (e.g., frequencies of values, number of blanks or zero values, minimum, maximum, and mean) to identify any potential reliability concerns such as outliers or missing values. We met with relevant FEMA officials to discuss each of the data sets to understand how FEMA collected, used, and maintained the data; the reliability and completeness of key variables; reasons for any potential discrepancies we identified; and whether our understanding of the data and approach to analyzing them were accurate and reasonable. After these meetings, we requested updated versions of the data and updated our analyses accordingly. We determined that all data elements we assessed were sufficiently appropriate and reliable for this report’s objectives.

We conducted this performance audit from January 2019 to June 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Significant Events and GAO Reports Related to the National Flood Insurance Program’s Fiscal Exposure

January 1983: We recommended that FEMA improve its rate-setting process to ensure adequate income for NFIP and suggested that Congress either limit FEMA’s borrowing for extraordinary losses or establish an emergency fund for such losses, and pay for NFIP subsidies with appropriations.
March 1994: We found that NFIP’s premium income was insufficient to meet expected future losses because of subsidized rates and suggested that Congress consider how any changes in premium rates would affect policyholder participation.

September 1994: National Flood Insurance Reform Act. Developed a mitigation assistance program and expanded the mandatory purchase requirement.

June 2004: Flood Insurance Reform Act. Authorized grant programs to mitigate properties that experienced repetitive flooding losses.

August-October 2005: Hurricanes Katrina, Rita, Wilma. Caused $17.1 billion in NFIP claims. FEMA debt to Treasury increased to $16.9 billion in fiscal year 2006.

March 2006: We added NFIP to our high-risk list.

October 2008: We recommended that FEMA collect data to analyze the effect of grandfathered policies on NFIP’s fiscal exposure.

November 2008: We identified three options for addressing the financial impact of subsidies: increasing mitigation efforts; eliminating or reducing subsidies; and targeting subsidies based on need.

June 2011: We suggested that Congress allow NFIP to charge full-risk premium rates to all property owners and provide assistance to some categories of owners to pay those premiums.

July 2012: Biggert-Waters Flood Insurance Reform Act. Required FEMA to increase rates for certain subsidized properties and grandfathered properties; create a NFIP reserve fund; and improve flood risk mapping.

October 2012: Superstorm Sandy. Caused $8.8 billion in NFIP claims. FEMA debt to Treasury increased to $24 billion in fiscal year 2013.

February 2013: We added limiting the federal government’s fiscal exposure by better managing climate change risks to our high-risk list.

July 2013: We recommended that FEMA obtain elevation information to determine full-risk rates for subsidized policyholders.

March 2014: Homeowner Flood Insurance Affordability Act.
Reinstated certain rate subsidies removed by the Biggert-Waters Flood Insurance Reform Act of 2012; established a new subsidy for properties that are newly mapped into higher-risk zones; restored grandfathered rates; and created a premium surcharge that would be deposited into the NFIP reserve fund.

October 2014: We recommended that FEMA amend NFIP minimum standards for floodplain management to encourage forward-looking construction and rebuilding efforts that reduce long-term risk and federal exposure to losses.

July 2015: We recommended that the Mitigation Framework Leadership Group establish an investment strategy to identify, prioritize, and guide federal investments in disaster resilience and hazard mitigation-related activities.

August-October 2016: Hurricane Matthew and Louisiana floods. Caused $3.1 billion in NFIP claims. FEMA debt to Treasury increased to $24.6 billion in early fiscal year 2017.

April 2017: We suggested that Congress make comprehensive reforms to NFIP that include actions in six areas: (1) addressing the debt; (2) removing legislative barriers to full-risk premium rates; (3) addressing affordability; (4) increasing consumer participation; (5) removing barriers to private-sector involvement; and (6) protecting NFIP flood resilience efforts.

August-September 2017: Hurricanes Harvey, Irma, and Maria. Caused $10 billion in NFIP claims. FEMA reached the limit of its Treasury borrowing authority of $30.4 billion.

September 2017: NFIP’s last long-term authorization ended, resulting in a string of short-term reauthorizations.

October 2017: Congress canceled $16 billion of NFIP’s debt to enable FEMA to continue paying flood claims. This reduced FEMA’s debt to Treasury to $20.5 billion.

March 2020: FEMA’s debt to Treasury remained at $20.5 billion.

September 2020: NFIP’s current short-term authorization ends.
Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Alicia Puente Cackley, (202) 512-8678 or cackleya@gao.gov

Staff Acknowledgments

In addition to the contact named above, Patrick Ward (Assistant Director), Christopher Forys (Analyst in Charge), Emily Bond, Christina Cantor, William Chatlos, Eli Dile, Lijia Guo, Holly Halifax, Laura Ann Holland, Yann Panassie, Stephen Ruszczyk, Jessica Sandler, Joseph Silvestri, Jena Sinkfield, and Kelsey Wilson made key contributions to this report.
Why GAO Did This Study

NFIP has faced significant financial challenges over the years, highlighted by a rise in catastrophic flood events and its $20.5 billion debt to Treasury. Contributing to these challenges are repetitive loss properties—those that have flooded and received a claim payment multiple times. Acquiring and demolishing these properties is one alternative to paying for repeated claims, but questions exist about the cost, efficiency, and effectiveness of this approach. GAO was asked to review FEMA's property acquisition efforts as a means of addressing NFIP's financial challenges. This report examines (1) funding programs available for acquisitions, (2) FEMA's flood mitigation efforts, and (3) factors contributing to NFIP's fiscal exposure. To conduct this work, GAO reviewed FEMA guidance and other documentation; analyzed FEMA data sets related to NFIP policies and claims, repetitive loss properties, and mitigation projects; and interviewed FEMA officials.

What GAO Found

The Federal Emergency Management Agency (FEMA) administers three grant programs that can fund efforts to mitigate the flood risk of properties insured by the National Flood Insurance Program (NFIP). Together, these three programs funded $2.3 billion in mitigation projects from fiscal years 2014 through 2018. The largest program's funding is tied to federal recovery dollars following presidential disaster declarations, while the other two programs are funded each year through congressional appropriations. States and localities generally must contribute 25 percent of the cost of a mitigation project, but some other federal program funds can be used for that purpose. One example of such a project is property acquisition—purchasing a high-risk property from a willing property owner, demolishing the structure, and converting the property to green space.
From 1989 to 2018, FEMA helped states and localities mitigate more than 50,000 properties; however, the number of nonmitigated repetitive loss properties (generally meaning those that flooded at least twice in 10 years) has grown. Mitigation efforts varied by state. Property acquisition accounted for about 80 percent of mitigated properties nationwide, but, in some states, elevation (raising a structure) was more commonly used. In addition, some states (e.g., Missouri and North Carolina) mitigated a high number of properties relative to their numbers of repetitive loss properties, while others (Florida, New York, Louisiana, and Texas) mitigated a low number. While these efforts can reduce flood risk and claim payments, the federal government's fiscal exposure from NFIP remains high because premium rates do not fully reflect the flood risk of its insured properties. NFIP has experienced several catastrophic flood events in recent years, and the frequency and severity of floods are expected to increase. However, NFIP's premium rates have not provided sufficient revenue to pay claims. As a result, FEMA still owed Treasury $20.5 billion as of March 2020, despite Congress canceling $16 billion of debt in 2017. As GAO has reported in the past (GAO-17-425), Congress will need to consider comprehensive reform, including mitigation and structural changes to premium rates, to ensure NFIP's solvency.

What GAO Recommends

GAO suggested in GAO-17-425 that Congress make comprehensive reforms to NFIP to improve the program's solvency. Given NFIP's continued debt growth, GAO maintains that comprehensive reform warrants consideration.
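The report's general definition of a repetitive loss property (at least two flood claims within 10 years) can be sketched as a date-window check. The helper name, the approximate 10-year window, and the example dates are illustrative assumptions, not FEMA's actual methodology.

```python
# Hedged sketch of the general repetitive loss definition above: a property
# whose claim history includes at least two claims within roughly 10 years.
from datetime import date, timedelta

TEN_YEARS = timedelta(days=3652)  # approximate 10-year window (assumption)

def is_repetitive_loss(claim_dates: list) -> bool:
    """True if any two claims fall within roughly 10 years of each other."""
    dates = sorted(claim_dates)
    # After sorting, it suffices to check consecutive pairs: the closest
    # pair of claim dates is always adjacent in sorted order.
    return any(b - a <= TEN_YEARS for a, b in zip(dates, dates[1:]))

# Hypothetical property with claims about 7 years apart
print(is_repetitive_loss([date(2005, 8, 29), date(2012, 10, 29)]))  # True
```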
Background

In April 2018, Facebook disclosed that a Cambridge University researcher may have improperly shared the data of up to 87 million of Facebook’s users with a political consulting firm. This followed other incidents in recent years involving the misuse of consumers’ personal information from the Internet, which about three-quarters of Americans use. These types of incidents have raised public concern because Internet-based services and products, which are essential for everyday social and economic purposes, often collect and use various forms of personal information that could cause users harm if released. The federal privacy framework for private-sector companies comprises a set of tailored laws that govern the use and protection of personal information for specific purposes, in certain situations, or by certain sectors or types of entities. Such laws protect consumers’ personal information related to their eligibility for credit, financial transactions, and personal health, among other areas. We reported in 2013 that no overarching federal privacy law governs the collection and sale of personal information among private-sector companies, including information resellers—companies that collect and resell information on individuals. We found that gaps exist in the federal privacy framework, which does not fully address changes in technology and the marketplace. We recommended that Congress consider legislation to strengthen the consumer privacy framework to reflect the effects of changes in technology and the marketplace. Such legislation has not been enacted.

FTC’s Role and Authorities for Overseeing Internet Privacy

As we reported in January 2019, FTC is primarily a law enforcement agency with authority to, among other things, address consumer concerns about Internet privacy, both for Internet service providers and content providers.
It does so using its general authority under section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” Even though the FTC Act does not speak in explicit terms about protecting consumer privacy, the Act authorizes such protection to the extent it involves practices FTC defines as unfair or deceptive. According to FTC, an act or practice is “unfair” if it causes, or is likely to cause, substantial injury not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or competition as a result of the practice. FTC has used this “unfairness” authority to address situations where a company has allegedly failed to properly protect consumers’ data, for example. According to FTC, a representation or omission is “deceptive” if it is material and is likely to mislead consumers acting reasonably under the circumstances. FTC has applied this “deceptiveness” authority to address deceptions related to violations of written privacy policies and representations concerning data security, for example. FTC staff investigate Internet privacy complaints from various sources and also initiate investigations on their own. If FTC staff have reason to believe that an entity is engaging in an unfair or deceptive practice, they may forward an enforcement recommendation to the commission. The commission then determines whether to pursue an enforcement action. With certain exceptions, FTC generally cannot directly impose civil monetary penalties for Internet privacy cases. Instead, FTC typically addresses Internet privacy cases by entering into settlement agreements requiring companies to take actions such as implementing reasonable privacy and security programs. If a company then violates its settlement agreement with FTC, the agency can request civil monetary penalties in court for the violations. 
In addition, FTC can seek to impose civil monetary penalties directly for violations of certain statutes and their implementing regulations, such as the statute pertaining to the Internet privacy of children and its corresponding regulations. FTC has not promulgated rules under section 5 specific to Internet privacy. According to FTC staff, the process the agency must use to issue such rules—known as the Magnuson-Moss procedures—includes steps that add time and complexity to the rulemaking process. FTC has not promulgated any regulations using the Magnuson-Moss procedures since 1980. Although FTC has not implemented its section 5 authority by issuing regulations regarding Internet privacy, it has issued regulations when directed and authorized by Congress to implement other statutory authorities using a different set of rulemaking procedures. These procedures, spelled out in section 553 of the Administrative Procedure Act (APA), are those that most federal agencies typically use to develop and issue regulations. APA section 553 establishes procedures and requirements for what is known as “informal,” or notice-and-comment, rulemaking. Among other things, section 553 generally requires agencies to publish a notice of proposed rulemaking in the Federal Register. After giving interested persons an opportunity to comment on the proposal by providing “data, views, or arguments,” the statute then requires the agency to publish the final rule in the Federal Register. In contrast, the rulemaking procedures that FTC generally must follow to issue rules under the FTC Act are the Magnuson-Moss procedures noted above. These are required by the Magnuson-Moss Warranty Act amendments to the FTC Act and impose additional rulemaking steps beyond APA section 553. These steps include providing the public and certain congressional committees with an advance notice of proposed rulemaking (in addition to the notice of proposed rulemaking).
FTC’s rulemaking under Magnuson-Moss also calls for, among other things, oral hearings, if requested, presided over by an independent hearing officer, and preparation of a staff report after the conclusion of public hearings, giving the public the opportunity to comment on the report. FTC has promulgated regulations using the APA section 553 notice-and-comment rulemaking procedures when authorized or directed by specific statutes. For example, the 1998 Children’s Online Privacy Protection Act (COPPA) required FTC to issue regulations concerning children’s online privacy; promulgate these regulations using the APA section 553 process; and, in determining how to treat a violation of the rules, to treat it as an unfair or deceptive act or practice in most cases. COPPA governs the online collection of personal information from children under the age of 13 by operators of websites or online services, including mobile applications. COPPA contained a number of specific requirements that FTC was directed to implement by regulation, such as requiring websites to post a complete privacy policy, to notify parents directly about their information collection practices, and to obtain verifiable parental consent before collecting personal information from their children or sharing it with others. Laws and regulations may be enforced in various ways, for example, by seeking civil monetary penalties for non-compliance. As mentioned, FTC has authority to seek civil monetary penalties when a company violates a settlement agreement or certain statutes or regulations. For example, in March 2018, FTC announced that it is investigating whether Facebook’s current privacy practices violate a settlement agreement that the company entered into with FTC. In the case that resulted in the 2012 settlement, FTC had charged Facebook with deceiving consumers by telling them they could keep their information private, but then allowing it to be shared and made public.
FTC also has authority to seek civil monetary penalties for violations of the COPPA statute as well as FTC’s COPPA regulations. In our January 2019 Internet privacy report, we found that during the last decade, FTC filed 101 Internet privacy enforcement actions to address practices that the agency alleged were unfair, deceptive, a violation of COPPA, a violation of a settlement agreement, or a combination of those reasons. Most of these actions pertained to first-time violations of the FTC Act for which FTC does not have authority to levy civil monetary penalties. In nearly all 101 cases, companies settled with FTC, which required the companies to make changes in their policies or practices as part of the settlement.

Stakeholders and FTC Identified Potential Actions to Enhance Federal Oversight of Consumers’ Internet Privacy

Various stakeholders we interviewed for our January 2019 Internet privacy report said that opportunities exist for enhancing Internet privacy oversight. Most industry stakeholders said they favored FTC’s current approach—direct enforcement of its unfair and deceptive practices statutory authority, rather than promulgating and enforcing regulations implementing that authority. These stakeholders said that the current approach allows for flexibility; that regulations could hinder innovation, create loopholes, and become obsolete; and that rulemakings can be lengthy. Other stakeholders, including consumer advocates and most former FTC and FCC commissioners we interviewed, favored having FTC issue and enforce regulations. Stakeholders said that regulations can provide clarity, flexibility, and act as a deterrent, and may also promote fairness by giving companies notice of what actions are prohibited.
Those stakeholders who believe that FTC’s current authority and enforcement approach is unduly limited identified three main actions that could better protect Internet privacy: (1) enactment of an overarching federal privacy statute to establish general requirements governing Internet privacy practices of all sectors, (2) APA section 553 notice-and-comment rulemaking authority, and (3) civil penalty authority for any violation of a statutory or regulatory requirement, rather than allowing penalties only for violations of settlement agreements or consent decrees that themselves seek redress for a previous statutory or regulatory violation.

Privacy Statute

Stakeholders from a variety of perspectives—including academia, industry, consumer advocacy groups, and former FTC and FCC commissioners—told us that a statute could enhance Internet privacy oversight by, for example, clearly articulating to consumers, industry, and privacy enforcers what behaviors are prohibited. Some stakeholders suggested that such a framework could either designate an existing agency (such as FTC) as responsible for privacy oversight or create a new agency. For example, in Canada, the Office of the Privacy Commissioner, an independent body that reports directly to the Parliament, was established to protect and promote individuals’ privacy rights. Some stakeholders also stated that the absence of a comprehensive Internet privacy statute affects FTC’s enforcement. For example, a former federal enforcement official from another oversight agency said that FTC is limited in how it can use its authority to take action against companies’ unfair and deceptive trade practices for problematic Internet privacy practices. Similarly, another former federal enforcement official from another agency said that FTC is limited in how and against whom it can use its unfair and deceptive practices authority noting, for example, that it cannot pursue Internet privacy enforcement against exempted industries.
In addition, some stakeholders said FTC’s section 5 unfair and deceptive practices authority may not enable it to fully protect consumers’ Internet privacy because it can be difficult for FTC to establish that Internet privacy practices are legally unfair. Because of this difficulty, some stakeholders said that FTC relies more heavily on its authority to take enforcement action against deceptive trade practices compared with the agency’s unfair trade practices authority. This is consistent with the results of our analysis of FTC cases, which showed that in a majority of the actions FTC settled, FTC alleged that companies engaged in practices that were deceptive. Furthermore, a recently decided federal appeals court case illustrates potential limits on FTC’s enforcement remedies. The court found that FTC could not direct the company, which was accused of unfair practices, to create and implement comprehensive data security measures for the personal information the company stored on its computer networks as a remedy for the practices alleged. Instead, the court ruled that FTC’s authority was limited to prohibiting specific illegal practices.

APA Notice-and-Comment Rulemaking

Various stakeholders said that there are advantages to overseeing Internet privacy with a statute that provides APA section 553 notice-and-comment rulemaking authority. Officials from other consumer and worker protection agencies we interviewed described their enforcement authorities and approaches. For example, officials from CFPB and FDA, both of which use APA section 553 notice-and-comment rulemaking, said that their rulemaking authority assists in their oversight approaches and supports their enforcement actions. EEOC officials said that regulations are used to guide investigations that establish whether enforcement action is appropriate.

Ability to Levy Civil Penalties for Initial Violations

Some stakeholders suggested that FTC’s ability to levy civil penalties could also be enhanced.
As noted, FTC can levy civil penalties against companies for violating certain regulations, such as COPPA regulations, or for violating the terms of a settlement agreement already in place. According to most former FTC commissioners and some other stakeholders we interviewed, FTC should be able to levy fines for initial violations of section 5 of the FTC Act. An academic told us that the power of an agency to levy a fine is a tangible way to hold industries accountable.

Breaches Involving Personally Identifiable Information Highlight the Importance of Security and Privacy

Recent data breaches at federal agencies, retailers, hospitals, insurance companies, consumer reporting agencies, and other large organizations highlight the importance of ensuring the security and privacy of personally identifiable information collected and maintained by those entities. Such breaches have resulted in the potential compromise of millions of Americans’ personally identifiable information, which could lead to identity theft and other serious consequences. For example, the breach of an Equifax online dispute portal from May to July 2017 resulted in the compromise of records containing the personally identifiable information of at least 145.5 million consumers in the United States and nearly 1 million consumers outside the United States. We reported in August 2018 that Equifax’s investigation of the breach identified four major factors—identification, detection, segmenting of access to databases, and data governance—that allowed the attacker to gain access to its network and extract information from databases containing personally identifiable information. In September 2017, FTC and CFPB, which both have regulatory and enforcement authority over consumer reporting agencies such as Equifax, initiated an investigation into the breach and Equifax’s response. Their investigation is ongoing.
According to a 2017 National Telecommunications and Information Administration (NTIA) survey conducted by the U.S. Census Bureau, 24 percent of American households surveyed avoided making financial transactions on the Internet due to privacy or security concerns. NTIA’s survey results show that privacy concerns may lead to lower levels of economic productivity if people decline to make financial transactions on the Internet. Consumers who were surveyed indicated that their specific concerns were identity theft, credit card or banking fraud, data collection by online services, loss of control over personal information, data collection by government, and threats to personal safety. Recent data breaches and developments regarding Internet privacy suggest that this is an appropriate time for Congress to consider what additional actions are needed to protect consumer privacy, including comprehensive Internet privacy legislation. Although FTC has been addressing Internet privacy through its unfair and deceptive practices authority and FTC and other agencies have been addressing this issue using statutes that target specific industries or consumer segments, the lack of a comprehensive federal privacy statute leaves consumers’ privacy at risk. Comprehensive legislation addressing Internet privacy that establishes specific standards and includes APA notice-and-comment rulemaking and first-time violation civil penalty authorities could enhance the federal government’s ability to protect consumer privacy, provide more certainty in the marketplace as companies innovate and develop new products using consumer data, and provide better assurance to consumers that their privacy will be protected. In our January 2019 report, we recommended that Congress consider developing comprehensive legislation on Internet privacy that would enhance consumer protections and provide flexibility to address a rapidly evolving Internet environment. 
Issues that should be considered include: which agency or agencies should oversee Internet privacy; what authorities an agency or agencies should have to oversee Internet privacy, including notice-and-comment rulemaking authority and first-time violation civil penalty authority; and how to balance consumers’ need for Internet privacy with industry’s ability to provide services and innovate. Chairman Portman, Ranking Member Carper, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

For further information regarding this testimony, please contact Alicia Puente Cackley at (202) 512-8678 or cackleya@gao.gov or Mark Goldstein at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Andrew Huddleston, Assistant Director; Kay Kuhlman, Assistant Director; Bob Homan, Analyst-in-Charge; Melissa Bodeau; John de Ferrari; Camilo Flores; Nick Marinos; and Sean Standley. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes the information contained in GAO's January 2019 report, entitled Internet Privacy: Additional Federal Authority Could Enhance Consumer Protection and Provide Flexibility (GAO-19-52).

What GAO Found

The United States does not have a comprehensive Internet privacy law governing the collection, use, and sale or other disclosure of consumers' personal information. At the federal level, the Federal Trade Commission (FTC) currently has the lead in overseeing Internet privacy, using its statutory authority under the FTC Act to protect consumers from unfair and deceptive trade practices. However, to date FTC has not issued regulations for Internet privacy other than those protecting financial privacy and the Internet privacy of children, which were required by law. For FTC Act violations, FTC may promulgate regulations but is required to use procedures that differ from traditional notice-and-comment processes and that FTC staff said add time and complexity. In the last decade, FTC has filed 101 enforcement actions regarding Internet privacy; nearly all actions resulted in settlement agreements requiring action by the companies. In most of these cases, FTC did not levy civil penalties because it lacked such authority for those particular violations. The Federal Communications Commission (FCC) has had a limited role in overseeing Internet privacy. From 2015 to 2017, FCC asserted jurisdiction over the privacy practices of Internet service providers. In 2016, FCC promulgated privacy rules for Internet service providers that Congress later repealed. FTC resumed privacy oversight of Internet service providers in June 2018. Stakeholders GAO interviewed had varied views on the current Internet privacy enforcement approach and how it could be enhanced.
Most Internet industry stakeholders said they favored FTC's current approach—direct enforcement of its unfair and deceptive practices statutory authority, rather than promulgating and enforcing regulations implementing that authority. These stakeholders said that the current approach allows for flexibility and that regulations could hinder innovation. Other stakeholders, including consumer advocates and most former FTC and FCC commissioners GAO interviewed, favored having FTC issue and enforce regulations. Some stakeholders said a new data-protection agency was needed to oversee consumer privacy. Stakeholders identified three main areas in which Internet privacy oversight could be enhanced:

Statute. Some stakeholders told GAO that an overarching Internet privacy statute could enhance consumer protection by clearly articulating to consumers, industry, and agencies what behaviors are prohibited.

Rulemaking. Some stakeholders said that regulations can provide clarity, enforcement fairness, and flexibility. Officials from two other consumer protection agencies said their rulemaking authority assists in their oversight efforts and works together with enforcement actions.

Civil penalty authority. Some stakeholders said FTC's Internet privacy enforcement could be more effective with authority to levy civil penalties for first-time violations of the FTC Act.

Comprehensive Internet privacy legislation that establishes specific standards and includes traditional notice-and-comment rulemaking and broader civil penalty authority could enhance the federal government's ability to protect consumer privacy.
Background

Grid Functions, Design, and Operations

The U.S. electric grid comprises three distinct functions: generation and storage, transmission, and distribution (see fig. 1).

Generation and Storage. Power plants generate electric power by converting energy from other forms—chemical, mechanical (hydroelectric or wind), thermal, radiant energy (solar), or nuclear—into electric power. Energy storage, such as batteries or pumped hydroelectric, can improve the operating capabilities of the grid while also regulating the quality and reliability of power.

Transmission. The power transmission system connects geographically distant power plants with areas where electric power is consumed. Substations are used to transmit electricity at varied voltages and generally contain a variety of equipment, including transformers, switches, relays, circuit breakers, and system operations instruments and controls.

Distribution. The distribution system carries electric power out of the transmission system to industrial, commercial, residential, and other consumers.

Three large electric grids, or interconnections, exist in the contiguous United States that collectively constitute the U.S. electric grid: the Eastern Interconnection, Western Interconnection, and Electric Reliability Council of Texas Interconnection (see fig. 2). These interconnections, which extend into parts of Canada and Mexico, operate independently with limited ability to move electric power between them; electric power is produced within an interconnection to meet demand in the same interconnection. The grid is generally considered to be resilient. Historically, grid operators have been able to respond quickly to the adverse consequences of an incident—whether it is damage from a major hurricane or a falling tree—and quickly restore service. In some cases, electricity may be restored long before utilities fully recover from an incident.
For example, in instances with physical damage to grid components, such as an event that damages many substations, it could take months or years to fully restore the equipment. The electricity industry has refined its power restoration processes after decades of experience in responding to disaster-related events, but restoration from a cyber-related event may be more challenging. For example, disaster-related events—such as hurricanes—may involve significant lead time before the incident. This allows owners and operators to take preemptive measures to protect their systems, develop restoration plans, and activate personnel. In contrast, cyberattacks may occur without warning, leaving owners and operators no time to prepare for a response. In addition, cyberattacks could target and damage specific types of components or facilities across a dispersed geographic area. Responding to such an attack could be more difficult than to a localized disaster-related event since resources may be geographically distributed rather than concentrated in the same area.

Industrial Control Systems Support the Grid

Industrial control systems are typically network-based systems that monitor and control sensitive processes and physical functions, such as the opening and closing of circuit breakers on the grid. These systems support the control of electric power generation, transmission, and distribution. System operators—which are sometimes affiliated with a particular utility or sometimes independent and responsible for multiple utility areas—manage electricity flows through these systems. Early industrial control systems operated in isolation, running proprietary control protocols using specialized hardware and software. In addition, many industrial control system components were in physically secured areas, and the components were not connected to IT systems or the internet.
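The monitor-and-control role described above (polling a device's state and issuing a command such as opening a circuit breaker) can be sketched in miniature. Everything here is illustrative: the class, threshold, and device name are invented, and real grid control relies on dedicated ICS protocols and hardware rather than a Python loop.

```python
# Purely illustrative sketch of a supervisory monitor-and-control step.
# Real SCADA/ICS communication uses protocols such as DNP3 or Modbus and
# is far more involved; names and values here are assumptions.

class CircuitBreaker:
    def __init__(self, name: str):
        self.name = name
        self.closed = True  # closed = conducting current

    def trip(self):
        self.closed = False  # open the breaker, interrupting the circuit

def control_step(breaker: CircuitBreaker, measured_amps: float, limit_amps: float) -> bool:
    """One pass of a supervisory loop: trip the breaker on overcurrent."""
    if breaker.closed and measured_amps > limit_amps:
        breaker.trip()
    return breaker.closed

feeder = CircuitBreaker("feeder-12")  # hypothetical device
control_step(feeder, measured_amps=480.0, limit_amps=400.0)
print(feeder.closed)  # False: the breaker tripped on overcurrent
```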
However, industrial control systems are changing in ways that offer advantages to system operators but that also make them more vulnerable to cyberattacks. In particular, proprietary devices in these systems are being replaced by cheaper and more widely available devices that use traditional IT networking protocols—including those that support remote access. These newer devices can provide the system operator with more detailed data on the conditions of the transmission and distribution systems and with better tools to observe and manage the grid. Remote access capabilities in the devices can also make them easier to maintain. Further, industrial control systems are being designed and implemented using traditional IT computers and operating systems, which allow corporate business and industrial control system networks to be connected more easily. Nonetheless, cyberattacks on industrial control systems supporting grid operations may require a degree of sophistication and knowledge beyond what is needed to conduct cyberattacks on IT systems. For example, industrial control systems often use operating systems and applications that may be considered unconventional to typical IT personnel.

Critical Infrastructure Protection Roles, Responsibilities, and Key Initiatives

Federal policy and public-private plans establish roles and responsibilities for the protection of critical infrastructure, including the electric grid. Presidential Policy Directive 21, issued in February 2013, shifted the nation’s focus from protecting critical infrastructure against terrorism to protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents. The directive identified 16 critical infrastructure sectors, such as the energy sector, which includes the grid.
In addition, the directive identified energy and communications systems as uniquely critical because of the enabling functions they provide across all sectors. The directive also outlined roles and responsibilities for protecting these sectors. For example:

The directive designated DOE as the sector-specific agency for the energy sector. According to the directive, DOE and other sector-specific agencies are responsible for, among other things, collaborating with critical infrastructure owners and operators, identifying vulnerabilities, and helping to mitigate incidents. In addition, the Fixing America’s Surface Transportation Act of 2015 codified DOE’s role as the sector-specific agency for the energy sector and gave DOE the authority to order emergency measures, following a Presidential declaration of a grid security emergency, to protect or restore the reliability of critical electric infrastructure. The Office of Cybersecurity, Energy Security, and Emergency Response is the lead for DOE’s energy sector cybersecurity efforts.

The directive called for DHS to coordinate the overall federal effort to promote the security and resilience of the nation’s critical infrastructure. Within DHS, the Cybersecurity and Infrastructure Security Agency’s National Cybersecurity and Communications Integration Center is the lead for cyber and physical infrastructure security. Private-sector critical infrastructure owners and operators are encouraged, but not required, to report cybersecurity incidents to the center.

The directive emphasized that critical infrastructure owners and operators are uniquely positioned to manage risks to their individual operations and assets and to determine effective strategies to make them more secure and resilient.

The National Infrastructure Protection Plan, updated by DHS in December 2013, among other things, further integrates critical infrastructure protection efforts between government and private sectors.
It describes a voluntary partnership model as the primary means of coordinating government and private-sector efforts to protect critical infrastructure. As part of the partnership structure, the designated sector-specific agencies serve as the lead coordinators for the security programs of their respective sectors. The plan also called for each sector to have a government coordinating council, consisting of representatives from various levels of government, and many sectors have a coordinating council consisting of owner-operators of these critical assets or members of their respective trade associations. For example, the Energy Sector Government Coordinating Council has been established (comprising the electricity subsector, as well as the oil and natural gas subsectors), and an Electricity Subsector Coordinating Council has been established to represent electricity asset owners and operators. Executive Order 13636: Improving Critical Infrastructure Cybersecurity, issued in 2013, among other things, addresses the need to improve cybersecurity through information sharing and collaboratively developing and implementing risk-based standards. It called for NIST to lead the development of a framework to reduce cybersecurity risks to critical infrastructure. It also called for sector-specific agencies to develop mechanisms to encourage adoption of the framework. NIST issued its Cybersecurity Framework in 2014 and updated it in April 2018. The framework provides a set of cybersecurity activities, desired outcomes, and applicable references that are common across all critical infrastructure sectors, including the energy sector. The executive branch has taken steps toward outlining a federal strategy for confronting cyber threats—including those facing critical infrastructure such as the grid.
For example:

Executive Order 13800: Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, issued in May 2017, required federal agencies to take a variety of actions aimed at improving the cybersecurity of federal networks and critical infrastructure. Among other things, the order required DOE and DHS to assess the potential scope and duration of a prolonged power outage associated with a significant cyber incident, the readiness of the United States to manage the consequences of such an incident, and any gaps or shortcomings in assets or capabilities required to mitigate the consequences of such an incident.

The National Cyber Strategy, issued in September 2018, builds upon Executive Order 13800 and describes actions that federal agencies and the administration are to take to, among other things, secure critical infrastructure. For example, one of the strategy’s seven goals is protecting critical infrastructure. To achieve this goal, the strategy outlines a number of priority actions, such as prioritizing risk reduction across seven key areas, including energy and power.

The DHS Cybersecurity Strategy was released in May 2018 with the intent of providing the department with a framework to execute cybersecurity responsibilities during the next 5 years. The plan outlines seven goals the department plans to accomplish in support of its mission related to managing national cybersecurity risks. For example, for the goal of protecting critical infrastructure, the plan outlines a number of objectives and sub-objectives, such as expanding and improving the sharing of cyber threat indicators, defensive measures, and other cybersecurity information.
In our 2018 and 2019 updates on government high-risk areas, we reported that these executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability.

Electric Grid Cybersecurity Regulation

Federal and state authorities play key roles in regulating the reliability of the grid, which can be impaired by cybersecurity attacks. FERC is the federal regulator of interstate transmission of electricity with responsibility to review and approve standards to provide for the reliable operation of the bulk power system. In addition, FERC oversees NERC, which is the federally designated U.S. electric reliability organization. NERC is responsible for conducting reliability assessments and enforcing mandatory standards to ensure the reliability of the bulk power system—a term that refers to (1) facilities and control systems necessary for operating the electric transmission network and (2) the output from certain generation facilities needed for reliability. NERC develops reliability standards collaboratively through a deliberative process involving utilities and others in the electricity industry. NERC then sends the standards to FERC, which can either approve them or remand them to NERC for revision. These reliability standards include critical infrastructure protection standards for protecting electric utility-critical and cyber-critical assets from cyberattacks. FERC has approved 11 such cybersecurity standards, 10 of which are currently enforced. The standards call for organizations to classify their cyber systems as low-, medium-, or high-impact based on the adverse impact that loss, compromise, or misuse of those systems could have on the reliable operation of the bulk electric system.
The classifications are made based on criteria and associated thresholds for, among others, generation resources and transmission substation operations. In turn, the standards apply differently to cyber systems based on whether they are classified as low-, medium-, or high-impact systems. For example:

Low-impact systems. Systems that affect net aggregate generation capacity of less than 1,500 megawatts at one power plant location within a single interconnection are classified as low-impact systems and are subject to the requirements in two of the 11 cybersecurity standards.

Medium-impact systems. Systems that similarly affect net aggregate generation capacity of at least 1,500 megawatts are classified as medium-impact systems and are subject to requirements in the full set of cybersecurity standards.

High-impact systems. Systems that are used by and located at certain control centers are classified as high-impact systems and are subject to the full set of cybersecurity standards.

The standards generally require organizations to implement similar controls for medium- and high-impact systems, with more stringent variations of certain controls for high-impact systems. As of December 2017, at most about 20 percent of the nation’s generation capacity came from power plants with medium-impact systems and therefore was subject to requirements in the full set of cybersecurity standards. Both NERC and FERC have authority to enforce reliability standards. In addition, FERC has the authority to oversee NERC’s enforcement of the FERC-approved reliability standards. Cyber incident reporting is also an important part of federal and nonfederal regulatory efforts. Federal law requires grid owners and operators to report bulk power system incidents to DOE when certain criteria are met, such as a cyber event that causes interruptions of electrical system operations or that could potentially affect power system reliability.
In addition, FERC-approved reliability standards require certain registered grid owners and operators to report cybersecurity incidents—that is, cybersecurity events that have compromised or disrupted one or more reliability tasks—to NERC. State regulators generally oversee the reliability of distribution systems, and cybersecurity regulations related to the distribution grid may vary across states. In 2017, the National Association of Regulatory Utility Commissioners released an updated version of its cybersecurity primer for state utility regulators that aims to provide guidance to state regulators. The primer highlights the NIST Cybersecurity Framework as well as the FERC-approved cybersecurity standards as helpful tools for utilities and state regulators.

The Grid Faces Significant Cybersecurity Risks and Challenges

The U.S. electric grid faces significant cybersecurity risks—that is, threats, vulnerabilities, and impacts—and grid owners and operators face significant challenges in addressing these risks. Threat actors are becoming increasingly capable of carrying out attacks on the grid. At the same time, the grid is becoming more vulnerable to attacks. With respect to the potential impacts of the threats and vulnerabilities, U.S. cybersecurity incidents reportedly have not caused a domestic power outage. In addition, federal agencies have performed three assessments of the potential impacts that cyberattacks could have on the grid, but the potential scale of any associated outages is uncertain due to limitations in the assessments. As grid owners and operators attempt to address cybersecurity risks, they face a number of challenges, such as difficulties in hiring a sufficient cybersecurity workforce and limited public-private information sharing.
Various Cyber Threat Actors Are Increasingly Capable of Attacking the Grid

A variety of threat actors pose significant cybersecurity threats to the electric grid, and many of these threat actors are becoming increasingly adept at carrying out attacks on industrial control systems, such as those supporting grid operations. Relatedly, the skill needed to attack industrial control systems is decreasing, as tools for exploiting industrial control system vulnerabilities become more available. According to the 2019 Worldwide Threat Assessment of the U.S. Intelligence Community, nations, criminal groups, and terrorists pose the most significant cyber threats to U.S. critical infrastructure. In addition, hackers and hacktivists, as well as insiders, pose significant cyber threats to the grid, according to officials and representatives of key federal and nonfederal entities whom we interviewed.

Nations

Nations, including nation-state, state-sponsored, and state-sanctioned groups or programs, use cyber tools as part of their information-gathering and espionage activities. According to the 2019 Worldwide Threat Assessment, China and Russia pose the greatest cyberattack threats; of particular concern, they possess the ability to launch cyberattacks that could cause localized, temporary disruptive effects on critical infrastructure. For example, the assessment states that China has the ability to disrupt a natural gas pipeline for days to weeks (which could in turn disrupt grid operations), and Russia has the ability to disrupt an electrical distribution network for at least a few hours. The assessment also states that Russia is mapping U.S. critical infrastructure with the long-term goal of being able to cause substantial damage. Separately, DHS and the Federal Bureau of Investigation have described Russian activities as an intrusion campaign by actors on U.S. government entities and critical infrastructure organizations.
In addition, a nation-state has successfully demonstrated its capability to disrupt the grid of another country. Specifically, according to the Office of the Director of National Intelligence, in December 2015 a state-sponsored actor conducted a cyberattack on the Ukrainian power grid that systematically disconnected substations, resulting in a power outage that lasted 3 hours. Officials and representatives of key federal and nonfederal entities we interviewed identified nations as the most capable threat actor but also noted that nations may not take action to disrupt the U.S. grid. For example, representatives from two utilities stated that nation-state actors are of the most concern because they have the resources to persist in their operations. However, officials from Los Alamos National Laboratory explained that nation-states may choose not to sponsor an attack because they could be easily identified. In addition, a representative from one of the utilities that we met with stated that nation-states may not pursue a cyberattack on the U.S. grid because they may be concerned about the potential response by the United States. Federal officials we interviewed noted that nation-states may be interested in gathering information about U.S. critical infrastructure with the intent of conducting a cyberattack at a later date.

Criminal Groups

Criminal groups, including organized crime organizations, seek to use cyberattacks for monetary gain. According to the 2019 Worldwide Threat Assessment, financially motivated cyber criminals will likely expand their targets in the United States in the next few years, and their actions could disrupt critical infrastructure in non-energy sectors. The intelligence community does not identify criminal groups as a threat specifically to the energy sector, but these groups could still have a large impact on the grid.
For example, criminal organizations often use ransomware—malicious software used to deny access to IT systems or data—to hold systems or data hostage until a ransom is paid. Criminal groups have not used ransomware to target industrial control systems, but ransomware has been used to infect IT systems tied to industrial control systems. For example, the Center for Internet Security reported in March 2019 that the LockerGoga ransomware disrupted industrial and manufacturing firms’ networks, including a Norwegian aluminum company, which had to temporarily move to manual production. According to DHS’s Industrial Control Systems Computer Emergency Response Team, ransomware continues to be a major threat to both IT and industrial control systems that support the grid. In addition, officials and representatives of key federal and nonfederal entities we interviewed suggested that nations could hire criminal groups to achieve their objectives. For example, an official from the National Renewable Energy Laboratory stated that criminal groups could be leveraged by other threat actors that have different incentives, such as nations focused on intelligence-gathering operations.

Terrorists

Terrorists seek to destroy, incapacitate, or exploit critical infrastructures in order to threaten national security, inflict mass casualties, weaken the economy, and damage public morale and confidence. Terrorist groups may be highly motivated to disrupt or damage the grid, but they do not currently have the sophisticated tools or skill necessary to execute a cyberattack that could cause a widespread outage or significantly damage the power system, according to the 2019 Worldwide Threat Assessment. However, terrorist groups could cause disruptive effects, such as defacing websites or executing denial-of-service attacks against poorly protected networks.

Hackers and Hacktivists

Hackers break into networks for a challenge, revenge, stalking, or monetary gain, among other reasons.
By contrast, hacktivists are ideologically motivated and use cyber exploits to further political goals, such as free speech or to make a point. Hackers and hacktivists no longer need a great amount of skill to compromise IT systems because they can download commonly available attack tools. Officials and representatives of key federal and nonfederal entities we interviewed told us that hackers and hacktivists may have less capability to do harm than the most significant threat actors identified by the intelligence community, but they still pose a threat to the grid. For example, officials from the National Energy Technology Laboratory explained that while hacktivists generally are less capable than nations, their intent to inflict harm or to damage operations is typically more immediate than nations’ longer-term goals. In addition, representatives from nonfederal entities stated that hacktivists may be capable of causing problems for electric utilities and systems supporting the delivery of power.

Insiders

Insiders are entities (e.g., employees, contractors, vendors) with authorized access to an information system or enterprise who have the potential to cause harm through destruction, disclosure, modification of data, or denial of service. Such destruction can occur wittingly or unwittingly. For example, in 2009, a disgruntled former IT employee of a Texas power plant allegedly disrupted the company’s energy forecast system when the company failed to deactivate the employee’s account access and confiscate his company-issued laptop after firing him two days earlier. By contrast, in another case in 2009, contractors were reported to have unwittingly introduced malware on a uranium enrichment facility’s workstations in Iran. Specifically, the attackers introduced malware on the contractor’s business network.
The malware then reportedly spread to universal serial bus (USB) devices that were used to transfer information between the contractors’ business IT network and the uranium enrichment facility’s workstations. Officials and representatives of key federal and nonfederal entities that we interviewed stated that while the threat posed by insiders varies, they could cause damaging effects. For example, Sandia National Laboratories officials explained that insiders could include knowledgeable employees with privileged access to critical systems or contractors with limited system knowledge. Further, representatives from another nonfederal entity explained that insider threats are a concern because of the economically valuable information they could steal.

The Grid Is Becoming More Vulnerable to Cyberattacks

The electric grid is becoming more vulnerable to cyberattacks via (1) industrial control systems, (2) consumer Internet of Things (IoT) devices connected to the grid’s distribution network, and (3) the global positioning system (GPS).

Industrial Control Systems

As previously noted, cheaper and more widely available devices that use traditional IT networking protocols are being integrated into industrial control systems. The use of these protocols, as well as traditional IT computers and operating systems, has led to a larger cyberattack surface—the different points in a network where attackers can try to enter or extract information—for the grid’s systems. In particular, many industrial control system devices include remote access capabilities, and industrial control systems are increasingly connected to corporate business networks.

Remote access capabilities. Vendors are increasingly including remote access capabilities, including modems and wireless networking, as part of industrial control system devices. These capabilities are susceptible to exploitation by malicious actors.
For example, malicious actors could scan a range of potential telephone numbers common to an area or published on a company website to find open modem connections to these devices (referred to as “war dialing”). In addition, malicious actors could scan for unsecured wireless networks connected to industrial control system devices while in close proximity to the devices (referred to as “war driving”). If implemented effectively, modern cybersecurity practices often protect against techniques used to remotely access industrial control system devices, and only allow trusted connections. However, to circumvent these practices, a malicious actor could, for example, compromise a vendor’s network—which is often trusted by owners and operators—and use the trusted connection to remotely connect to industrial control system devices.

Connections to corporate business networks. Industrial control systems, which were once largely isolated from the internet and business IT systems, are increasingly connected in modern energy systems, allowing cyberattacks to originate in business IT systems and migrate to industrial control systems. For example, malicious nation-state actors used spear phishing emails to deploy malware on business IT networks in the 2015 attack on Ukrainian electricity utilities. After gaining initial access to the business IT networks, the attackers reportedly used a variety of techniques to migrate to the industrial control system networks of the utilities. Moreover, even if industrial control systems are not physically connected to business IT systems, malicious actors can exploit the use of removable media between the two networks. For example, as previously mentioned, contractors were reported to have unwittingly introduced malware on uranium enrichment facility workstations in Iran by using USB devices that were infected with the malware on the contractors’ business IT network to transfer information to the uranium enrichment facility’s workstations.
Figure 3 illustrates how malicious actors could leverage this increasing attack surface to compromise industrial control systems. Compounding the risk associated with the increased attack surface, many legacy industrial control systems were not designed with cybersecurity protections because they were not intended to be connected to networks, such as the internet. For example, many legacy devices are not able to authenticate commands to ensure that they have been sent from a valid user and may not be capable of running modern encryption protocols. In addition, some legacy devices do not have the capability to log commands sent to the devices, making it more difficult to detect malicious activity. Additionally, even in the case of more modern devices, the safety and efficiency goals of the grid and the supporting industrial control systems can conflict with the goal of security in the design and operation of industrial control systems. According to an Idaho National Laboratory analysis, grid owners and operators may not always be able to identify industrial control system vulnerabilities in a timely manner. Vulnerability scanning is often used in IT systems to validate proper system configuration and to identify any vulnerabilities that may be present. However, conventional IT vulnerability scanning can disable or shut down energy delivery systems, and testing may not always detect vulnerabilities deep within industrial control system software. Further, even if owners and operators are able to identify industrial control system cybersecurity vulnerabilities, they may not be able to address those vulnerabilities in a timely manner because certain industrial control system devices may have high availability requirements to support grid operations. These devices typically need to be taken offline to apply patches to fix cybersecurity vulnerabilities. In addition, grid owners and operators need to rigorously test the patches before applying them. 
Security patches are typically tested by vendors, but they can degrade or alter the functionality of industrial control systems, which can have serious consequences for grid operations. Consequently, there is increased risk that malicious actors may be able to exploit vulnerabilities in industrial control system devices before patches can be applied. According to DHS, the number of vulnerability advisories for industrial control system devices has steadily increased, from 17 advisories in 2010 to 223 advisories in 2018 (see fig. 4). Moreover, supply chains for industrial control systems can introduce vulnerabilities that could be exploited for a cyberattack. For example, there is a potential for manufacturers and developers to—wittingly or unwittingly—include unauthorized code or malware in industrial control system devices and systems that provides a back door into the equipment or that allows the program to “call home” once installed. Further, manufacturers and software developers create their products in many different locations around the world, thus making them potentially susceptible to foreign-based threats. For example, a capable nation-state could gather useful information on the types of equipment used at a particular utility with the intent to undermine security controls at a later time. In addition, manufacturers and developers have made sensitive information publicly available regarding the operation of their hardware and software. For example, manufacturers and developers have published vendor manuals, which include information such as default passwords and operating instructions. These manuals often appear on the internet and can aid malicious actors in conducting cyberattacks on industrial control systems.
Consumer IoT Devices Connected to the Grid

Researchers and federal agencies have recently identified concerns about the potential introduction of cyber vulnerabilities to the grid through the connection of consumer IoT devices to the grid’s distribution network. For example, university researchers in 2018 used large, real-world grid models to simulate the feasibility and impact on the grid of a coordinated cyberattack on smart home appliances. Specifically, the researchers found that malicious threat actors could compromise a large number of high-wattage IoT devices (e.g., air conditioners and heaters) and turn them into a botnet—a network of devices infected with malicious software and controlled as a group without the owners’ knowledge. The malicious actors could then use the botnet to launch a coordinated attack aimed at manipulating the demand across distribution grids. For example, according to the researchers, one such attack could involve synchronously switching on all of the compromised devices. Such an attack could disrupt the balance of power generation and consumption and ultimately cause an outage. An official from the National Renewable Energy Laboratory explained that the likelihood of attacks on the distribution network using IoT devices is low but could increase in the future. In particular, the official explained that the wattage needed to create a significant disruption in the balance of supply and demand would require a botnet of tens of thousands of smart appliances. Botnets of this size have been created, but the laboratory official explained that it would be very difficult to manipulate all of those devices to turn on at precisely the same time. However, the official cautioned that such an attack could become more plausible in the future as additional high-wattage systems and devices, such as building energy management systems and electric vehicles, are connected to the internet.
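The official's "tens of thousands" sizing can be illustrated with back-of-the-envelope arithmetic. The disturbance size (100 megawatts of synchronized demand) and the per-device load (3 kilowatts, roughly a residential air conditioner) in the sketch below are illustrative assumptions, not figures from the assessments or the university study:

```python
import math

def botnet_size_needed(disturbance_mw: float, device_kw: float) -> int:
    """Estimate how many compromised high-wattage devices would have to
    switch on simultaneously to inject a given demand disturbance (MW)."""
    return math.ceil(disturbance_mw * 1000 / device_kw)

# Illustrative assumptions: a 100 MW synchronized demand swing,
# using 3 kW devices (roughly a residential air conditioner).
devices = botnet_size_needed(100, 3.0)
print(devices)  # 33334 devices, i.e., on the order of tens of thousands
```

Under these assumed numbers, a coordinated swing large enough to matter on a distribution system would indeed require a botnet in the tens of thousands of appliances, consistent with the laboratory official's characterization.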
Global Positioning System Vulnerability

The grid is dependent on GPS timing to monitor and control generation, transmission, and distribution functions. According to DOE, the GPS signal is susceptible to exploitation by malicious actors. For example, a malicious actor could inject a counterfeit GPS signal (known as GPS spoofing) that could result in disruptions to grid operations.

U.S. Cybersecurity Incidents Reportedly Have Not Caused Power Outages, and the Potential Impacts from a Cyberattack Are Uncertain

According to the three entities responsible for collecting information on cybersecurity incidents that affect the electric grid—DHS, DOE, and NERC—none of the cybersecurity incidents reported in the United States have disrupted the reliability or availability of the grid, and none have resulted in a power outage. Even though cyber incidents involving the grid reportedly have not caused power outages in the United States, cyberattacks on foreign industrial control systems have resulted in power outages. For example, in December 2015, malicious actors linked by Ukrainian officials to the Russian government conducted cyberattacks on three Ukrainian power distribution operators, resulting in a loss of power for about 225,000 customers. We did not find evidence that these attacks physically damaged grid components, but cyberattacks on industrial control systems in other sectors demonstrate that this is possible. For example, in 2014, malicious cyber actors compromised industrial control systems and caused failures that led to massive damage to a blast furnace at a German steel mill. Further, federal agencies have performed three assessments of the potential impacts of cyberattacks on the industrial control systems supporting the grid. Specifically, DOE and FERC have conducted three assessments of the potential impact of cyberattacks on the grid at scales ranging from multiple system operators to an entire interconnection.
The two DOE assessments—which according to DOE officials are early drafts and have not gone through intra-agency review—focused on the impact of a cyberattack within a single interconnection and produced varying reports of the potential scale of power outages that could result from a cyberattack. The remaining assessment—which FERC conducted in 2013—reviewed the impact of a cyber or physical attack on all three interconnections and concluded that an attack could result in a widespread blackout spanning the contiguous United States. Table 1 below describes the three assessments. However, because of limitations in the three federal assessments, the scale of any power outages that may result from a cyberattack is uncertain. In particular:

Federal agencies have conducted one study—FERC’s 2013 study—that assesses the potential impact of a coordinated attack in each of the three interconnections. However, in 2015, DOE officials raised concerns about the scenario and related assumptions used in that study that called into question the findings. Specifically, at that time, DOE officials reported that they found several of the scenario’s assumptions highly unlikely, including peak capabilities at all targeted generation stations at the time of an attack and the loss of all safety systems designed to prevent the consequences described in the analysis. Further, DOE officials reported that they found the study’s scenarios even more unlikely to result in a total loss of power or any other consequence that could be reasonably expected to result in damage to national security.

The 2017 assessment conducted by DOE’s Argonne National Laboratory was limited in scope to a six-state region. In addition, the assessment focused on a single cyberattack scenario and noted that many other grid cyberattack methods and outcomes were possible.

The 2017, 2018, and 2019 editions of DOE’s draft Electricity Subsector Risk Characterization Study have significant methodological limitations.
Specifically, officials from Lawrence Livermore National Laboratory who were contracted to perform the analyses cautioned that they used a reduced model of the Western Interconnection as it existed around 1980 and emphasized that their methodology should not be used to predict the behavior of the actual bulk power system. For example, those researchers told us that their selected model of the Western Interconnection had less than a quarter of its actual capacity in 2018. The DOE official responsible for the studies said that the assumption for the worst-case scenario was from that official’s professional judgment, not a documented analysis. Later, officials at Sandia National Laboratories told us that the worst-case scenario in the DOE draft study was a point solution used as a proof of concept, that the study was not of a high level of rigor, and that the assumptions may not represent a vulnerability in the actual bulk power system. Further, the DOE methodology treated all assets removed from service equally; accordingly, the researchers did not distinguish the loss of specific assets (such as a substation or transmission line) in the calculation of attack difficulty and likelihood. Because of these limitations, some of the draft studies’ conclusions may not be realistic. For example, one of DOE’s major conclusions in the 2017 Risk Characterization Study—that a cyberattack may result in a relatively small loss of load in the United States about 8 times per year—may not be plausible because there have not been any reported cyberattacks that have caused an outage in the United States. In addition, the three draft DOE studies have widely varying conclusions on the likelihood of cyberattacks across the selected range of loss of load.
For example, the 2018 draft study concluded that a cyberattack resulting in a more substantial loss of load had an average likelihood of occurring nearly once every 10 years, while the 2019 draft study concluded that such an attack would occur about once every 100 years. According to a DOE official, there is no documentation of the technical basis for the significant changes in the assessment outcomes between the 2017 and 2018 draft studies and between the 2018 and 2019 draft studies. In addition, DOE officials told us that all three studies are early drafts and have not gone through intra-agency review. Moreover, none of the federal assessments reviewed the risk associated with a cyberattack involving a botnet of high-wattage consumer IoT devices. As previously mentioned, university researchers demonstrated that malicious actors could use a botnet of IoT devices to launch a coordinated attack aimed at manipulating the demand on distribution systems across the grid. A federal official we interviewed agreed that such an attack could occur and could disrupt grid distribution systems—especially as additional high-wattage systems become connected to the internet—but said it is unclear what impact, if any, such attacks could have on the reliability of the bulk power system.

Grid Entities Reported Facing Challenges in Addressing Cybersecurity Risks

Officials and representatives of key federal and nonfederal entities we interviewed generally identified five significant challenges grid owners and operators face in addressing cybersecurity risks: (1) difficulties in hiring a sufficient cybersecurity workforce, (2) limited public-private information sharing of classified information, (3) limited resources to invest in cybersecurity protections, (4) reliance on other critical infrastructure that may be vulnerable to cyberattacks, and (5) uncertainties about how to implement cybersecurity standards and guidance.
Hiring a Sufficient Cybersecurity Workforce

Officials and representatives of key federal and nonfederal entities we interviewed identified difficulties in hiring a sufficient cybersecurity workforce as a significant challenge to addressing cybersecurity risks to the grid. For example, a representative of a nonfederal entity told us that there are a limited number of trained cybersecurity personnel interested in working in the energy sector. The representative added that there are a large number of vacancies for cybersecurity positions and that they are difficult to fill due to the limited amount of available talent and organizational resource constraints, such as providing salaries that are competitive with other sectors. A laboratory official commented that larger grid entities are able to attract the majority of skilled cybersecurity professionals, leaving smaller entities with less skilled personnel. Further, an asset owner explained that training personnel to attain sufficient cybersecurity knowledge and skills is difficult, and that the added need for knowledge of industrial control systems further complicates that training. DOE has also identified difficulties in hiring a sufficient cybersecurity workforce as a challenge. Specifically, according to DOE’s Assessment of Electricity Disruption Incident Response Capabilities, the electricity subsector continues to face challenges in recruiting and maintaining experts with strong knowledge of cybersecurity practices as well as knowledge of industrial control systems supporting the grid.

Limited Public-Private Sharing of Classified Information

Officials and representatives of key federal and nonfederal entities we interviewed identified limited public-private sharing of classified information, including the sharing of threat intelligence, as a significant challenge to addressing cybersecurity risks to the grid.
For example, a laboratory official told us that many grid owners and operators do not have security clearances. Consequently, the official explained, deeming information on certain cybersecurity threats to the grid to be “classified” leaves many utilities without the awareness to address those threats to the grid. The official added that when details are removed from classified threat intelligence in order to develop an unclassified alert, that alert often lacks the specific information utilities need to address the threat. Asset owners told us that, even for those grid owners and operators who are permitted to initiate the clearance process, it can take an extended period of time to complete the associated adjudication to obtain that clearance. In addition, two asset owners noted that, even after clearances have been received and fully adjudicated, it is often difficult to obtain access to secure locations to review classified information. DOE has also identified limited public-private information sharing as a challenge. Specifically, according to DOE’s Assessment of Electricity Disruption Incident Response Capabilities, the bidirectional flow of information and intelligence between industry and government has been highlighted by stakeholders as a continued challenge for the electricity subsector. The assessment explains that the sharing of information is impeded by the slow adoption of automated capabilities and the difficulty of sharing classified information between government and industry—particularly in real time during an incident.

Limited Resources to Invest in Cybersecurity Protections

Officials and representatives of key federal and nonfederal entities identified limited resources for cybersecurity protections as a challenge to addressing cybersecurity risks to the grid. In particular, most of the asset owners that we met with stated that it can be costly to implement required cybersecurity protections.
In addition, officials and representatives of key federal and nonfederal entities that we spoke with explained that costs—including those for cybersecurity protections—must be recovered through electric rates to customers. As a result, a laboratory official explained that many utilities prioritize cybersecurity protections that are the most cost-effective over protections that may be needed to address risks.

Reliance on Other Critical Infrastructure That May Be Vulnerable to Cyberattacks

Officials and representatives of key federal and nonfederal entities we interviewed identified the grid’s reliance on other critical infrastructure (e.g., natural gas pipelines) that may be vulnerable to cyberattacks as a challenge to addressing cybersecurity risks to the grid. For example, a representative of a nonfederal entity stated that the electricity subsector inherits cybersecurity risks from other critical infrastructures, since the electricity subsector relies on those critical infrastructures for its own operations. As such, that representative added that it is difficult to holistically determine how vulnerable the grid may be to a cyberattack. In addition, as previously mentioned, according to the 2019 Worldwide Threat Assessment, China has the ability to disrupt a natural gas pipeline for days to weeks.

Uncertainties about Implementation of Cybersecurity Standards and Guidance

Officials and representatives of key federal and nonfederal entities we interviewed identified uncertainties about how to implement cybersecurity standards and guidance as a challenge to addressing cybersecurity risks to the grid. In particular, several representatives noted that these uncertainties have led their organizations to devote additional resources to implementing the standards and guidance. For example, one asset owner explained that FERC-approved cybersecurity standards do not always include details that are needed to understand how they apply to that owner’s environment.
In addition, another asset owner stated that significant time and effort are required to understand the standards and how they might be implemented.

Federal Agencies Have Performed a Variety of Activities Aimed at Addressing Grid Cybersecurity Risks

DOE, DHS, and other federal agencies have performed a variety of critical infrastructure protection activities aimed at addressing grid cybersecurity risks, including implementing programs that help protect grid systems from cybersecurity threats and vulnerabilities. In addition, FERC has performed a variety of regulatory activities aimed at addressing grid cybersecurity risks, such as approving mandatory cybersecurity standards for the bulk power system.

DOE, DHS, and Other Agencies Have Undertaken Critical Infrastructure Protection Activities Aimed at Addressing Grid Cybersecurity Risks

DOE, DHS, and other federal agencies have performed a variety of critical infrastructure protection activities aimed at addressing grid cybersecurity risks. These activities generally align with the functions in the NIST Cybersecurity Framework, which include (1) protecting systems to mitigate cybersecurity threats and vulnerabilities; (2) identifying cybersecurity threats and vulnerabilities and detecting potential cybersecurity incidents; and (3) responding to and recovering from such incidents.

Protecting systems to mitigate cybersecurity threats and vulnerabilities

Federal agencies assist grid asset owners and operators in implementing protections that mitigate cybersecurity risks by providing capabilities aimed at preventing cybersecurity intrusions and offering training and guidance on cybersecurity practices. For example, DHS’s Enhanced Cybersecurity Services program provides intrusion-prevention capabilities to U.S.-based entities and to state, local, tribal, and territorial organizations. To carry out this voluntary program, DHS provides classified and unclassified threat information to designated commercial service providers.
These providers use the information to block access to (1) specific malicious internet addresses and (2) email with specific malicious criteria. NIST, DHS, and DOE also provide cybersecurity training and guidance. For example, NIST has developed numerous special publications on cybersecurity protections for IT and industrial control systems, such as the previously mentioned Cybersecurity Framework and its Guide to Industrial Control Systems (ICS) Security. In addition, DHS provides in-person and online training on leading cybersecurity practices for industrial control systems through its National Cybersecurity and Communications Integration Center. Lastly, DHS has taken initial steps to help grid entities manage supply chain cybersecurity risks. For example, in July 2018 DHS created a public-private partnership, known as the Supply Chain Risk Management Task Force. The task force aims to examine risks to the global information and communications technology supply chain and develop consensus recommendations to manage such risks.

Identifying cybersecurity threats and vulnerabilities and detecting potential cybersecurity incidents

Federal agencies help grid entities identify cybersecurity risks and detect incidents by providing threat and vulnerability information, performing risk assessments, performing forensic analysis, and conducting research. For example, DOE piloted and launched the Cybersecurity Risk Information Sharing Program, which is now managed by the Electricity Information Sharing and Analysis Center. It provides a voluntary, bi-directional public-private IT data sharing and analysis platform. Using both classified and unclassified sources, DOE’s Pacific Northwest National Laboratory analyzes the information to (1) identify threat patterns and attack indicators, and (2) deliver alerts to owners and operators.
In addition, DHS’s Automated Indicator Sharing program provides a server housed at each participant’s location that can be used to exchange threat indicators with the department’s National Cybersecurity and Communications Integration Center. Further, the center provides asset owners with alerts, advisories, and situational reports, including information on threats, vulnerabilities, or activity that could affect IT or industrial control system networks. DOE and DHS also offer services aimed at helping grid owners and operators assess cybersecurity risks and perform forensic analysis. For example, DOE has an evaluation tool known as the Electricity Cybersecurity Capability Maturity Model that aims to help the electricity industry evaluate, prioritize, and improve its cybersecurity capabilities. In addition, DHS offers technical assessments through its National Cybersecurity Assessment and Technical Services team that can help identify vulnerabilities and simulate a malicious adversary. Further, DHS can review potential cybersecurity incident artifacts, such as malware, phishing emails, and network logs, at its National Cybersecurity and Communications Integration Center to determine the existence or extent of a cybersecurity threat or incident. Moreover, DOE’s Cybersecurity for Energy Delivery Systems program sponsors grid cybersecurity research through DOE’s national laboratories. For example:

- Oak Ridge National Laboratory has conducted research on mechanisms that could help critical infrastructure entities better detect vulnerabilities in software used in industrial control systems.

- Four national laboratories have engaged in a project that aims to improve the capability of grid entities to collect and analyze data from their industrial control system networks and detect cybersecurity incidents.
- Oak Ridge National Laboratory and Pacific Northwest National Laboratory have a joint project to develop mechanisms for more quickly detecting and eradicating malware on industrial control systems.

Responding to and recovering from cybersecurity incidents

Federal agencies have developed policies, strategies, and plans to define their roles and responsibilities for responding to and recovering from grid cybersecurity incidents. In particular, DHS has responsibility for leading the federal effort to mitigate or lessen the impact of such incidents, the Department of Justice has responsibility for the federal law enforcement response to the threats, and DOE has authority, in designated emergencies, to impose measures to restore the reliability of critical electric infrastructure. DOE is also responsible for coordinating the energy sector-specific response with DHS and the Department of Justice. Federal agencies have also taken steps to help prepare asset owners for cyber response and recovery efforts. For instance, DHS has worked with nonfederal entities to simulate response and recovery efforts to a cyberattack through exercises such as Cyber Storm. In addition, DOE, in conjunction with the National Association of State Energy Officials, has conducted regional energy assurance exercises. These exercises aim to promote state and local preparedness and resilience for future energy emergencies stemming from a cyber incident.

FERC Has Performed Regulatory Activities Aimed at Addressing Grid Cybersecurity Risks

FERC has performed a variety of regulatory activities aimed at addressing grid cybersecurity risks.
These activities include (1) approving mandatory cybersecurity standards for the bulk power system, (2) enforcing regulatory requirements through imposition of civil penalties, (3) auditing the performance of the electric reliability organization—NERC—and its regional entities, and (4) auditing bulk power entities for compliance with the mandatory cybersecurity standards.

Approve mandatory cybersecurity standards. FERC has approved mandatory reliability standards relating to cybersecurity protections. For example, in October 2018, FERC approved a new standard to bolster supply chain risk management protections for the nation’s bulk electric system. This new standard, which will become enforceable in July 2020, is intended to augment existing standards that aim to mitigate cybersecurity risks associated with the supply chain for grid-related cyber systems.

Enforce regulatory requirements through imposition of civil penalties. FERC has referred violations of its approved cybersecurity standards to NERC to impose penalties on the bulk power entities that committed the violations. For example, such a referral occurred in January 2019 when NERC assessed a $10 million penalty based on 127 violations of the cybersecurity standards made by an undisclosed entity.

Audit the performance of the electric reliability organization. FERC has audited NERC’s performance as the electric reliability organization. In this audit, which it completed in 2012, FERC evaluated NERC’s budget formulation, administration, and execution. With respect to cybersecurity, FERC recommended that NERC (1) assess its existing staffing levels to ensure adequate resources to accomplish critical infrastructure protection work related to cybersecurity and (2) devote greater resources to carrying out its oversight duties. In 2013, FERC closed these recommendations after reviewing NERC’s plans for evaluating its staffing levels and its commitment to add resources in its business plan.
According to FERC officials, FERC continues to monitor the level of resources NERC devotes to cybersecurity oversight through its annual review of NERC’s budget.

Audit bulk power entities for compliance with standards. FERC has audited bulk power entities’ compliance with its approved cybersecurity standards. From 2016 through 2018, FERC conducted its own independent audits of eight bulk power entities for compliance with those standards and produced public lessons learned reports based on the results. According to FERC officials, the agency plans to conduct four such audits every fiscal year starting in fiscal year 2019 and to continue producing annual lessons learned reports based on the results. In addition, since the first of the cybersecurity standards became enforceable in 2009, FERC has observed eight NERC regional entity-led audits a year—one in each NERC region—focused on bulk power entity compliance with those standards.

DOE Has Not Fully Defined a Strategy to Address Grid Cybersecurity Risks and Challenges

National strategies are critical tools used to help address longstanding and emerging issues that affect national security and economic stability. In 2004, we identified a set of desirable characteristics for effective national strategies. These characteristics include:

- Purpose, scope, and methodology. Addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed.

- Problem definition and risk assessment. Addresses the particular national problems, assesses the risks to critical assets and operations—including the threats to, and vulnerabilities of, critical operations—and discusses the quality of data available regarding the risk assessment.

- Goals, subordinate objectives, activities, and performance measures.
Addresses what the strategy is trying to achieve; steps to achieve those results; and the priorities, milestones, and performance measures that include measurable targets to gauge results and help ensure accountability.

- Discussion of needed resources and investments. Addresses what the strategy will cost and the types of resources and investments needed.

- Organizational roles, responsibilities, and coordination. Addresses who will implement the strategy, what their roles will be, and mechanisms to coordinate their efforts.

As previously noted, the executive branch has taken steps toward outlining a federal strategy for confronting cyber threats—including threats to critical infrastructure such as the grid. In addition, as the sector-specific agency, DOE has led the development of approaches to implement the federal cybersecurity strategy for the energy sector, including the grid. Table 2 identifies and describes these approaches—specifically, two agency plans and an assessment—for addressing grid cybersecurity risks and challenges. The two plans and the assessment do not fully address all of the key characteristics needed for a national strategy. Collectively, the plans and assessment fully address one characteristic—purpose, scope, and methodology—and partially address the other four characteristics of a national strategy (see table 3).

Purpose, scope, and methodology

The plans and assessment fully address the characteristic of outlining their purpose, scope, and methodology. For example, the Energy Sector-Specific Plan explains that it was produced to help integrate and guide the sector’s continuing effort to improve the security and resilience of critical infrastructure. In addition, the plan explains that DOE worked closely with the Energy Sector Coordinating Council and the Energy Sector Government Coordinating Council, among others, to develop the plan.
Problem definition and risk assessment

The plans and the assessment partially address the characteristic of defining the problem and performing a risk assessment. Each defines the problems that it was intended to address and assesses cybersecurity risks to the grid. For example, DOE’s Assessment of Electricity Disruption Incident Response Capabilities states that it was developed in response to Executive Order 13800’s requirement that DOE examine the potential scope and duration of a prolonged power outage associated with a significant cyber incident. In addition, as previously mentioned, the assessment describes the potential range of load loss resulting from four cyberattack scenarios. However, the discussion of the quality of data available regarding DOE’s assessment is inaccurate. According to the assessment, the potential range of load loss resulting from four cyberattack scenarios was based on rigorous modeling and analysis from multiple DOE national laboratory experts. However, these results were based on the 2017 Electricity Subsector Risk Characterization Study, which, as previously described, has significant limitations affecting the quality of data. In addition, neither the plans nor the assessment fully analyzed the cybersecurity risks and challenges to the grid. In particular, none of them analyzed the threat of, and vulnerabilities to, a cyberattack spanning all three interconnections. In addition, the initiatives did not assess the vulnerability of the grid to a cyberattack involving high-wattage consumer IoT devices connected to the grid’s distribution system.

Goals, subordinate objectives, activities, and performance measures

The two plans partially address the characteristic of outlining goals, subordinate objectives, activities, priorities, milestones, and performance measures. Both plans outline the goals, objectives, and activities for addressing cybersecurity risks facing the electric grid.
For example, the Energy Sector-Specific Plan describes five goals for the energy sector and three related priorities for the electricity subsector. However, the plans’ goals, objectives, and activities do not fully address the cybersecurity risks to the grid. For example, neither plan includes goals and activities that address the vulnerability of the grid to a cyberattack involving high-wattage consumer IoT devices connected to the grid’s distribution system. Further, in light of the previously identified gaps in the analysis of cybersecurity risks and challenges, the plans’ goals, objectives, and activities are likely not commensurate with grid cybersecurity risks and challenges. Moreover, only one of the plans—DOE’s Multiyear Plan for Energy Sector Cybersecurity—includes milestones and performance measures for achieving the goals, objectives, and activities. Additionally, this plan does not include performance measures with measurable targets for all objectives, including those aimed at providing timely cyber threat briefings to energy sector partners and developing cyber incident response processes and procedures.

Discussion of needed resources and investments

The two plans partially address the characteristic of describing resource and investment needs. Specifically, although the plans identify many resources and investments needed to achieve their goals and objectives, they do not fully identify resource and investment needs. For example, one of the objectives of DOE’s Multiyear Plan for Energy Sector Cybersecurity is to establish a coordinated national cyber incident response capability for the energy sector. However, the plan does not describe the resources or investments needed to meet this objective. This is of particular concern because, as previously mentioned, the Fixing America’s Surface Transportation Act of 2015 authorized DOE to order emergency measures, following a Presidential declaration of a grid security emergency, to protect or restore the reliability of critical electric infrastructure.
In addition, the plans do not describe specific investment costs associated with carrying them out. For example, DOE’s Multiyear Plan for Energy Sector Cybersecurity describes the need to develop a laboratory for identifying and analyzing cybersecurity vulnerabilities to energy delivery systems. However, the plan does not identify the specific costs associated with this investment. Further, given the previously discussed gaps in risk analysis, goals, and objectives, it is unclear to what extent the identified resources and investment needs are sufficient to address electric grid cybersecurity risks and challenges.

Roles, responsibilities, and coordination

The two plans partially address the characteristic of describing roles, responsibilities, and coordination mechanisms for carrying out the goals, objectives, and activities. Specifically, the plans describe mechanisms for coordinating but do not always identify organizations responsible for achieving the goals, objectives, and activities. For example, DOE’s Multiyear Plan for Energy Sector Cybersecurity states that the department will partner with DOE’s national laboratories to carry out several activities in the plan. However, the plan does not indicate which of the 10 national laboratories DOE will partner with for each activity. In a written response, DOE explained that executive branch documents that outline the broader federal strategy for confronting cyber threats—such as the National Cyber Strategy and the DHS Cybersecurity Strategy—address the key characteristics of a national strategy not addressed in DOE’s plans and assessment. In addition, DOE stated that the department’s plans and assessment for addressing risks and challenges facing the grid support and fit within the context of that broader cybersecurity framework while allowing the agency flexibility to accomplish its goals.
Although the broader executive branch strategy documents on confronting cyber threats provide a framework for addressing critical infrastructure cybersecurity risks and challenges, they do not address the specific risks and challenges facing the electric grid. In addition, as previously mentioned, we have reported that these broader executive branch strategy documents also do not include key characteristics of a national strategy. Until DOE ensures it has a plan aimed at implementing the federal cybersecurity strategy relating to the grid that addresses all of the key characteristics of a national strategy—including a full assessment of cybersecurity risks—the guidance the plan provides decision makers in allocating resources to address risks and challenges will likely be limited.

FERC-Approved Standards Do Not Fully Address Grid Cybersecurity Risks

FERC has not ensured that its approved grid cybersecurity standards fully address leading federal guidance for improving critical infrastructure cybersecurity—specifically, the NIST Cybersecurity Framework. In addition, FERC has not evaluated the risk of a coordinated cyberattack on geographically distributed targets in approving the threshold for which grid cyber systems must comply with requirements in the full set of grid cybersecurity standards.

FERC-Approved Standards Do Not Fully Address Leading Federal Guidance for Improving Critical Infrastructure Cybersecurity

The NIST Cybersecurity Framework provides a set of cybersecurity activities, desired outcomes, and applicable references that are common across all critical infrastructure sectors. The framework also states that while it is not exhaustive, it is capable of being extended, allowing organizations, sectors, and other entities to use references that are most appropriate to enable them to manage their cybersecurity risk.
NIST recommends that organizations use the Cybersecurity Framework functions, categories, and subcategories to identify the key controls needed to meet their security objectives (see Table 4 for the functions and categories). To promote widespread adoption of the framework, Executive Order 13636 called for sector-specific agencies to develop mechanisms to encourage the framework’s adoption. In addition, the order called for regulatory agencies to review the framework and determine if current cybersecurity regulatory requirements are sufficient given current and projected risks. However, the FERC-approved cybersecurity standards do not fully address the NIST Cybersecurity Framework’s five functions and associated categories and subcategories. More specifically, the cybersecurity standards substantially address two of the five functions and partially address the remaining three functions. Table 5 depicts the extent to which these standards address the framework’s five functions and 23 categories. (Appendix II contains more detailed information regarding the extent to which the standards address the framework’s 108 subcategories.)

Legend:
●—Fully address: the standards address all of the related subcategories.
◕—Substantially address: the standards address at least two-thirds, but not all, of the related subcategories.
◑—Partially address: the standards address at least one-third, but less than two-thirds, of the related subcategories.
◔—Minimally address: the standards address less than one-third of the related subcategories.
○—Do not address: the standards do not address any of the related subcategories.

As shown in table 5, the FERC-approved cybersecurity standards either fully address or substantially address eight of the 23 categories. For example:

- The standards fully address the identity management, authentication, and access control category by fully addressing seven associated subcategories.
For instance, the standards fully address the subcategories for credentials to be issued, managed, verified, revoked, and audited for authorized devices, users, and processes; network integrity to be protected; and physical access to assets to be managed and protected.

- The standards fully address the response planning category by fully addressing the associated subcategory—a response plan is to be executed during or after an incident.

Conversely, the FERC-approved cybersecurity standards partially address or do not address the remaining 15 of 23 categories. For example:

- The standards partially address the category for supply chain risk management. In particular, the standards fully address associated subcategories for establishing supply chain risk management processes, security measures in contracts with suppliers and third-party partners, and evaluations of suppliers and third-party partners to ensure they meet their contractual obligations. However, the standards do not address subcategories for response and recovery planning and testing with suppliers and third-party providers, and for using the supply chain risk management process to identify, prioritize, and assess suppliers and third-party partners.

- The standards do not address the three subcategories associated with the risk management strategy category. Specifically, the standards do not call for risk management processes to be established, organizational risk tolerance to be determined, or for the risk tolerance to be informed by the organization’s role in critical infrastructure and sector-specific risk analysis.

In a written response, FERC officials said that the agency did not conduct an assessment to determine how the leading practices identified in the NIST Cybersecurity Framework could be applied to the cybersecurity standards.
In addition, FERC officials stated that, while the Commission uses the NIST Cybersecurity Framework as a resource and its approved standards incorporate certain facets of the framework, there is not a one-on-one alignment because the NIST Cybersecurity Framework is not industry specific. According to FERC officials, the framework addresses certain issues outside FERC’s jurisdiction. For example, FERC officials stated that the Commission does not have authority to directly impose obligations on suppliers, vendors, or entities outside its jurisdiction that provide products or services to electric industry stakeholders. However, full implementation of the NIST Cybersecurity Framework does not require regulatory agencies to impose obligations on entities over which the regulatory agencies do not have authority. Framework categories and subcategories that reference suppliers and vendors call for the organization responsible for implementing the framework to establish and implement processes for managing cybersecurity risks relating to those suppliers and vendors. In addition, in a written response, NERC officials disagreed with our assessment and stated that a separate comparison by NERC subject matter experts found substantially more overlap between the FERC-approved cybersecurity standards and the NIST Cybersecurity Framework. Moreover, NERC officials said that the intended purpose of the standards differs from the framework’s voluntary nature, and that NERC must ensure all mandatory standards are auditable and implemented by electric utilities nationwide. The officials noted the importance of the NIST Cybersecurity Framework and emphasized that NERC has considered the framework in developing and updating grid cybersecurity standards. However, we believe our analysis accurately reflects the extent that the FERC-approved standards address the NIST Cybersecurity Framework.
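The five ratings in the table 5 legend reduce to a simple rule over the fraction of subcategories a set of standards addresses. The following sketch expresses that rule as arithmetic; the function and example counts are illustrative only and are not a tool used in this report:

```python
def rating(addressed: int, total: int) -> str:
    """Map a count of addressed subcategories to the legend's rating.

    Thresholds follow the legend: all -> fully address; at least
    two-thirds (but not all) -> substantially address; at least
    one-third -> partially address; more than zero -> minimally
    address; zero -> do not address.
    (Hypothetical helper for illustration only.)
    """
    if addressed == total:
        return "fully address"
    frac = addressed / total
    if frac >= 2 / 3:
        return "substantially address"
    if frac >= 1 / 3:
        return "partially address"
    if frac > 0:
        return "minimally address"
    return "do not address"

# Example: a category with 7 subcategories, 5 of which are addressed
print(rating(5, 7))  # substantially address (5/7 is above two-thirds)
```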
Without a full consideration of how the FERC-approved cybersecurity standards address NIST’s Cybersecurity Framework, there is increased risk that bulk power entities will not fully implement leading cybersecurity practices intended to help critical infrastructure entities address cybersecurity risks. FERC Has Not Evaluated the Risk of Geographically Distributed Cyberattacks in Approving the Threshold for Required Compliance with All Cybersecurity Standards As previously mentioned, FERC requires cyber systems affecting a generation capacity of 1,500 megawatts or more to comply with requirements in the full set of approved cybersecurity standards since the loss, compromise, or misuse of those systems could have a medium to high impact on the reliable operation of the bulk electric system. FERC approved the 1,500-megawatt threshold based on the results of a NERC analysis. Specifically, NERC staff selected a threshold value based on the loss of one large electric grid asset from a single disruptive event and assumed a loss of power could be compensated, in part, by power from a neighboring region. However, the analysis did not evaluate the potential risk of a coordinated cyberattack on geographically distributed targets. A coordinated cyberattack could cause multiple power plants, transmission lines, or related grid components in different regions to disconnect from the grid. Such a cyberattack could target, for example, a combination of low-impact systems, each affecting a generation capacity below 1,500 megawatts that, in aggregate, might present a significant risk to the grid. FERC officials told us that the agency considered but did not evaluate the potential impact of a coordinated cyberattack on geographically distributed targets at the time it approved the threshold because the agency did not have the information it needed to develop a credible threat scenario.
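The aggregation concern described above can be illustrated with a minimal numeric sketch; all plant capacities below are hypothetical and are not drawn from the NERC analysis:

```python
# Hypothetical illustration: several "low-impact" facilities, each individually
# below the 1,500-MW threshold for compliance with the full set of
# FERC-approved standards, can in aggregate exceed that threshold.
THRESHOLD_MW = 1_500

# Assumed capacities of geographically distributed, sub-threshold plants.
low_impact_plants_mw = [600, 550, 480, 700]

aggregate_mw = sum(low_impact_plants_mw)
each_below_threshold = all(mw < THRESHOLD_MW for mw in low_impact_plants_mw)

print(each_below_threshold)         # True: no single plant triggers full compliance
print(aggregate_mw)                 # 2330
print(aggregate_mw > THRESHOLD_MW)  # True: the aggregate loss exceeds the threshold
```

In this sketch, no single plant is subject to the full set of standards, yet a coordinated attack on all four would remove more capacity than the single large asset the threshold was designed around.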
FERC officials said they anticipate that a future update to the approved cybersecurity standards may require the collection of relevant data on suspicious cyber activity that could inform a threat scenario for evaluating the potential impact of a coordinated cyberattack on geographically distributed targets. Further, NERC officials told us that, while NERC has not determined that a modification of the 1,500-megawatt threshold is warranted at this time, they continue to monitor the risk of a coordinated cyberattack against multiple low-impact systems and acknowledged that the FERC-approved standards must adapt with the evolving understanding of cyber threats. In addition, NERC officials explained in a written response that the intent of the 1,500-megawatt threshold is to ensure that industrial control systems with vulnerabilities that are attributable to a common cause (e.g., cybersecurity vulnerabilities in common hardware or software) that could result in the loss of 1,500 megawatts or more of generation capacity are adequately protected. Those officials added that NERC encourages entities to disaggregate their industrial control systems so that individual systems operate and maintain less than 1,500 megawatts of generation capacity. NERC officials noted that the systems associated with the disaggregated generation capacity are very diverse and are therefore less likely to provide any large single point of failure. NERC officials further explained that this disaggregation minimizes the risk to the grid by requiring a malicious actor to conduct a cyberattack on more facilities to achieve a similar loss of power. However, encouraging grid entities to design industrial control systems so that individual systems operate and maintain less than 1,500 megawatts of generation capacity could still leave the grid vulnerable to a cyberattack on those systems.
For example, although a malicious actor may need to attack more systems that fall under the threshold at multiple locations to achieve the attacker’s objective for loss of power (when compared with systems that meet or exceed the threshold), the difficulty of carrying out an attack on additional systems could be less significant if the attacker identifies and exploits vulnerabilities common across the systems. In addition, as previously mentioned, systems that fall under the 1,500-megawatt threshold are not required to follow all of the requirements of the FERC-approved cybersecurity standards; as such, there is increased risk that important security controls have not been implemented for these systems. According to federal standards for internal control, management should identify, analyze, and respond to risks related to achieving organizational objectives. For example, management comprehensively identifies risks that affect its objectives and analyzes the identified risks to estimate their significance, which provides a basis for responding to the risks. Without information on the risk of a coordinated cyberattack on geographically distributed targets, FERC does not have assurance that its approved threshold for mandatory compliance with all cybersecurity standards adequately responds to that risk and sufficiently provides for the reliable operation of the grid. Conclusions The U.S. electric grid faces an increasing array of cybersecurity risks, as well as significant challenges to addressing those risks. To their credit, federal agencies have performed a variety of critical infrastructure protection and regulatory activities aimed at addressing those risks. In particular, DOE has developed plans and an assessment aimed at implementing the federal strategy for confronting the cyber threats facing the grid.
However, those documents do not fully address all of the key characteristics needed to implement a national strategy, including a full assessment of cybersecurity risks to the grid. Until DOE ensures it has a plan that does, the guidance the plan provides decision makers in allocating resources to address grid cybersecurity risks and challenges will likely be limited. Additionally, FERC has approved mandatory cybersecurity standards for bulk power entities, but those standards address some but not all of the leading cybersecurity practices identified in NIST’s Cybersecurity Framework. Without a full consideration of how the FERC-approved cybersecurity standards address NIST’s Cybersecurity Framework, there is increased risk that bulk power entities will not fully implement leading cybersecurity practices needed to address current and projected risks. Finally, the threshold for which entities must comply with requirements in the full set of FERC-approved standards is based on the results of an analysis that did not evaluate the potential risk of a coordinated cyberattack on geographically distributed targets. Without information on the risk of such an attack—particularly one that might target low-impact systems that are subject to fewer requirements but in aggregate could affect the grid—FERC does not have assurance that its approved threshold for mandatory compliance adequately responds to that risk and sufficiently provides for the reliable operation of the electric grid. Recommendations for Executive Action We are making a total of three recommendations—one to DOE and two to FERC. Specifically: The Secretary of Energy, in coordination with DHS and other relevant stakeholders, should develop a plan aimed at implementing the federal cybersecurity strategy for the electric grid and ensure that the plan addresses the key characteristics of a national strategy, including a full assessment of cybersecurity risks to the grid. 
(Recommendation 1) FERC should consider our assessment and determine whether to direct NERC to adopt any changes to its cybersecurity standards to ensure those standards more fully address the NIST Cybersecurity Framework and address current and projected risks. (Recommendation 2) FERC should (1) evaluate the potential risk of a coordinated cyberattack on geographically distributed targets and, (2) based on the results of that evaluation, determine whether to direct NERC to make any changes to the threshold for mandatory compliance with requirements in the full set of cybersecurity standards. (Recommendation 3) Agency Comments, Third-Party Views, and Our Evaluation We provided a draft of this report for review and comment to DOE and FERC—the two agencies to which we made recommendations—as well as DHS, the Department of Commerce (on behalf of NIST), and NERC. DOE and FERC agreed with our recommendations, DHS and the Department of Commerce stated that they had no comments, and NERC disagreed with one of our findings. DOE and FERC agreed with our recommendations. In its written comments, reproduced in appendix III, DOE concurred with our recommendation and stated that it is working through an interagency process to develop a National Cyber Strategy Implementation Plan that will consider DOE’s Multiyear Plan for Energy Sector Cybersecurity. In its written comments, reproduced in appendix IV, FERC stated that our recommendations were constructive and that it would take steps to implement them. DOE and FERC also provided technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix V, NERC stated that it disagreed with our conclusion that the FERC-approved cybersecurity standards do not fully address the NIST Cybersecurity Framework. NERC recognized the importance of the NIST Cybersecurity Framework and emphasized that NERC has considered the framework in developing and updating its grid cybersecurity standards.
However, NERC stated that a separate analysis by NERC subject matter experts found substantially more overlap between the standards and the framework than our analysis. In addition, NERC cited a 2011 GAO report that found that the FERC-approved standards, in combination with NERC supplementary guidance, mostly addressed the information security controls in certain NIST guidance at that time. We reviewed NERC’s analysis comparing the FERC-approved cybersecurity standards to the NIST Cybersecurity Framework and continue to believe our analysis accurately reflects the extent to which the standards address the framework. Further, in this report we assessed the extent to which the FERC-approved standards addressed the NIST Cybersecurity Framework, which is more recent and broader guidance than the NIST guidance that we examined in our 2011 report. In its comments, NERC also stated it has not determined that any changes are needed to the threshold for mandatory compliance with the full set of cybersecurity standards at this time, but it agrees with the concern that low-impact systems may be more vulnerable to a cyberattack and will continue to evaluate whether the current threshold is appropriate given evolving cybersecurity risks. For example, NERC explained that it is studying cybersecurity supply chain risks, including those associated with low-impact assets not currently subject to its supply chain standards. We believe that this effort could help to better position electric grid entities to address supply chain cybersecurity risks. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Commerce, Energy, and Homeland Security, the Chairman of FERC, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov, and Nick Marinos at (202) 512-9342 or marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) describe the cybersecurity risks and challenges facing the electric grid, (2) describe federal efforts to address grid cybersecurity risks, (3) assess the extent to which the Department of Energy (DOE) has a defined strategy for addressing grid cybersecurity risks and challenges, and (4) assess the extent to which Federal Energy Regulatory Commission (FERC)-approved cybersecurity standards address grid cybersecurity risks. To address our first objective, we developed a list of cyber actors that could pose a threat to the grid, identified vulnerable components and processes that could be exploited, reviewed the potential impact of cyberattacks on the grid, and identified key cybersecurity challenges facing the grid. To develop the list of cyber threat actors, we reviewed our prior work on cyber-based threats facing the grid as well as the threats identified by the 2019 Worldwide Threat Assessment of the U.S. Intelligence Community. In addition, we interviewed officials or representatives from the following key federal and nonfederal entities to confirm, add, or remove cyber threat actors identified in our prior work based on their potential impact on grid operations: Federal agencies. We interviewed officials from DOE, the Department of Homeland Security (DHS), FERC, and the National Institute of Standards and Technology (NIST). Nonfederal regulatory organizations. 
We interviewed representatives of the North American Electric Reliability Corporation (NERC). Grid owners and operators. We interviewed five grid owners and operators. To select these grid owners and operators, we reviewed a membership list of the Electricity Subsector Coordinating Council as of May 2018, divided that list into three categories—investor-owned, municipal, and cooperative utilities—and then randomly selected entities from each of those three categories to interview. The views of the grid owners and operators we selected are not generalizable to the population of utilities in the United States but provide valuable insight into the cybersecurity risks and challenges grid owners and operators face. National associations. We interviewed representatives of national associations that represent various types of asset owners, entities with regulatory or state interests, and those with grid cybersecurity interests generally. Specifically, we interviewed representatives from the American Public Power Association, Edison Electric Institute, Electric Power Research Institute, Independent System Operator/Regional Transmission Operator Coordinating Council, National Rural Electric Cooperative Association, National Association of Regulatory Utility Commissioners, National Association of State Energy Officials, and North American Transmission Forum Association. The views of the association representatives are not generalizable to the industry but provide valuable insight into the cybersecurity risks and challenges facing the grid. To identify grid cybersecurity vulnerabilities, we reviewed reports developed by key federal and nonfederal entities and others related to grid vulnerabilities and met with the key federal and nonfederal entities to understand the scale and complexity of these vulnerabilities. We also compiled DHS-provided advisories from 2010 through 2018 related to industrial control system devices. 
We then summarized information from the DHS website to determine how many advisories DHS issued per year. With respect to the potential impact of cyberattacks, we reviewed cybersecurity incidents reported to DOE, DHS, and NERC from 2014 through 2018. We also asked these agencies for information on any cybersecurity incidents that occurred prior to 2014 or after 2018 that affected the reliability or availability of the grid. In addition, we reviewed federal reports on cyberattacks that caused power outages in foreign countries and a report developed by the German government regarding a cyberattack on industrial control systems that damaged a German steel mill. Further, we reviewed federal studies assessing the potential for widespread power outages resulting from cyberattacks, and we met with federal officials to discuss the methodologies used to perform these studies. Finally, to identify key cybersecurity challenges facing the grid, we reviewed our prior reports on such challenges as well as federal and industry reports recommended by entities we met with. We also asked the key federal and nonfederal entities to identify challenges facing grid entities in addressing cybersecurity risks, and we compiled the challenges they most often cited. To address the second objective, we identified critical infrastructure protection and regulatory actions that federal agencies are taking to address grid cybersecurity risks by reviewing federal strategies, plans, and reports describing activities that have been conducted or that are under way and by interviewing the key federal and nonfederal entities to obtain additional details on these activities. We also reviewed FERC-approved cybersecurity standards for the bulk power system. We then categorized critical infrastructure protection activities using the functions in NIST’s Framework for Improving Critical Infrastructure Cybersecurity (commonly referred to as NIST’s Cybersecurity Framework).
For our third objective, we reviewed two DOE-led plans and one assessment aimed at implementing the federal cybersecurity strategy for the energy sector, including the grid. We then compared those plans and assessment with leading practices identified by GAO on key characteristics for a national strategy. In doing so, we assessed each characteristic as follows: fully addresses—the plan or assessment addresses all aspects of the characteristic, partially addresses—the plan or assessment addresses some but not all of the characteristic, or does not address—the plan or assessment does not address any aspects of the characteristic. We also provided our analysis to DOE officials to review, comment, and provide additional information. For our fourth objective, we compared the FERC-approved cybersecurity standards with leading federal practices for addressing critical infrastructure cybersecurity risks identified in NIST’s Cybersecurity Framework. Specifically, a GAO analyst compared the FERC-approved cybersecurity standards with the subcategories in the Cybersecurity Framework, and another GAO analyst reviewed and confirmed the results of that analysis. We then summarized the results of these assessments for each of the framework’s five functions, 23 categories, and 108 subcategories as follows: fully address—the standards address all of the related subcategories; substantially address—the standards address at least two-thirds, but not all, of the related subcategories; partially address—the standards address at least one-third, but less than two-thirds, of the related subcategories; minimally address—the standards address less than one-third of the related subcategories; or do not address—the standards do not address any of the related subcategories. We also provided our analysis to FERC and NERC officials to review, comment, and provide additional information.
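The five-level scale above can be expressed as a small classifier. The function below is our own illustrative sketch of the arithmetic, not part of GAO’s published methodology; its name and inputs are assumptions:

```python
from fractions import Fraction

def rate_category(addressed: int, total: int) -> str:
    """Classify a framework category by the share of its subcategories
    that the standards address, using the five-level scale."""
    if total <= 0:
        raise ValueError("a category must contain at least one subcategory")
    share = Fraction(addressed, total)  # exact arithmetic avoids float boundary errors
    if share == 1:
        return "fully address"
    if share >= Fraction(2, 3):         # at least two-thirds, but not all
        return "substantially address"
    if share >= Fraction(1, 3):         # at least one-third, less than two-thirds
        return "partially address"
    if share > 0:                       # more than none, less than one-third
        return "minimally address"
    return "do not address"

print(rate_category(3, 5))  # partially address
```

Using exact fractions matters at the boundaries: a category with exactly two-thirds of its subcategories addressed (e.g., 2 of 3) rates "substantially address," while exactly one-third rates "partially address."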
We also examined the applicability of the FERC-approved cybersecurity standards to non-nuclear power plants and reviewed FERC and NERC information on the analytical basis for the 1,500-megawatt threshold. To calculate the number and aggregate capacity of plants that met the 1,500-megawatt threshold for complying with all FERC-approved cybersecurity standards, we used data from Form EIA-860, “Annual Electric Generator Report,” which includes U.S. plants with generators having nameplate capacity of 1 megawatt or greater. As a proxy for the net real power capability specified in the standards, we selected the generator’s net summer generating capacity. To calculate a total capacity for each individual power plant, we combined the data on the capacity of each plant’s individual operating electric power generators. We then filtered these data to identify plants whose primary purpose is generating electricity for sale as reported on the Form EIA-860. Ultimately, we compared the number and capacity of non-nuclear plants exceeding the 1,500-megawatt threshold to the total number and total U.S. capacity for plants. We used U.S. Energy Information Administration (EIA) data to estimate the number and capacity of non-nuclear plants exceeding the 1,500-megawatt threshold. To assess the reliability of these data, we reviewed EIA documentation, discussed the quality of the data with EIA officials, and electronically tested the data set for missing data, outliers, or obvious errors. Based on this assessment, we determined that the EIA data were sufficiently reliable for our purposes. We conducted this performance audit from January 2018 to August 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
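The plant-level capacity calculation described in the methodology above can be sketched as follows. The record layout and field names are assumptions standing in for the actual Form EIA-860 fields, and the generator records are hypothetical:

```python
from collections import defaultdict

THRESHOLD_MW = 1_500

# Hypothetical generator-level records: one row per operating generator, with
# the plant it belongs to, its net summer capacity (a proxy for net real power
# capability), and whether the plant's primary purpose is generating
# electricity for sale.
generators = [
    {"plant_id": "A", "summer_capacity_mw": 900.0, "electricity_for_sale": True},
    {"plant_id": "A", "summer_capacity_mw": 750.0, "electricity_for_sale": True},
    {"plant_id": "B", "summer_capacity_mw": 400.0, "electricity_for_sale": True},
    {"plant_id": "C", "summer_capacity_mw": 2000.0, "electricity_for_sale": False},
]

# Combine each plant's operating generators into a total plant capacity,
# keeping only plants whose primary purpose is generating electricity for sale.
plant_capacity = defaultdict(float)
for gen in generators:
    if gen["electricity_for_sale"]:
        plant_capacity[gen["plant_id"]] += gen["summer_capacity_mw"]

# Identify plants meeting or exceeding the 1,500-megawatt threshold.
over_threshold = {p: mw for p, mw in plant_capacity.items() if mw >= THRESHOLD_MW}
print(over_threshold)  # {'A': 1650.0}
```

In this sketch, plant A’s two generators sum to 1,650 MW and cross the threshold, plant B does not, and plant C is excluded by the primary-purpose filter before the comparison.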
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Assessment of the Extent FERC-Approved Cybersecurity Standards Address the NIST Cybersecurity Framework The table below provides additional detail on our assessment of the extent to which Federal Energy Regulatory Commission (FERC)-approved cybersecurity standards address the National Institute of Standards and Technology’s (NIST) Framework for Improving Critical Infrastructure Cybersecurity’s (commonly known as the NIST Cybersecurity Framework) 23 categories and 108 subcategories. ID.AM-1: Physical devices and systems within the organization are inventoried. ID.AM-2: Software platforms and applications within the organization are inventoried. ID.AM-3: Organizational communication and data flows are mapped. ID.AM-4: External information systems are catalogued. ID.AM-5: Resources (e.g., hardware, devices, data, time, personnel, and software) are prioritized based on their classification, criticality, and business value. ID.AM-6: Cybersecurity roles and responsibilities for the entire workforce and third-party stakeholders (e.g., suppliers, customers, and partners) are established. ID.BE-1: The organization’s role in the supply chain is identified and communicated. ID.BE-2: The organization’s place in critical infrastructure and its industry sector is identified and communicated. ID.BE-3: Priorities for organizational mission, objectives, and activities are established and communicated. ID.BE-4: Dependencies and critical functions for delivery of critical services are established. ID.BE-5: Resilience requirements to support delivery of critical services are established for all operating states (e.g. under duress/attack, during recovery, normal operations). ID.GV-1: Organizational cybersecurity policy is established and communicated.
ID.GV-2: Cybersecurity roles and responsibilities are coordinated and aligned with internal roles and external partners. ID.GV-3: Legal and regulatory requirements regarding cybersecurity, including privacy and civil liberties obligations, are understood and managed. ID.GV-4: Governance and risk management processes address cybersecurity risks. ID.RA-1: Asset vulnerabilities are identified and documented. ID.RA-2: Cyber threat intelligence is received from information-sharing forums and sources. ID.RA-3: Threats, both internal and external, are identified and documented. ID.RA-4: Potential business impacts and likelihoods are identified. ID.RA-5: Threats, vulnerabilities, likelihoods, and impacts are used to determine risk. ID.RA-6: Risk responses are identified and prioritized. Risk Management Strategy (ID.RM): The organization’s priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions. ID.RM-1: Risk management processes are established, managed, and agreed to by organizational stakeholders. ID.RM-2: Organizational risk tolerance is determined and clearly expressed. ID.RM-3: The organization’s determination of risk tolerance is informed by its role in critical infrastructure and sector-specific risk analysis. Supply Chain Risk Management (ID.SC): The organization’s priorities, constraints, risk tolerances, and assumptions are established and used to support risk decisions associated with managing supply chain risk. The organization has established and implemented the processes to identify, assess, and manage supply chain risks. ID.SC-1: Cyber supply chain risk management processes are identified, established, assessed, managed, and agreed to by organizational stakeholders. ID.SC-2: Suppliers and third-party partners of information systems, components, and services are identified, prioritized, and assessed using a cyber supply chain risk assessment process. 
ID.SC-3: Contracts with suppliers and third-party partners are used to implement appropriate measures designed to meet the objectives of an organization’s cybersecurity program and Cyber Supply Chain Risk Management Plan. ID.SC-4: Suppliers and third-party partners are routinely assessed using audits, test results, or other forms of evaluations to confirm they are meeting their contractual obligations. ID.SC-5: Response and recovery planning and testing are conducted with suppliers and third-party providers. Identity Management, Authentication and Access Control (PR.AC): Access to physical and logical assets and associated facilities is limited to authorized users, processes, and devices and is managed consistent with the assessed risk of unauthorized access to authorized activities and transactions. PR.AC-1: Identities and credentials are issued, managed, verified, revoked, and audited for authorized devices, users, and processes. PR.AC-2: Physical access to assets is managed and protected. PR.AC-3: Remote access is managed. PR.AC-4: Access permissions and authorizations are managed, incorporating the principles of least privilege and separation of duties. PR.AC-5: Network integrity is protected (e.g. network segregation and network segmentation). PR.AC-6: Identities are proofed and bound to credentials and asserted in interactions. PR.AC-7: Users, devices, and other assets are authenticated (e.g., single-factor, multi-factor) commensurate with the risk of the transaction (e.g., individuals’ security and privacy risks and other organizational risks). Awareness and Training (PR.AT): The organization’s personnel and partners are provided cybersecurity awareness education and are adequately trained to perform their cybersecurity-related duties and responsibilities consistent with related policies, procedures, and agreements. PR.AT-1: All users are informed and trained. PR.AT-2: Privileged users understand their roles and responsibilities.
PR.AT-3: Third-party stakeholders (e.g., suppliers, customers, and partners) understand their roles and responsibilities. PR.AT-4: Senior executives understand their roles and responsibilities. PR.AT-5: Physical and cybersecurity personnel understand their roles and responsibilities. PR.DS-1: Data-at-rest is protected. PR.DS-2: Data-in-transit is protected. PR.DS-3: Assets are formally managed throughout removal, transfers, and disposition. PR.DS-4: Adequate capacity to ensure availability is maintained. PR.DS-5: Protections against data leaks are implemented. PR.DS-6: Integrity checking mechanisms are used to verify software, firmware, and information integrity. PR.DS-7: The development and testing environment(s) are separate from the production environment. PR.DS-8: Integrity checking mechanisms are used to verify hardware integrity. PR.IP-1: A baseline configuration of information technology/industrial control systems is created and maintained incorporating security principles (e.g. concept of least functionality). PR.IP-2: A System Development Life Cycle to manage systems is implemented. PR.IP-3: Configuration change control processes are in place. PR.IP-4: Backups of information are conducted, maintained, and tested periodically. PR.IP-5: Policy and regulations regarding the physical operating environment for organizational assets are met. PR.IP-6: Data are destroyed according to policy. PR.IP-7: Protection processes are continuously improved. PR.IP-8: Effectiveness of protection technologies is shared with appropriate parties. PR.IP-9: Response plans (Incident Response and Business Continuity) and recovery plans (Incident Recovery and Disaster Recovery) are in place and managed. PR.IP-10: Response and recovery plans are tested. PR.IP-11: Cybersecurity is included in human resources practices (e.g., deprovisioning and personnel screening). PR.IP-12: A vulnerability management plan is developed and implemented.
Maintenance (PR.MA): Maintenance and repairs of industrial control and information system components are performed consistent with policies and procedures. PR.MA-1: Maintenance and repair of organizational assets are performed and logged, with approved and controlled tools. PR.MA-2: Remote maintenance of organizational assets is approved, logged, and performed in a manner that prevents unauthorized access. PR.PT-1: Audit/log records are determined, documented, implemented, and reviewed in accordance with policy. PR.PT-2: Removable media is protected and its use restricted according to policy. PR.PT-3: The principle of least functionality is incorporated by configuring systems to provide only essential capabilities. PR.PT-4: Communications and control networks are protected. PR.PT-5: Mechanisms (e.g., failsafe, load balancing, hot swap) are implemented to achieve resilience requirements in normal and adverse situations. Anomalies and Events (DE.AE): Anomalous activity is detected and the potential impact of events is understood. DE.AE-1: A baseline of network operations and expected data flows for users and systems is established and managed. DE.AE-2: Detected events are analyzed to understand attack targets and methods. DE.AE-3: Event data are aggregated, collected, and correlated from multiple sources and sensors. DE.AE-4: Impact of events is determined. DE.AE-5: Incident alert thresholds are established. Security Continuous Monitoring (DE.CM): The information system and assets are monitored at discrete intervals to identify cybersecurity events and verify the effectiveness of protective measures. DE.CM-1: The network is monitored to detect potential cybersecurity events. DE.CM-2: The physical environment is monitored to detect potential cybersecurity events. DE.CM-3: Personnel activity is monitored to detect potential cybersecurity events. DE.CM-4: Malicious code is detected. DE.CM-5: Unauthorized mobile code is detected. 
DE.CM-6: External service provider activity is monitored to detect potential cybersecurity events. DE.CM-7: Monitoring for unauthorized personnel, connections, devices, and software is performed. DE.CM-8: Vulnerability scans are performed. Detection Processes (DE.DP): Detection processes and procedures are maintained and tested to ensure awareness of anomalous events. DE.DP-1: Roles and responsibilities for detection are well defined to ensure accountability. DE.DP-2: Detection activities comply with all applicable requirements. DE.DP-3: Detection processes are tested. DE.DP-4: Event detection information is communicated to appropriate parties. DE.DP-5: Detection processes are continuously improved. Response Planning (RS.RP): Response processes and procedures are executed and maintained, to ensure response to detected cybersecurity incidents. RS.RP-1: Response plan is executed during or after an incident. RS.CO-1: Personnel know their roles and order of operations when a response is needed. RS.CO-2: Incidents are reported consistent with established criteria. RS.CO-3: Information is shared consistent with response plans. RS.CO-4: Coordination with stakeholders occurs consistent with response plans. RS.CO-5: Voluntary information sharing occurs with external stakeholders to achieve broader cybersecurity situational awareness. Analysis (RS.AN): Analysis is conducted to ensure effective response and support recovery activities. RS.AN-1: Notifications from detection systems are investigated. RS.AN-2: The impact of the incident is understood. RS.AN-3: Forensics are performed. RS.AN-4: Incidents are categorized consistent with response plans. RS.AN-5: Processes are established to receive, analyze, and respond to vulnerabilities disclosed to the organization from internal and external sources (e.g. internal testing, security bulletins, or security researchers). RS.MI-1: Incidents are contained. RS.MI-2: Incidents are mitigated.
RS.MI-3: Newly identified vulnerabilities are mitigated or documented as accepted risks. Improvements (RS.IM): Organizational response activities are improved by incorporating lessons learned from current and previous detection/response activities. RS.IM-1: Response plans incorporate lessons learned. RS.IM-2: Response strategies are updated. Recovery Planning (RC.RP): Recovery processes and procedures are executed and maintained to ensure timely restoration of systems or assets affected by cybersecurity events. RC.RP-1: Recovery plan is executed during or after a cybersecurity event. Improvements (RC.IM): Recovery planning and processes are improved by incorporating lessons learned into future activities. RC.IM-1: Recovery plans incorporate lessons learned. RC.IM-2: Recovery strategies are updated. RC.CO-1: Public relations are managed. RC.CO-2: Reputation after an event is repaired. RC.CO-3: Recovery activities are communicated to internal and external stakeholders as well as to executive and management teams. Legend: ●—Fully address: the standards address all of the related subcategories. ◕—Substantially address: the standards address at least two-thirds, but not all, of the related subcategories. ◑—Partially address: the standards address at least one-third, but less than two-thirds, of the related subcategories. ◔—Minimally address: the standards address less than one-third of the related subcategories.○—Do not address: the standards do not address any of the related subcategories. 
Appendix IV: Comments from the Federal Energy Regulatory Commission
Appendix V: Comments from the North American Electric Reliability Corporation
Appendix VI: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contact named above, Kaelin Kuhn (Assistant Director), David Marroni (Assistant Director), Andrew Moore (Analyst in Charge), Dino Papanastasiou (Analyst in Charge), David Aja, Christopher Businsky, Kendall Childers, Travis Conley, Rebecca Eyler, Philip Farah, Jonathan Felbinger, Quindi Franco, Wil Gerard, Cindy Gilbert, Mike Gilmore, Andrew Howard, Paul Kazemersky, Lisa Maine, Carlo Mozo, Cynthia Norris, Sukhjoot Singh, Adam Vodraska, and Jarrod West made key contributions to this report.
Why GAO Did This Study
The nation's electric grid—the commercial electric power generation, transmission, and distribution system comprising power lines and other infrastructure—delivers the electricity that is essential for modern life. As a result, the reliability of the grid—its ability to meet consumers' electricity demand at all times—has been of long-standing national interest. GAO was asked to review the cybersecurity of the grid. Among other things, this report (1) describes the cybersecurity risks facing the grid, (2) assesses the extent to which DOE has defined a strategy for addressing grid cybersecurity risks, and (3) assesses the extent to which FERC-approved standards address grid cybersecurity risks. To do so, GAO developed a list of cyber actors that could pose a threat to the grid; identified key vulnerable components and processes that could be exploited; and reviewed studies on the potential impact of cyberattacks on the grid, drawing on prior GAO and industry reports as well as interviews with representatives from federal and nonfederal entities. GAO also analyzed DOE's approaches to implementing a federal cybersecurity strategy for the energy sector as it relates to the grid and assessed FERC oversight of cybersecurity standards for the grid.
What GAO Found
The electric grid faces significant cybersecurity risks:
Threat actors. Nations, criminal groups, terrorists, and others are increasingly capable of attacking the grid.
Vulnerabilities. The grid is becoming more vulnerable to cyberattacks—particularly those involving industrial control systems that support grid operations. (The figure below is a high-level depiction of ways in which an attacker could compromise industrial control systems.) The increasing adoption of high-wattage consumer Internet of Things devices—“smart” devices connected to the internet—and the use of the global positioning system to synchronize grid operations are also vulnerabilities.
Impacts.
Although cybersecurity incidents reportedly have not resulted in power outages domestically, cyberattacks on industrial control systems have disrupted foreign electric grid operations. In addition, while recent federal assessments indicate that cyberattacks could cause widespread power outages in the United States, the scale of power outages that may result from a cyberattack is uncertain due to limitations in those assessments. Although the Department of Energy (DOE) has developed plans and an assessment to implement a federal strategy for addressing grid cybersecurity risks, these documents do not fully address all of the key characteristics needed for a national strategy. For example, while DOE conducted a risk assessment, that assessment had significant methodological limitations and did not fully analyze grid cybersecurity risks. One such key limitation was that the assessment used a model that covered only a portion of the grid and reflected how that portion existed around 1980. Until DOE has a complete grid cybersecurity plan, the guidance the plan provides to decision makers for allocating resources to address those risks will likely be limited. The Federal Energy Regulatory Commission (FERC)—the regulator for the interstate transmission of electricity—has approved mandatory grid cybersecurity standards. However, it has not ensured that those standards fully address leading federal guidance for critical infrastructure cybersecurity—specifically, the National Institute of Standards and Technology (NIST) Cybersecurity Framework. (See table below for an excerpt of GAO's analysis of two of the five framework functions.) Without a full consideration of the framework, there is increased risk that grid entities will not fully implement leading cybersecurity practices.
In addition, FERC's approved threshold for which entities must comply with the requirements in the full set of grid cybersecurity standards is based on an analysis that did not evaluate the potential risk of a coordinated cyberattack on geographically distributed targets. Such an attack could target, for example, a combination of geographically dispersed systems that each fall below the threshold for complying with the full set of standards. Responding to such an attack could be more difficult than to a localized event since resources may be geographically distributed rather than concentrated in the same area. Without information on the risk of such an attack, FERC does not have assurance that its approved threshold for mandatory compliance adequately responds to that risk.
What GAO Recommends
GAO is making three recommendations—one to DOE and two to FERC. GAO is making a recommendation to DOE to develop a plan aimed at implementing the federal cybersecurity strategy for the grid and ensure that the plan addresses the key characteristics of a national strategy, including a full assessment of cybersecurity risks to the grid. GAO is also making the following two recommendations to FERC:
1. Consider adopting changes to its approved cybersecurity standards to more fully address the NIST Cybersecurity Framework.
2. Evaluate the potential risk of a coordinated cyberattack on geographically distributed targets and, based on the results of that evaluation, determine if changes are needed in the threshold for mandatory compliance with requirements in the full set of cybersecurity standards.
DOE and FERC agreed with GAO’s recommendations.
Background
CERCLA established the Superfund program to clean up contaminated sites to protect human health and the environment from the effects of hazardous substances. CERCLA requires the President to establish procedures and standards for prioritizing and responding to releases of hazardous substances, pollutants, and contaminants into the environment and to incorporate these procedures and substances into the National Oil and Hazardous Substances Pollution Contingency Plan (National Contingency Plan). Under CERCLA, PRPs are liable for conducting or paying for the cleanup of hazardous substances at contaminated sites. EPA and PRPs can undertake two types of cleanup actions: removal actions and remedial actions. Removal actions are usually short-term cleanups for sites that pose immediate threats to human health or the environment. Remedial actions are generally long-term cleanups—consisting of one or more remedial action projects—that aim to permanently and significantly reduce contamination. Remedial actions can take a considerable amount of time and money, depending on the nature of the contamination and other site-specific factors. EPA’s Office of Superfund Remediation and Technology Innovation, which is part of the Office of Land and Emergency Management, oversees remedial actions at NPL sites, including nonfederal NPL sites. At each nonfederal NPL site, the lead official who is responsible for compliance with the National Contingency Plan is the remedial project manager. Management of nonfederal NPL sites is the responsibility of the EPA region in which a site is located. EPA has 10 regional offices, and each one is responsible for executing EPA programs within several states and, in some regions, territories. Figure 1 illustrates EPA’s 10 regions.
The Superfund process begins with the discovery of a potentially hazardous site or notifications to EPA regarding the possible release of hazardous substances that may pose a threat to human health or the environment. EPA’s Superfund remedial cleanup process for nonfederal NPL sites includes the actions shown in figure 2.
Site assessment. EPA, states, tribes, or other federal agencies evaluate site conditions to identify appropriate responses to releases of hazardous substances to the environment. During this process, EPA or other entities, such as state or tribal agencies, collect data to identify, evaluate, and rank sites using agency criteria.
Site listing. EPA considers whether to list a site on the NPL based on a variety of factors, including the availability of alternative state or federal programs that may be used to clean up the site. Sites that EPA proposes to list on the NPL are published in the Federal Register. After a period of public comment, EPA reviews the comments and makes final decisions on whether to list the sites on the NPL.
Remedial investigation and feasibility study. EPA or the PRP will generally begin the remedial cleanup process for an NPL site by conducting a two-part study of the site. First, EPA or the PRP conducts a remedial investigation to characterize site conditions and assess the risks to human health and the environment, among other actions. Second, EPA or the PRP conducts a feasibility study to assess various alternatives to address the problems identified through the remedial investigation. Under the National Contingency Plan, EPA considers nine criteria, including long-term effectiveness and permanence, in its assessment of alternative remedial actions.
Record of decision. EPA issues a record of decision that identifies its selected remedy for addressing the contamination at a site.
A record of decision typically lays out the planned cleanup activities for each operable unit of the site as well as an estimate of the cost of the cleanup.
Remedial design and remedial action. EPA or the PRP plans to implement the selected remedy during the remedial design phase, and then, in the remedial action phase, EPA or the PRP carries out one or more remedial action projects.
Construction completion. EPA generally considers construction of the remedial action to be complete for a site when all physical construction at a site is complete, including actions to address all immediate threats and to bring all long-term threats under control.
Postconstruction completion. EPA, the state, or the PRP performs operation and maintenance for the remedy, if needed, such as by operating a groundwater extraction and treatment system. EPA generally performs reviews of the remedy at least every 5 years to evaluate whether it continues to protect human health and the environment.
Deletion from the NPL. EPA may delete a site, or part of a site, from the NPL when the agency and the relevant state authority determine that no further site response is needed.
Contaminants and Remedies at Nonfederal NPL Sites
Nonfederal NPL sites may include a variety of contaminants, and EPA may select different types of remedies to clean up the sites. EPA had recorded more than 500 contaminants at nonfederal NPL sites as of fiscal year 2014, the most recently available data. According to the Agency for Toxic Substances and Disease Registry, the highest-priority contaminants—based on a combination of their prevalence, toxicity, and potential for human exposure—are arsenic, lead, mercury, vinyl chloride, and polychlorinated biphenyls. For example, in 2016, the Agency for Toxic Substances and Disease Registry reported that exposure to arsenic in drinking water is associated with various health effects, such as pulmonary and cardiovascular disease, diabetes, and certain cancers.
Contaminants may be found in different media at nonfederal NPL sites. In 2017, EPA reported that groundwater and soil were the most common contaminated media at the nonfederal NPL sites it analyzed. To clean up a nonfederal NPL site, EPA may select various on-site or off-site remedies. For example, EPA may select on-site remedies that include treatment as well as those that do not, such as on-site containment, monitored natural recovery, and institutional controls. In 2017, EPA reported that about a quarter of the decision documents for sites it analyzed included on-site treatment. EPA may also treat or dispose of the contamination off-site. Examples of off-site treatment and disposal include incineration and recycling. EPA reported that sites it analyzed may have various combinations of remedies, including treatment, on-site containment, off-site disposal, and institutional controls.
Available Federal Data on Flooding, Storm Surge, Wildfires, and Sea Level Rise
Various federal agencies provide nationwide data on flooding, storm surge from hurricanes, wildfires, and sea level rise. Data on flooding, storm surge, and wildfires are generally based on current or past conditions. NOAA models the extent of inundation for various heights of sea level rise compared to the most recently available data on average high tide.
Flooding
FEMA provides flood hazard and risk information to communities nationwide. Among other information, FEMA provides data on coastal and riverine flooding in the National Flood Hazard Layer, a database that contains the most current flood hazard data. Federal law requires FEMA to assess the need to revise and update the nation’s flood maps at least every 5 years. Among other uses, the flood hazard data are used for flood insurance ratings and floodplain management. The National Flood Hazard Layer identifies areas at the highest risk of flooding, which are those that have a 1 percent or higher annual chance of flooding.
In some locations, the National Flood Hazard Layer also identifies areas with 0.2 percent or higher annual chance of flooding, which FEMA considers to be a moderate flood hazard, as well as other flood hazards. The National Flood Hazard Layer also identifies areas with minimal flood hazard, including those with less than 0.2 percent annual chance of flooding, and unknown flood hazard, including areas FEMA had not assessed for flood hazards. In 2018, the Technical Mapping Advisory Council noted that FEMA has produced modernized data (i.e., digital maps) for areas of the United States where 98 percent of the population resides, but has not determined the flood hazard for 40 percent of streams. In general, flood hazards are based on existing conditions in the watershed and floodplains. However, in certain cases, FEMA may include flood hazard information that is based on future conditions, according to FEMA regulations.
Storm Surge
NOAA provides estimates of hurricane storm surge using a model called Sea, Lake and Overland Surges from Hurricanes. Estimates are available for eastern U.S. coastal areas from Texas through Maine and other areas affected by storm surge, including Hawaii, Puerto Rico, and the U.S. Virgin Islands. As of June 2019, NOAA had not modeled storm surge for the West Coast of the United States or other Pacific islands. The model takes into account a specific locale’s shoreline, incorporating bay and river configurations, water depths, bridges, roads, levees, and other physical features. It estimates the maximum extent of storm surge at high tide by modeling hypothetical hurricanes under different storm conditions, such as landfall location, storm trajectory, and forward speed. NOAA models storm surge from Category 1 through Category 5 hurricanes for the Atlantic coast south of the North Carolina–Virginia border, the Gulf of Mexico, Puerto Rico, and the U.S.
Virgin Islands and Category 1 through Category 4 hurricanes for the Atlantic coast north of the North Carolina–Virginia border and Hawaii. According to NOAA’s website, the model is to be used for educational purposes and awareness of the storm surge hazard at a city or community level. In accordance with federal law, the model is also used for other purposes, such as hurricane evacuation studies. According to NOAA’s website, the agency updates the model for portions of the shoreline each year to account for, among other changes, new data and the addition of flood protection devices, such as levees. The model does not account for future conditions such as erosion, subsidence (i.e., the sinking of an area of land), construction, or sea level rise.
Wildfires
The U.S. Forest Service maps wildfire hazard potential based on landscape conditions and other observations. According to the U.S. Forest Service, the primary intended use of the wildfire hazard potential map is to identify priority areas for hazardous fuels treatments from a broad, national- to regional-scale perspective. The U.S. Forest Service maps an index of wildfire hazard potential for the contiguous United States, based on, among other factors, annual burn probabilities and potential intensity of large fires. The U.S. Forest Service categorizes the wildfire hazard potential index into five classes of very low, low, moderate, high, and very high. For example, the U.S. Forest Service designates as “high” those areas with wildfire hazard potential index from the 85th to the 95th percentile, and “very high” above the 95th percentile. The U.S. Forest Service also categorizes areas as nonburnable (including agricultural and developed lands) and water. According to the U.S. Forest Service, areas with higher values of wildfire hazard potential represent vegetation that is more likely to burn with high intensity under certain weather conditions.
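To make the percentile cutoffs concrete, the classification can be sketched as follows. This is an illustrative sketch, not the Forest Service's actual implementation: only the two cutoffs stated here (high spans the 85th to 95th percentile; very high lies above the 95th) are encoded, and the cutoffs for the lower classes are not given, so everything below the 85th percentile is lumped together.

```python
def hazard_class(percentile):
    """Classify wildfire hazard potential from its percentile rank.

    Encodes only the cutoffs stated in the report: "high" spans the
    85th to 95th percentile and "very high" lies above the 95th.
    Cutoffs for very low, low, and moderate are not given, so those
    classes are lumped together here. Illustrative only.
    """
    if percentile > 95:
        return "very high"
    if percentile >= 85:
        return "high"
    return "moderate or lower"
```

For example, an area at the 90th percentile of the index would be classed as high, while one at the 97th percentile would be classed as very high.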
However, areas with moderate, low, and very low wildfire hazard potential may still experience wildfire, particularly if they are near areas that have higher wildfire hazard potential. Wildfire hazard potential is not a forecast or wildfire outlook for any particular season, as it does not include any information on current or forecasted weather or fuel moisture conditions.
Sea Level Rise
NOAA models the extent of inundation from various heights of sea level rise (up to 10 feet above average high tides) for the contiguous United States, Hawaii, the Pacific islands, Puerto Rico, and the U.S. Virgin Islands and provides the results in a web mapping tool called the Sea Level Rise Viewer. NOAA’s guidance on the Sea Level Rise Viewer states that data are not available for Alaska. The uses of the sea level rise data include planning and education but not site-specific analysis, according to a NOAA document. NOAA labels areas as not mapped if elevation data of sufficient quality for the areas are not available. NOAA does not model natural processes, such as erosion, subsidence, or future construction, or forecast how much sea level is likely to rise in a given area. Rather, for various heights of local sea level rise, NOAA determines the extent of inundation based on the elevation of an area and the potential for water to flow between areas.
Enterprise Risk Management
Enterprise risk management is a tool that allows agencies to assess threats and opportunities that could affect the achievement of their goals. In a December 2016 report, we updated our 2005 risk management framework to reflect changes to the Office of Management and Budget’s Circular A-123, which calls for agencies to implement enterprise risk management. We also incorporated recent federal experience and identified essential elements of federal enterprise risk management.
Our December 2016 report states that beyond traditional internal controls, enterprise risk management promotes risk management by considering the effect of risk across the entire organization and how it may interact with other identified risks. Additionally, it addresses other topics, such as strategy determination, governance, communicating with stakeholders, and measuring performance. The principles of enterprise risk management apply at all levels of the organization and across all functions, such as those related to managing risk to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites. The six essential elements of enterprise risk management that we identified in our December 2016 report are as follows:
1. Align risk management process with goals and objectives. Ensure that the process maximizes the achievement of agency mission and results. Agency leaders examine strategic objectives by regularly considering how risks could affect the agency’s ability to achieve its mission.
2. Identify risks. Assemble a comprehensive list of risks—both threats and opportunities—that could affect the agency’s ability to achieve its goals and objectives.
3. Assess risks. Examine risks, considering both the likelihood of the risk and the impact of the risk to help prioritize risk response.
4. Respond to the risks. Select risk treatment response (based on risk appetite), including acceptance, avoidance, reduction, sharing, or transfer.
5. Monitor risks. Monitor how risks are changing and whether responses are successful.
6. Communicate and report on risks. Communicate risks with stakeholders and report on the status of addressing the risks.
About 60 Percent of Nonfederal NPL Sites Are Located in Areas That May Be Impacted by Selected Climate Change Effects, According to Available Data
Available federal data on flooding, storm surge, wildfires, and sea level rise suggest that about 60 percent (945 of 1,571) of all nonfederal NPL sites are located in areas that may be impacted by one or more of these potential climate change effects. These data, however, may not fully account for the number of nonfederal NPL sites that may be in such areas because (1) federal data are generally based on current or past conditions; (2) data are not available for some areas; and (3) the NCA has reported that climate change may exacerbate flooding, storm surge, and wildfires in certain regions of the United States. In addition, EPA does not have quality information on the boundaries of nonfederal NPL sites, which could affect its ability to identify the number of sites that may be impacted by one or more of these potential climate change effects.
About 60 Percent of Nonfederal NPL Sites Are Located in Areas That May Be Impacted by Selected Climate Change Effects; Additional Sites May Be Impacted in the Future
Available federal data suggest that 945 of 1,571 nonfederal NPL sites, or about 60 percent, are located in areas that may be impacted by selected climate change effects—that is, 0.2 percent or higher annual chance of flooding or other flood hazards, storm surge from Category 4 or 5 hurricanes, high and very high wildfire hazard potential, and sea level rise of up to 3 feet. The locations of these sites are shown in figure 3; the full results of our analysis and additional information on these sites are available in the interactive map and downloadable data file, which can be viewed at https://www.gao.gov/products/GAO-20-73. Our analysis, however, may not fully account for the number of nonfederal NPL sites that may be impacted by the effects of climate change for various reasons.
First, we represented the areas of nonfederal NPL sites based on a 0.2-mile radius around their primary geographic coordinates, which may not accurately reflect their area (i.e., they may be larger or smaller). We did not analyze site-specific information for these nonfederal NPL sites, including the extent of contamination and location of remedies. Such site-specific analyses would be needed to determine whether there is a risk to human health and the environment at nonfederal NPL sites as a result of these potential climate change effects. Further, according to the NCA, EPA documents, and interviews with EPA officials, there may be other climate change effects that could impact nonfederal NPL sites, such as potential increases in salt water intrusion (the movement of saline water into freshwater aquifers), drought, precipitation, hurricane winds, and average and extreme temperatures; we did not analyze these effects because we did not identify relevant national-level federal data sets.
Flooding
We identified 783 nonfederal NPL sites—approximately 50 percent—in areas that FEMA had identified as having 0.2 percent or higher annual chance of flooding, which FEMA considers moderate flood hazard, or other flood hazards, as of October 2018. Of these 783 sites, our analysis shows that 713—approximately 45 percent of all sites—are currently located in areas with 1 percent or higher annual chance of flooding, FEMA’s highest flood hazard category. We provide information on the number of sites in areas with moderate or other flood hazards because, according to the NCA, heavy rainfall is increasing in intensity and frequency across the United States and is expected to continue to increase, which may lead to an increase in flooding in the future.
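The 0.2-mile-radius representation of each site can be illustrated with a simple great-circle distance test: does a given point on a hazard map fall within 0.2 miles of a site's primary coordinates? This is a simplified sketch, not GAO's actual GIS workflow (which overlays buffered site areas against hazard map layers), and the coordinates below are hypothetical, for illustration only.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_site_radius(site, point, radius_miles=0.2):
    """True if a mapped hazard point falls inside the site's assumed 0.2-mile radius."""
    return haversine_miles(site[0], site[1], point[0], point[1]) <= radius_miles

# Hypothetical coordinates for illustration only.
site = (40.7128, -74.0060)
nearby = (40.7140, -74.0060)   # roughly 0.08 miles north of the site
distant = (40.7500, -74.0060)  # roughly 2.6 miles north of the site
```

Under this sketch, a hazard point 0.08 miles from the site's coordinates would count as overlapping the site, while one 2.6 miles away would not, regardless of whether the site's true boundary extends farther.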
The full results of our analysis—which include information on the sites in areas that may have 1 percent or higher annual chance of flooding, 0.2 percent or higher annual chance of flooding or other identified flood hazards, unknown flood hazard or no data, and minimal flood hazard—are available in our interactive map, which can be viewed here. For example, there are a number of nonfederal NPL sites in EPA Region 7, where states experienced record flooding in early 2019. Specifically, as seen in figure 4, there are 51 sites that are located in areas with 0.2 percent or higher annual chance of flooding or other identified flood hazards, of which 42 are located in areas with 1 percent or higher annual chance of flooding. Nationwide, the number of nonfederal NPL sites in areas that may be impacted by flooding currently may be higher than 783. Specifically, 217 nonfederal NPL sites are located in areas that FEMA has not assessed for flood hazards or that we did not analyze because the data were not available in a form we could use with our mapping software.
Storm Surge
We identified 187 nonfederal NPL sites—12 percent—in areas that may be inundated by storm surge corresponding to Category 4 or 5 hurricanes, the highest possible category, based on NOAA’s storm surge model as of November 2018. Of these sites, 102 are located in areas that may be inundated by a storm surge corresponding to Category 1 hurricanes. We analyzed areas that may be inundated by a storm surge corresponding to the highest possible category because, according to the NCA, a projected increase in the intensity of hurricanes in the North Atlantic could increase the probability of extreme flooding because of storm surge along most of the Atlantic and Gulf Coast states, beyond what would be projected based solely on relative sea level rise.
However, the NCA stated that there is uncertainty in the projected increase in frequency or intensity of Atlantic hurricanes, and other factors may affect the potential for flooding because of storm surge, such as changes in overall storm frequency or tracks. The full results of our analysis, which include information on the number of sites in areas that may be inundated by storm surge from Category 1 and from Category 4 or 5 hurricanes, are available in our interactive map, which can be viewed here. In EPA Regions 2 and 3, where states experienced damage from two major hurricanes in 2017, there are 87 nonfederal NPL sites located within areas that may be inundated by storm surge from Category 4 or 5 hurricanes. Figure 5 shows these 87 sites, of which 54 sites may be inundated by storm surge from Category 1 hurricanes. Nationwide, the number of nonfederal NPL sites in areas that may be impacted by storm surge may be higher than 187 because NOAA has not modeled areas along the West Coast and Pacific islands other than Hawaii. Further, our analysis did not include other potential impacts from hurricanes, such as rainfall. Figure 6 shows an example of the impact of rainfall caused by a hurricane at the American Cyanamid NPL site.
Wildfires
We identified 234 nonfederal NPL sites—15 percent—located in areas that have high or very high wildfire hazard potential—those more likely to burn with a higher intensity, based on a U.S. Forest Service model as of July 2018. For this analysis, we combined the high and very high wildfire hazard potential categories; we did not identify the number of sites in each of these categories separately. We did not analyze areas that currently have moderate or lower wildfire hazard potential because those with moderate or lower wildfire hazard potential are less likely to experience high-intensity wildfire and the extent to which wildfire hazard potential may change in the future is unknown.
The full results of our analysis on the number of sites in areas with high or very high wildfire hazard potential are available in our interactive map, which can be viewed here. As seen in figure 7, there are 22 nonfederal NPL sites in areas with high or very high wildfire hazard potential in EPA Region 9, a region that experienced wildfires in 2018, including the highly destructive Carr Fire. Nationwide, the number of nonfederal NPL sites in areas that currently have high wildfire hazard potential may be higher than 234 because wildfire hazard data are only available for the contiguous United States (i.e., there are no data for Alaska, Hawaii and other Pacific islands, Puerto Rico, and the U.S. Virgin Islands). According to the NCA, the incidence of large forest fires in the western United States and Alaska has increased since the early 1980s and is projected to further increase in those regions as the climate changes. However, the NCA noted that analyses regarding the effect of climate change on the incidence of wildfire in other parts of the United States are not readily available, so it is unknown how climate change will affect the number of nonfederal NPL sites in areas rated with high or very high wildfire hazard potential nationwide. As figure 8 shows, wildfires can pose risks at nonfederal NPL sites, such as the Iron Mountain Mine site near Redding, California.
Sea Level Rise
We identified 110 nonfederal NPL sites—7 percent—located in areas that would be inundated by a sea level rise of 3 feet, based on our analysis of EPA and NOAA data as of March 2019 and September 2018, respectively. Our analysis shows that if sea level in these areas rose by 1 foot, 97 sites would be inundated. If sea level in these areas rose by 8 feet, 158 sites would be inundated. We also identified 84 nonfederal NPL sites that are located in areas that may already be inundated at high tide.
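The "elevation plus connectivity" idea NOAA uses for sea level rise mapping, as described earlier in the background on the Sea Level Rise Viewer, can be sketched as a toy bathtub model with a flood fill over an elevation grid: a cell floods only if it is low enough and water can reach it from the ocean. The function, grid values, and elevation datum below are illustrative assumptions, not NOAA's actual method or data.

```python
from collections import deque

def inundated(elev, rise, sources):
    """Cells flooded under a toy 'bathtub with connectivity' model.

    elev: 2-D grid of elevations relative to current average high tide.
    rise: sea level rise to apply, in the same units.
    sources: (row, col) cells where ocean water enters the grid.
    A cell floods only if its elevation is at or below the new water
    level AND it is reachable from a source through 4-connected
    low-lying neighbors. Illustrative sketch, not NOAA's method.
    """
    rows, cols = len(elev), len(elev[0])
    flooded = {s for s in sources if elev[s[0]][s[1]] <= rise}
    queue = deque(flooded)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in flooded and elev[nr][nc] <= rise):
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded

# Hypothetical 1x5 coastal transect: ocean | beach | dune | low basin | inland.
elev = [[0.0, 2.0, 5.0, 1.0, 6.0]]
# With 3 feet of rise, the beach floods, but the low basin behind the
# dune does not: it lies below the water level yet is not connected.
```

This connectivity requirement is why a simple "elevation below the water level" test would overstate inundation: low-lying areas shielded by higher ground stay dry in the model.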
We provide the number of sites in areas that may be impacted by these sea level rise heights because, according to the NCA, global average sea levels are very likely to continue to rise by at least several inches in the next 15 years and by 1.0 to 4.3 feet by 2100. Further, the NCA states that a rise of as much as 8 feet by 2100 cannot be ruled out. The full results of our analysis, which include information on the number of sites in areas that may already be inundated at high tide and that would be inundated if sea level rose by 1 foot, 3 feet, and 8 feet, are available in our interactive map, which can be viewed here. There are 23 nonfederal NPL sites located within areas that may be impacted if sea level rose by up to 3 feet in EPA Region 6, a region that has experienced land loss because of sea level rise and coastal flooding, according to the NCA. In addition, as seen in figure 9, 16—or 70 percent—of these sites may already be inundated at high tide. Nationally, the number of nonfederal NPL sites that may be inundated by various heights of sea level rise will vary from the results of our analysis because different parts of the United States may experience higher or lower sea level rise than the global average. For example, the NCA states that sea level rise will be higher than the global average on the East and Gulf Coasts of the United States and lower than the global average in most of the Pacific Northwest and in Alaska. As can be seen in figure 10, sea level rise and other coastal hazards may impact nonfederal NPL sites, such as the San Jacinto River Waste Pits site in Texas, parts of which are already under water. EPA Does Not Have Quality Information on the Boundaries of Nonfederal NPL Sites EPA does not have quality information on the boundaries of nonfederal NPL sites, which could affect its ability to identify the number of sites that may be impacted by one or more of these potential climate change effects.
According to EPA officials, EPA has not validated data on site boundaries and EPA’s regional offices do not use a consistent geographic standard, which makes it difficult to produce a national data set. In general, EPA officials told us that information on the boundaries of NPL sites has not been a focus at a national level and is not yet subject to quality standards. For example, EPA officials told us that boundary information for each NPL site represents the remedial project manager’s professional judgment and remedial project managers may determine and record the boundaries of sites differently. EPA has taken some initial actions to improve the quality of information on the boundaries of nonfederal NPL sites. In November 2017, the Office of Superfund Remediation and Technology Innovation issued a directive to all regional Superfund division directors recommending national standards for collecting and maintaining geographic information, including site boundaries. EPA’s 2017 directive notes that using national standards to collect geographic information, including site boundaries, promotes EPA’s reporting and analytical efforts to support program implementation and evaluation. In addition, in May 2018, EPA’s Office of Land and Emergency Management developed technical guidance for all its regions and programs for collecting, documenting, and managing geographic information on Superfund sites, including their boundaries. EPA officials told us that in 2019 and 2020, the agency plans to move toward recording site boundaries in a consistent format across regions and instituting procedures to validate and update them at least annually. However, EPA officials told us that there is no schedule in place for completing this effort and they are uncertain when they will complete it because of competing priorities. 
By developing a schedule for completing the standardization and improvement of the quality of the information on the boundaries of nonfederal NPL sites, EPA could more reasonably ensure that it would have quality information with which to fully identify nonfederal NPL sites that are located in areas that may be impacted by climate change effects. EPA Has Taken Some Actions to Manage Risks from the Potential Impacts of Climate Change Effects at Nonfederal NPL Sites EPA’s actions to manage risks from the potential impacts of climate change effects align with three of the six essential elements of enterprise risk management. Specifically, of the six essential elements, EPA’s actions do not align with one (aligning its enterprise risk management process with goals and objectives); partially align with two (assessing risks and responding to risks); and align with three (identifying risks, monitoring risks, and communicating about and reporting on risks). Table 1 shows the alignment of EPA’s actions with the essential elements of enterprise risk management. Aligning Risk Management Process with Goals and Objectives This essential element calls for agencies to align their risk management processes with the goals and objectives of the agency, but EPA has not taken action to clearly align its process for managing risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites with agency-wide goals and objectives. For example, the 2018 to 2022 EPA strategic plan does not include goals and objectives related to climate change or discuss strategies for addressing the impacts of climate change effects. Moreover, neither the fiscal years 2018 to 2019 nor fiscal years 2020 to 2021 national program manager guidance for EPA’s Office of Land and Emergency Management mentions climate change among its goals and priorities.
In contrast to the current strategic plan, the 2014 to 2018 EPA strategic plan included addressing climate change as one of four strategic goals and specifically discussed climate change as an external factor or emerging issue in the context of planned, current, and completed cleanups, including at nonfederal NPL sites. In addition, the fiscal years 2016 to 2017 national program manager guidance for the office that oversees the Superfund program listed climate change adaptation as one of four national areas of focus for the office. According to an EPA official, when the 2018 to 2022 strategic plan was drafted, senior agency officials were not aware of the potential risks to the Superfund program mission from the impacts of climate change effects. According to this official, senior EPA officials have expressed support for certain activities related to climate change, such as the work of the Cross-EPA Work Group on Climate Adaptation, but have not issued related documents or policy statements. Without clarifying how the agency’s ongoing actions to manage these risks at nonfederal NPL sites align with current agency goals and objectives, EPA will not have reasonable assurance that senior officials will take an active role in supporting these actions, which would help EPA achieve its mission of protecting human health and the environment. Identifying Risks EPA’s actions to identify risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites align with this essential element of enterprise risk management. Specifically, EPA identified climate change effects that may impact nonfederal NPL sites—and pose risks to human health and the environment—in studies and climate change adaptation and implementation plans.
For example, in a 2012 study of adaptation of Superfund remediation to climate change, EPA identified eight climate change effects that may impact certain NPL site remedies: flooding, sea level rise, extreme storms, large snowfall, wildfires, drought, extreme heat, and landslides. In 2014, EPA issued an agency-wide climate change adaptation plan, which identified climate change effects that may impact NPL sites. The same year, EPA issued a climate change adaptation implementation plan for the office that oversees the Superfund program that identified nine climate change effects that may impact NPL sites. Each of the 10 EPA regional offices identified relevant regional climate change effects in their 2014 climate change adaptation implementation plans. For example, the Region 3 plan states that increased flooding and sea level rise may increase risks of releases of contaminants, salt water intrusion may impact the performance of remedies, and increased temperatures may impact vegetation that prevents erosion. Additionally, five regional offices have conducted or are conducting additional screening-level studies to identify which climate change effects, if any, may impact each of the NPL sites in these regions. For example, Region 10 conducted a study in 2015 that identified, among other effects, sea level rise and wildfires as potential climate change effects that may impact NPL sites in the region. Assessing Risks EPA’s actions to assess risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites partially align with this essential element. In a 2012 study of adaptation of Superfund remediation to climate change, EPA assessed the impacts of eight climate change effects on certain remedies to determine the risk they presented to the agency’s mission to protect human health and the environment. 
EPA issued climate change adaptation implementation plans for the office that oversees the Superfund program and all regions, as described above, which assessed potential impacts of climate change effects. In addition, five EPA regional offices assessed or are assessing potential impacts of climate change effects on NPL sites in their regions as a whole, and one of these regions assessed both the impacts and likelihood of climate change effects, consistent with this essential element. Specifically, Region 4 identified the sites most likely to face major climate change risks and then examined these sites in greater detail. Additionally, Region 3 has developed a mapping tool on climate change vulnerability that provides site-level assessments of sea level rise, among other potential impacts. EPA provides training and direction to remedial project managers—the lead EPA officials at nonfederal NPL sites—on conducting site-level risk assessments that incorporate information on potential impacts of climate change effects. Since 2014, EPA has offered optional training to remedial project managers and others on integrating climate change into the Superfund cleanup process. From 2013 through 2015, EPA issued fact sheets as guidance for assessing the potential impacts of climate change effects for three types of remedies. According to EPA officials, these fact sheets constitute the direction that EPA provides to remedial project managers on assessing risks from climate change effects. EPA plans to update these fact sheets in 2019 and is also in the process of developing a compilation of resources for assessing potential flood risks in coastal areas to inform cleanup and reuse decision-making, according to an EPA official. In addition, EPA provides resources on climate change on the Superfund program website, such as links to tools and data on drought and coastal flooding. 
EPA also offers technical assistance on incorporating climate change information into risk assessments to remedial project managers through groups such as the Contaminated Sediments Technical Advisory Group and the Cross-EPA Work Group on Climate Adaptation. EPA officials in four regions provided us with site-specific examples of how they used climate change information to assess risks from the potential impacts of climate change effects, but officials from other regions stated that they have not always integrated climate change information into their risk assessments. For example, according to a record of decision for the site, EPA Region 2 incorporated the potential for increased storm flow intensities into the model of the Passaic River used in the remedial investigation and feasibility study at the Diamond Alkali site in Newark, New Jersey. Conversely, officials in six regions told us that they have not used climate change projections for flooding or rainfall in site-level risk assessments. In addition, officials in Region 6 told us that they do not incorporate potential impacts of climate change effects or changes in the frequency of natural disasters into their assessments. EPA officials have not consistently incorporated climate change information into their assessment of site-level risks because they do not always have the climate data they need to do so, according to our review of EPA documents and interviews with EPA officials and stakeholders. For example, officials in three regions told us that they have not used rainfall or flood projections because the data are not available or they were unsure which data to use. In addition, in the record of decision for the Diamond Alkali site in New Jersey, Region 2 officials stated that they did not integrate sea level rise information into their storm flow modeling for the Passaic River at the site because of the uncertainty in expected future sea level rise values, especially at the regional and local levels. 
We reported on similar challenges with climate data in our 2015 report on climate information, which found that existing federal efforts do not fully meet the climate information needs of federal, state, local, and private sector decision makers, and we made a related recommendation in that report. Further, current EPA practice for assessing risks at NPL sites does not always include consideration of climate change, according to agency documents we reviewed and officials from three regions and a stakeholder we interviewed. EPA’s climate change adaptation plan noted that EPA and its partners will need to alter their standard practices—such as their standard methods for estimating the frequency of floods or runoff of pollutants into rivers—to account for a continuously changing climate. The Region 4 climate change adaptation implementation plan, for instance, noted that preliminary assessments and site investigations are typically based on historic information, not future projections and therefore may not fully address risks. Officials in two regions told us that they do not have direction on how to alter their practices to account for climate change. For example, officials in Region 2 said they do not have instructions that identify a particular set of expectations, data, or maps that they should use when considering future risks from flooding. Officials in Region 5 told us that they do not have any formal direction on how to address risks from climate change and are waiting for EPA headquarters to provide information on how to do so. According to EPA documents and a headquarters official, EPA believes that its existing direction, including general guidance on conducting risk assessments and the fact sheets for assessing potential impacts of climate change effects for three types of remedies, discussed above, provide a robust structure for considering such impacts. 
However, without providing direction to remedial project managers on how to integrate information on the potential impacts of climate change effects into site-level risk assessments at nonfederal NPL sites across all regions and types of remedies, EPA cannot ensure that remedies will protect human health and the environment in the long term. Responding to Risks EPA’s actions to respond to risks that potential impacts of climate change effects may pose to human health and the environment at nonfederal NPL sites partially align with this essential element. In two national studies conducted in 2012 and 2017, EPA examined potential impacts of some climate change effects on selected remedies at NPL sites, including nonfederal NPL sites, and generally found that it has taken actions to respond to risks through its existing cleanup processes. In 2012, as noted above, EPA studied the vulnerability of selected remedies to some climate change effects and found that existing processes—such as EPA’s Five-Year Review and operation and maintenance—could adequately address the potential impacts of climate change effects. In addition, EPA studied the impacts of three hurricanes in 2017 on sites with selected remedies in place, including nonfederal NPL sites, and found that the agency has generally taken resiliency measures to respond to risks at these sites. EPA also provided guidance and training to remedial project managers on responding to risks to human health and the environment from the potential impacts of climate change effects and recently added requirements for certain potential site contractors to describe their capacity to respond to such risks. EPA provided guidance in its fact sheets on integrating climate change information into risk response decisions at nonfederal NPL sites and optional training on integrating climate change into the Superfund cleanup process.
In addition, EPA provided relevant information and resources for EPA officials on resiliency measures on the agency website. In 2016, EPA issued performance work statements to potential contractors for environmental services and operations and for remediation environmental services that required contractors to describe their ability to conduct climate change vulnerability analyses and adaptation, as needed, to ensure the resiliency of remedies to climate change impacts. According to an EPA headquarters official, EPA is currently working on developing technical guidance on how remedial project managers can integrate requests for climate change–related analysis into their task orders for contractors. With respect to site-level responses, EPA officials from three regions provided us with examples of site decision documents that described how climate change information will be incorporated into remedy selection and design. For example, the record of decision for the Portland Harbor site in Oregon states that a containment cap will be constructed to withstand the more frequent floods with higher peak flows that are expected with climate change. Officials from Region 3 told us that they take into account a number of factors, including climate change impacts, if any, when they design and select site remedies. However, according to our interviews with regional officials, they have not consistently integrated climate change information into remedy selection and design. For example, officials from two regions stated that they are not aware of any remedial project managers in their regions who are taking action at nonfederal NPL sites to respond to climate change or consider future conditions. EPA officials have not consistently taken the potential impacts of climate change effects into account in site-level risk response decision-making because they do not always have sufficient direction to do so, according to our interviews with EPA officials.
EPA officials from three regions told us that they are unsure how to translate data on potential impacts of climate change effects into the design of remedies. For example, officials from Region 10 told us that EPA does not have direction for remedial project managers on how to integrate response to climate change impacts into remedial design. These officials noted that it is up to remedial project managers to be aware of this issue and it is done on an ad hoc basis. Further, EPA headquarters officials who review proposed remedies told us that additional guidance from EPA on managing the risks from potential impacts of climate change effects would be useful. According to EPA documents and another EPA headquarters official, EPA has determined that existing direction—guidance and processes—for risk response provides a robust structure to integrate climate change information into remedy selection and design. However, without providing direction for remedial project managers on how to integrate information on potential impacts of climate change effects into site-level risk response decision-making at nonfederal NPL sites, EPA cannot ensure that remedies will protect human health and the environment in the long term. Monitoring Risks EPA’s actions to monitor risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites through its Five-Year Review process align with this essential element. In 2016, EPA introduced a new recommended template for the Five-Year Review that includes a section for officials to document their consideration of whether any newly available information related to climate change may call into question a remedy’s protectiveness. Officials in three regions told us they use the Five-Year Review process to identify and evaluate newly available information on climate change effects that may impact nonfederal NPL sites.
For example, in the 2014 Five-Year Review report for the Publicker Industries site in Pennsylvania, Region 3 considered newly available information on projected sea level rise in the region to determine if those projections called into question the protectiveness of the existing remedies at the site. Officials in that region told us that they rely on their biological and technical assistance group to identify any new relevant climate change data to incorporate into their Five-Year Reviews. Region 7 officials also told us that they assess any potential changes in future conditions, especially flooding, during the Five-Year Review process. Officials from two other regions told us that they monitor changes in site conditions that may be related to climate change during the Five-Year Review process. For example, Region 2 officials developed additional guidance to help remedial project managers and site project teams consider changes in site conditions related to climate change in the Five-Year Review process. Region 6 officials told us that during the Five-Year Review process, they take into account any current flood hazard information from FEMA as well as current sea levels, but they do not monitor projections about sea level rise. Communicating and Reporting on Risks EPA’s actions to communicate about and report on risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites align with this essential element. For example, as described above, EPA reported on the potential impacts of climate change effects—which may pose risks to human health and the environment—on NPL sites in its 2014 national climate change adaptation plan and the climate change adaptation implementation plans for the office that oversees the Superfund program and all regions. 
In addition, publicly available site-level documents, such as the records of decision described above, may include information on risks from climate change and EPA’s actions to manage these risks. EPA officials may also communicate this information in response to questions from the public. EPA officials from four regions told us that they have not received many direct questions on risks from climate change from the public. However, members of the public can comment on climate change risks through EPA’s existing public engagement mechanisms, and some have done so. For example, EPA officials in Region 7 received questions on the draft record of decision for the West Lake Landfill site in Missouri during the public comment period and responded to those questions in the final version of the document, describing how they addressed risks of increased flooding from climate change in the remedy selection processes. EPA has also communicated with stakeholders and the public on risks to human health and the environment from the potential impacts of climate change effects in other ways. For example, officials from Region 10 convened a workshop in 2017 to discuss climate change impacts on sediment cleanup and upland source control for the Lower Duwamish Waterway site in Washington with other federal agencies, state and local officials, universities, companies, and community groups. In addition, EPA provides an online mapping tool that can help members of the public identify sites located in areas that would be impacted by up to 6 feet of sea level rise or in flood hazard areas as determined by FEMA. 
EPA Recognizes Various Challenges in Managing Risks from the Potential Impacts of Climate Change Effects at Nonfederal NPL Sites EPA recognizes institutional, resource, and technical challenges in managing risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites, according to agency and other documents that we reviewed and EPA officials and stakeholders we interviewed. Institutional Challenges According to agency and other documents we reviewed and officials and stakeholders that we interviewed, EPA faces institutional challenges in managing risks to human health and the environment from the potential impacts of climate change effects. As discussed above, officials from three regions told us that they do not have the direction they need to manage these risks. For example, EPA officials in Region 2 told us that during Five-Year Reviews, engineers may analyze several different maps on flooding potential and must use their professional judgment to determine how resilient to design the remedy, because there is no standard guidance on how to do so. Further, EPA officials in two regions and stakeholders we interviewed stated that it may not be clear whether EPA could require PRPs to consider climate change impacts in the cleanup process. However, according to EPA headquarters officials, considering climate change is consistent with the National Contingency Plan and the CERCLA criterion that requires officials to consider the long-term effectiveness of remedies when evaluating cleanup alternatives. Another institutional challenge that EPA faces is that its ability to manage these risks may depend on actions of other entities that are outside of EPA’s control, according to EPA documents we reviewed and EPA officials we interviewed. For example, EPA officials from Region 1 told us that they are not certain whether a hurricane barrier built by the U.S.
Army Corps of Engineers that protects the New Bedford Harbor site in Massachusetts is designed to withstand future storms. Managing risks may also require internal coordination within EPA, which presents another challenge. For example, an EPA headquarters official told us that it can be challenging for regional Superfund program staff to connect with EPA experts on climate change, who may be in different program offices. In April 2019, EPA restructured its regional offices, consolidating cross-cutting issue areas in the immediate office of each Regional Administrator and Deputy Regional Administrator. Although it is too early to evaluate the effect of this restructuring, EPA headquarters officials told us that the restructuring may help address this challenge. Furthermore, EPA officials from three regions told us that they face challenges related to the sensitive nature of climate change. For example, officials in Region 6 told us that when they engaged with the local community during the decision-making process for the San Jacinto River Waste Pits site in Texas, they avoided using the term climate change because of concerns that the charged term would alienate some community members. Resource Challenges Documents from four EPA regions and headquarters officials and officials from three regions we interviewed stated that insufficient or changing resources—specifically funding and staffing—make managing risks to human health and the environment from the potential impacts of climate change effects challenging for EPA. For example, according to two regional climate change adaptation implementation plans and EPA officials, assessing these risks may require more resources than assessing risks based on current or past conditions. In addition, designing or modifying existing remedies to respond to these risks could increase costs, according to EPA documents we reviewed and EPA officials we interviewed.
EPA officials from three regions told us that staffing constraints can make it difficult to manage risks. For example, EPA officials from Region 9 told us that the need for remedial project managers to respond to other emergencies, such as overseeing hazardous materials removal after fires, means that they have less time to oversee cleanup of nonfederal NPL sites. Officials from Region 10 told us that they had a climate change advisor who helped integrate climate change into all aspects of the region’s work, but that person retired, and the region was unable to fill the position because of resource constraints. As noted above, according to an EPA headquarters official, EPA’s recent restructuring of its regional offices may help address this challenge. Technical Challenges EPA faces technical challenges in managing risks to human health and the environment from the potential impacts of climate change effects in terms of available expertise and data, according to documents we reviewed and EPA regional officials we interviewed. In its 2014 agency-wide climate change adaptation plan, EPA reported that site vulnerabilities may be difficult to assess because of limited scientific understanding. EPA officials told us that they need additional expertise and training to better manage risks. For example, an EPA official in Region 2 told us that it would be useful to have training on assessing risks for projects located in floodplains. As noted above, EPA has developed training for officials on managing risks from climate change, such as a course on building resilient Superfund remedies that EPA offered at the annual National Association of Remedial Project Managers meeting in August 2019. The course’s focus is to help remedial project managers incorporate consideration of adaptation and build resilience into Superfund remedies at sites impacted by extreme weather events, according to the course agenda.
According to EPA documents and EPA officials from two regions, appropriate climate change data may not be available to inform assessments that help manage risk. For example, the Region 4 study of the vulnerability of NPL sites stated that climate model projections of temperature and precipitation patterns are not available at a spatial resolution that is useful for assessing vulnerabilities at the site level. In Region 6, officials told us that when the U.S. Army Corps of Engineers modeled flooding for the San Jacinto River Waste Pits site in Texas, it had to rely on past flooding data because the only information available was on historical storms. In addition, the level of uncertainty inherent in climate change data may make it challenging for EPA to incorporate that information into risk management processes, according to agency documents we reviewed and some agency officials we interviewed. As noted above, we made recommendations to address similar challenges with climate data in a prior report. Conclusions Climate change may result in more frequent or intense extreme events, such as flooding, storm surge, and wildfires, among other effects, which could damage remedies at nonfederal NPL sites and lead to releases of contaminants that could pose risks to human health and the environment. Our analysis of EPA, FEMA, NOAA, and U.S. Forest Service data has shown that more than half of nonfederal NPL sites—based on a point coordinate with a 0.2-mile radius as a proxy for the site boundaries—are located in areas that may be impacted by selected climate change effects. To help ensure the long-term protectiveness of remedies, it is important for EPA to understand how climate change effects may impact nonfederal NPL sites, and the agency has taken steps to do this. 
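The point-with-radius proxy described above lends itself to a simple illustration. The sketch below is not GAO's actual GIS workflow: the coordinates, the hazard sample points, and the `site_may_be_affected` helper are all hypothetical, and the real analysis would intersect buffered site geometries with the full NOAA, FEMA, and Forest Service hazard layers in GIS software. It shows only the core idea of flagging a site when a sampled hazard location falls within the 0.2-mile circle around the site's point coordinate.

```python
import math

EARTH_RADIUS_MI = 3958.8   # mean Earth radius in miles
SITE_RADIUS_MI = 0.2       # proxy radius around each site's point coordinate

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def site_may_be_affected(site, hazard_points, tolerance_mi=0.0):
    """True if any hazard sample point falls within the site's 0.2-mile proxy
    circle; tolerance_mi can account for the size of a hazard grid cell."""
    lat, lon = site
    return any(
        haversine_miles(lat, lon, hlat, hlon) <= SITE_RADIUS_MI + tolerance_mi
        for hlat, hlon in hazard_points
    )

# Hypothetical site point and hazard samples (e.g., storm surge grid cells).
site = (40.736, -74.164)
hazard = [(40.737, -74.163), (41.000, -75.000)]
print(site_may_be_affected(site, hazard))  # nearest sample is ~0.09 mi away: True
```

Under this simplified model, counting sites in hazard areas reduces to summing the flag over all site coordinates, which is how a proportion such as "15 percent of sites" could be tallied once a hazard layer is sampled.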
However, EPA does not have quality information on the precise boundaries of nonfederal NPL sites, which could make it difficult to determine the nonfederal sites located in areas that may be impacted by climate change effects. The agency has taken initial steps to develop this information but does not have a schedule in place for completing this effort. EPA has taken actions to manage risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites. These actions align with three of the six essential elements of enterprise risk management. However, EPA has not clarified how its actions to manage risks from these effects at nonfederal NPL sites align with current agency goals and objectives, which could limit its senior officials' ability to manage these risks. Further, EPA officials do not always have direction to ensure that they consistently integrate climate change information into site-level risk assessments and risk response decisions, according to EPA documents and officials. Without providing such direction for remedial project managers, EPA cannot ensure that remedies at nonfederal NPL sites will protect human health and the environment in the long term.

Recommendations for Executive Action

We are making the following four recommendations to EPA:

The Director of the Office of Superfund Remediation and Technology Innovation should establish a schedule for standardizing and improving information on the boundaries of nonfederal NPL sites. (Recommendation 1)

The Administrator of EPA should clarify how EPA's actions to manage risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites align with the agency's current goals and objectives.
(Recommendation 2)

The Director of the Office of Superfund Remediation and Technology Innovation should provide direction on how to integrate information on the potential impacts of climate change effects into risk assessments at nonfederal NPL sites. (Recommendation 3)

The Director of the Office of Superfund Remediation and Technology Innovation should provide direction on how to integrate information on the potential impacts of climate change effects into risk response decisions at nonfederal NPL sites. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to EPA for its review and comments. In its comments, reproduced in appendix II, EPA stated that it recognizes the importance of ensuring Superfund sites cleanups are resilient in the face of existing risks and extreme weather events. EPA added that it has taken actions to include vulnerability analyses and adaptation planning in its Superfund activities. We acknowledge that EPA has taken some action to manage risks. However, EPA has not clarified how its risk-related actions align with agency goals and objectives. Further, it has not provided direction to ensure that officials consistently integrate climate change information into site-level risk assessments and risk response decisions. Regarding our recommendations, EPA agreed with one and disagreed with the other three. We continue to believe that all recommendations are warranted. In response to our recommendation that the Director of the Office of Superfund Remediation and Technology Innovation establish a schedule for standardizing and improving information on the boundaries of nonfederal NPL sites, EPA noted that it agrees with our finding and acknowledges a lack of consistent standards to identify site boundaries at the national level. According to EPA, it has taken initial steps to develop an approach to standardize and improve information on nonfederal NPL site boundaries.
EPA stated that it expects to establish a schedule for this effort by the second quarter of fiscal year 2020, with the aim of collecting an initial set of site boundaries for all NPL sites by the fourth quarter of fiscal year 2021. In response to our recommendation that EPA clarify how its actions to manage risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites align with the agency's current goals and objectives, EPA said that it believes managing risks from exposure to contaminants in the environment is integral to EPA's current strategic goal 1.3, Revitalize Land and Prevent Contamination. We agree that protectiveness is a key part of strategic objective 1.3. However, this strategic objective does not include any measures related to climate change or discuss strategies for addressing the impacts of climate change effects. An essential element of enterprise risk management is to align risk management processes with goals and objectives. Consequently, we believe that our recommendation is still warranted. In response to our recommendations that the Director of the Office of Superfund Remediation and Technology Innovation provide direction on how to integrate information on the potential impacts of climate change effects into risk assessments and risk response decisions at nonfederal NPL sites, EPA said that it strongly believes the Superfund program's existing processes and resources adequately ensure that risks and any effects of severe weather events are woven into risk assessments and risk response decisions at nonfederal NPL sites. However, as we noted in our report, EPA's current direction does not address all types of cleanup actions or climate change effects.
Further, EPA officials from some regions told us that current EPA practice for assessing risks at NPL sites does not always include consideration of climate change and that they have not consistently integrated climate change information into site-specific remedy selection and design. EPA noted that it may issue a memorandum to reinforce the tools and resources available to NPL site teams and would determine whether to issue this memorandum by the end of January 2020. EPA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) what available federal data suggest about the number of nonfederal National Priorities List (NPL) sites that are located in areas that may be impacted by selected climate change effects; (2) the extent to which the Environmental Protection Agency (EPA) has managed risks to human health and the environment from the potential impacts of climate change effects at nonfederal NPL sites; and (3) the challenges, if any, EPA faces in managing these risks.
To determine what available federal data suggest about the number of nonfederal NPL sites that are located in areas that may be impacted by selected climate change effects, we reviewed the Fourth National Climate Assessment (NCA) to identify potential climate change effects. Based on our review of the NCA, we identified the following potential climate change effects: sea level rise, which may lead to increased frequency and extent of extreme flooding from coastal storms; greater frequency and magnitude of drought; increased intensity and frequency of heavy precipitation events, which may lead to increased local flooding; salt water intrusion; increased incidence of large wildfires; increased frequency and intensity of extreme high temperatures and sustained increases in average temperatures; decreased permafrost; and increased intensity—including higher wind speeds and precipitation rates—and frequency of very intense hurricanes and typhoons. We reviewed EPA documents (such as EPA’s climate change adaptation implementation plans) to identify potential climate change effects that may impact nonfederal NPL sites and interviewed EPA officials. Through a review of federal agencies’ documents and databases and interviews with officials about their data and research on these effects, we identified available national federal data sets on three current hazards: flooding, storm surge, and wildfires—which the NCA reports will be exacerbated by climate change—from the Federal Emergency Management Agency (FEMA), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Forest Service. We also identified data on sea level rise from NOAA. In this report, we refer to (1) flooding, (2) storm surge, (3) wildfires, and (4) sea level rise as potential climate change effects. We used the most recently available data for each of these climate change effects; these data do not provide estimates of the projected changes in the future. 
To the extent that data were available, we analyzed a range of these potential climate change effects. For example, we used the maximum extent of storm surge from Category 1 hurricanes as well as Category 4 or 5 hurricanes, the highest possible categories, as modeled by NOAA. We focused on a range because, for three of the four effects, we had data on current hazards, which may become more intense and frequent in the future, according to the NCA. Additionally, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) directs EPA to give preference to remedies that would result in the permanent and significant decrease in toxicity, mobility, or volume of the contamination. According to EPA officials, remedies at nonfederal NPL sites may have to be operational indefinitely, during which time the potential effects of climate change may become more extreme. The range of estimates we provide in our report is as follows: For flooding, we used data from FEMA’s National Flood Hazard Layer as of October 2018. FEMA identifies a variety of flood hazards, and for reporting purposes, we grouped flood hazard zones into four categories: (1) 1 percent or higher annual chance of flooding, (2) 0.2 percent or higher annual chance of flooding or other flood hazards, (3) unknown flood hazards, and (4) minimal flood hazard. For storm surge, we used data from NOAA’s model on Sea, Lake and Overland Surges from Hurricanes as of November 2018 for Category 1 and Category 4 or 5 hurricanes. For wildfire, we used data from the U.S. Forest Service’s 2018 wildfire hazard potential map, which the U.S. Forest Service released in July 2018. We used areas with high or very high wildfire hazard potential in our analysis. The U.S. Forest Service based the 2018 map on wildfire likelihood and intensity data from 2016, spatial fuels and vegetation data from 2012, and point locations of past fire occurrence from 1992 to 2013. 
For sea level rise, we used NOAA data, last updated in September 2018. We downloaded inundation data on 0, 1, 3, and 8 feet of sea level rise and “not mapped” areas. We obtained data from EPA’s Superfund Enterprise Management System on the location and other characteristics of active and deleted nonfederal NPL sites. In our analysis, we used a 0.2-mile radius around the primary geographic coordinate point of each nonfederal NPL site, which may not accurately represent their actual areas because the sites vary in size and shape. The EPA data we used in our analysis on the location of nonfederal NPL sites are current as of March 2019. We also obtained EPA data on contaminants and types of remedies that are current as of fiscal year 2014 to determine the number of contaminants EPA has identified in nonfederal NPL sites. We did not conduct further site-specific analyses, such as those related to the extent of contamination and location of remedies. We reviewed documents from the Agency for Toxic Substances and Disease Registry on the health effects of hazardous substances in nonfederal NPL sites and interviewed officials from that agency. To analyze whether nonfederal NPL sites are located in areas that may be impacted by flooding, we used ArcGIS mapping software to intersect the area of a 0.2-mile radius around the primary coordinate of the sites with the categories we defined from the National Flood Hazard Layer. If a site overlapped with areas in more than one of the four reporting groups, we categorized the site in the group representing the highest flood hazard. For the purposes of our report, we considered the highest flood hazard to be, in descending order, 1 percent or higher annual chance of flooding, other flood hazards (including 0.2 percent or higher annual chance of flooding), unknown flood hazard or no data, and minimal flood hazard. 
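The overlay logic described above, buffering each site's primary coordinate by 0.2 mile and, for flooding, assigning the highest overlapping hazard category, can be sketched in simplified form. This is not GAO's actual ArcGIS workflow: the sketch below is a hypothetical stand-in that represents hazard zones as sample points and substitutes a great-circle distance test for a true polygon intersection, solely to illustrate the buffer test and the descending-hazard ranking rule.

```python
import math

# Flood hazard reporting groups, ordered from highest to lowest hazard,
# mirroring the descending order described in the report.
HAZARD_RANK = [
    "1 percent or higher annual chance of flooding",
    "other flood hazards",
    "unknown flood hazard or no data",
    "minimal flood hazard",
]

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def classify_site(site_coord, hazard_points, radius_miles=0.2):
    """Return the highest-ranked hazard category whose sample point falls
    within radius_miles of the site's primary coordinate (the 0.2-mile
    buffer used as a proxy for site boundaries)."""
    overlapping = {
        category
        for (lat, lon, category) in hazard_points
        if haversine_miles(site_coord[0], site_coord[1], lat, lon) <= radius_miles
    }
    for category in HAZARD_RANK:  # descending hazard order
        if category in overlapping:
            return category
    return "minimal flood hazard"  # no hazard zone overlaps the buffer
```

In a real analysis the hazard layers are polygons, so the distance test would be replaced by a polygon intersection against the buffered point, but the "keep the highest overlapping category" rule is the same.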
To analyze whether nonfederal NPL sites are located in areas that may be impacted by storm surge, wildfires, and sea level rise, we used MapInfo mapping software to intersect the area of a 0.2-mile radius around the primary coordinates of sites with each of these layers. Overlap indicates that a site is potentially in an area that may be impacted. To assess the reliability of FEMA’s National Flood Hazard Layer, we reviewed FEMA’s methodology, guidelines, and standards; interviewed FEMA officials to assess the timeliness and accuracy of the data as well as any limitations of the data; conducted data testing to check for missing data and inconsistencies; and reviewed internal controls. We also reviewed a prior GAO report on the methodology FEMA uses to map flood hazards. To assess the reliability of NOAA’s data on Sea, Lake and Overland Surges from Hurricanes, we reviewed NOAA’s methodology for developing the model, interviewed NOAA officials to assess the timeliness and accuracy of the data as well as any limitations of the data, and reviewed internal controls. To assess the reliability of the U.S. Forest Service’s wildfire hazard potential data, we reviewed the agency’s documentation of the methodology, interviewed U.S. Forest Service officials to assess the timeliness and accuracy of the data as well as any limitations of the data, and reviewed internal controls. We also reviewed our past reports that cited the 2014 versions of these data. To assess the reliability of NOAA’s data on sea level rise, we reviewed the methodology NOAA used for developing the model, interviewed NOAA officials to assess the timeliness and accuracy of the data as well as any limitations of the data, and reviewed internal controls. 
To assess the reliability of EPA's data, we reviewed agency manuals and data dictionaries to understand data elements; interviewed EPA officials to assess the timeliness and accuracy of the data and related internal controls; conducted data testing; discussed inaccuracies with EPA officials; and obtained corrected data. For example, we compared the zip code of each nonfederal NPL site to its coordinate to check the accuracy of site locations. We shared potential errors with EPA officials, who corrected the coordinates of six sites. As a result of the steps described above, we found data from EPA, FEMA, NOAA, and the U.S. Forest Service to be sufficiently reliable for our purposes. To determine the extent to which EPA has managed risks to human health and the environment from the potential impacts of climate change effects on nonfederal NPL sites, we examined relevant provisions in CERCLA, EPA's implementing regulations, and executive orders. We also reviewed EPA documents (including climate change adaptation and implementation plans, vulnerability studies, training materials, and site-specific documents); our prior work; and relevant documents from other organizations, such as the National Research Council. We identified these documents by conducting a search of (1) websites of relevant agencies and organizations and (2) article databases. We also reviewed documents provided to us by agency officials and stakeholders that we identified as described below. We interviewed EPA officials at headquarters and all regional offices to identify information on agency actions for managing risks.
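The coordinate-accuracy screen mentioned above (comparing each site's recorded zip code to its primary coordinate) can be sketched as a simple distance check that flags sites whose coordinate lies implausibly far from the centroid of the recorded zip code. The centroid values, the 30-mile threshold, and the function names below are illustrative assumptions, not GAO's actual test procedure.

```python
import math

# Hypothetical zip-code centroids (lat, lon), approximate values only;
# a real check would use an authoritative zip-code centroid data set.
ZIP_CENTROIDS = {
    "08807": (40.5937, -74.6049),   # Bridgewater, NJ (approximate)
    "96001": (40.5698, -122.4106),  # Redding, CA (approximate)
}

def distance_miles(a, b):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * math.asin(math.sqrt(h))

def flag_suspect_coordinates(sites, max_miles=30.0):
    """Return IDs of sites whose coordinate is farther than max_miles
    from the centroid of the site's recorded zip code; sites with an
    unknown zip code are skipped rather than flagged."""
    flagged = []
    for site_id, zip_code, coord in sites:
        centroid = ZIP_CENTROIDS.get(zip_code)
        if centroid is not None and distance_miles(coord, centroid) > max_miles:
            flagged.append(site_id)
    return flagged
```

Flagged sites would then be referred back to the data owner for correction, as GAO did when it shared potential errors with EPA officials.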
In addition, to obtain their views of EPA's actions, we interviewed former EPA officials, representatives of two associations representing state officials (the Environmental Council of States and the Association of State and Territorial Solid Waste Management Officials), a professor of environmental law, and a private consultant who has worked on Superfund issues, whom we identified through the search described above and through recommendations from other interviewees. We generally contacted all identified stakeholders who appeared to be currently working on issues related to Superfund and climate change, and we interviewed those who agreed to speak with us. We also interviewed stakeholders at the three sites we selected as illustrative examples in order to obtain their views of EPA's actions. We selected three nonfederal NPL sites as illustrative examples of how EPA has managed risks to human health and the environment from potential impacts of climate change effects and challenges EPA may face in managing these risks. The three sites we selected are the (1) American Cyanamid site in Bridgewater, New Jersey; (2) Iron Mountain Mine site near Redding, California; and (3) San Jacinto River Waste Pits site in Channelview, Texas. To select these sites, we initially identified 43 sites based on information in EPA documents, news articles, and interviews with EPA officials and other stakeholders as described above. We selected relevant sites in three different EPA regions that illustrate a variety of potential climate change effects and that had experienced an extreme weather event in the past 10 years. To gather more in-depth information about these sites, we reviewed EPA and other documents; toured the sites; and interviewed EPA officials and relevant stakeholders at these sites, including state and local officials, representatives of potentially responsible parties, and community organizations.
The results from these illustrative examples are not generalizable to nonfederal NPL sites that we did not select. We compared EPA’s actions to manage risks to human health and the environment from the potential impacts of climate change effects with essential elements for managing risk as identified in our prior work on enterprise risk management. These essential elements are as follows: (1) align the risk management process with goals and objectives, (2) identify risks, (3) assess risks, (4) respond to the risks, (5) monitor the risks, and (6) communicate and report on the risks. We assessed information on EPA’s actions to determine the extent to which the agency’s actions aligned with these elements. In assessing EPA’s actions against these essential elements, we used “aligned,” “partially aligned,” or “not aligned” to reflect the extent to which EPA took actions aligned with each essential element. If EPA provided evidence that it had taken major actions in alignment with that essential element, we determined the actions were aligned. If EPA provided evidence that it had taken some actions in alignment with that essential element, we determined the actions were partially aligned. If EPA took only a few or no actions in alignment with that essential element, we determined the actions were not aligned. Two GAO analysts independently reviewed the information on EPA’s actions and then reached consensus on the extent to which EPA’s actions were aligned with each element. To identify the challenges EPA faces in managing these risks, we reviewed EPA documents; our prior work; and relevant documents from other organizations, including the National Research Council, that we obtained as described above. We interviewed EPA officials at headquarters and all regional offices and stakeholders in order to obtain their views on the challenges EPA faces. The views of stakeholders we interviewed are illustrative and not generalizable to all stakeholders. 
We reviewed the challenges that we identified in these documents and interviews and grouped all the challenges into three categories for reporting purposes: institutional, resource, and technical. Two GAO analysts independently reviewed the information and then reached consensus on the challenges and their grouping in the three categories. We conducted this performance audit from April 2018 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Environmental Protection Agency

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Barbara Patterson (Assistant Director), Ruth Solomon (Analyst in Charge), Breanne Cave, Charles Culverwell, Cindy Gilbert, Richard Johnson, Gwen Kirby, Krista Mantsch, Patricia Moye, Eleni Orphanides, Ernest Powell Jr., Dan Royer, and Kiki Theodoropoulos made key contributions to this report.
Why GAO Did This Study

Administered by EPA, Superfund is the principal federal program for addressing sites containing hazardous substances. EPA lists some of the most seriously contaminated sites—most of which are nonfederal—on the NPL and has recorded over 500 contaminants, including arsenic and lead, at those sites. Climate change may make some natural disasters more frequent or more intense, which may damage NPL sites and potentially release contaminants, according to the Fourth National Climate Assessment. GAO was asked to review issues related to the impact of climate change on nonfederal NPL sites. This report examines, among other objectives, (1) what available federal data suggest about the number of nonfederal NPL sites that are located in areas that may be impacted by selected climate change effects and (2) the extent to which EPA has managed risks to human health and the environment from the potential impacts of climate change effects at such sites. GAO analyzed available federal data; reviewed laws, regulations, and documents; interviewed federal officials and stakeholders; visited three nonfederal NPL sites that experienced natural disasters; and compared EPA actions to manage risk to GAO's six essential elements of enterprise risk management.

What GAO Found

Available federal data—from the Environmental Protection Agency (EPA), Federal Emergency Management Agency, National Oceanic and Atmospheric Administration, and U.S. Forest Service—on flooding, storm surge, wildfires, and sea level rise suggest that about 60 percent of all nonfederal National Priorities List (NPL) sites are located in areas that may be impacted by these potential climate change effects. Additional information on these sites can be viewed in an interactive map and downloadable data file, available here (see figure).
EPA's actions to manage risks to human health and the environment from potential impacts of climate change effects at nonfederal NPL sites align with three of the six essential elements of enterprise risk management GAO previously identified, partially align with two essential elements, and do not align with one essential element. For example, EPA has not taken actions consistent with one essential element because it has not aligned its process for managing risks with agency-wide goals and objectives, which do not mention climate change. Without clarifying this alignment, EPA cannot ensure that senior officials will take an active role in strategic planning and accountability for managing these risks.

What GAO Recommends

GAO is making four recommendations to EPA, including that it clarify how its actions to manage risks at nonfederal NPL sites from potential impacts of climate change align with current goals and objectives. EPA agreed with one recommendation and disagreed with the other three. GAO continues to believe that all four are warranted.
Background

Pertinent Regulations Governing Federal Contractors and Tax Debt

The FAR, among other things, sets forth requirements that must be met before agencies can award contracts to prospective contractors. Beginning February 26, 2016, contracting officers are required to include a provision in all contract solicitations that require contractors to report information about unpaid federal taxes regardless of the contract value. Specifically, FAR § 52.209-11 incorporates the language from the fiscal years 2015 and 2016 appropriations acts that prohibits the government from entering into contracts with corporations with unpaid federal taxes that have been assessed, for which all judicial and administrative remedies have been exhausted or have lapsed, and that are not being paid in a timely manner pursuant to an agreement with the authority responsible for collecting the tax liability, where the awarding agency is aware of the unpaid tax liability, unless an agency has considered suspension or debarment of the corporation and made a determination such action is not necessary to protect the interests of the government. If the prospective contractor reports having unpaid federal taxes under this provision, the contracting officer must request additional information from the prospective contractor; in accordance with agency procedures, notify the officials responsible for debarment and suspension actions, commonly referred to as the suspension and debarment officials (SDO); and not award to the corporation unless an agency SDO has considered suspension or debarment of the corporation and has made a determination that suspension or debarment is not necessary to protect the interests of the government. Additionally, the FAR requires that contracting officers include in certain contract solicitations another provision for prospective contractors to report delinquent taxes.
Specifically, contracting officers are also required to include FAR § 52.209-5 in contract solicitations in which the contract value is expected to exceed the simplified acquisition threshold (generally $150,000 at the time of our review); under this provision, prospective contractors report delinquent federal taxes owed. This requirement has been in place since 2008. The prospective contractor must report whether it or any of its principals have, within the preceding 3-year period, been notified of "delinquent federal taxes" in an amount that exceeds $3,500. For purposes of this provision, "delinquent federal taxes" are those for which the tax liability is finally determined and assessed, with no pending administrative or judicial challenge, and all judicial appeal rights are exhausted; and the taxpayer is delinquent in making payment, unless enforced collection action is precluded (the taxpayer is not delinquent if the taxpayer has entered into an installment agreement and is making timely payments in compliance with the agreement terms). If the prospective contractor reports having federal tax debt under this provision, the contracting officer must (1) request additional information from the prospective contractor and (2) in accordance with agency procedures, notify the officials responsible for debarment and suspension actions. Further, the contracting officer is not required to receive a suspension and debarment determination before contract award for tax debt reported under this certification. In addition, the FAR generally requires prospective contractors to register in SAM before a contract can be awarded. As part of registering in SAM, prospective contractors must make up to 54 representations and certifications, which must be updated as necessary but at least annually. Included among these are the FAR § 52.209-11 federal tax debt representation and the § 52.209-5 certification.
The representations and certifications in SAM must be kept current, accurate, and complete. Unpaid federal tax debts reported under FAR § 52.209-11 and delinquent federal taxes reported under § 52.209-5 do not automatically disqualify the prospective contractor from receiving a contract, but rather are used as part of the contracting officer's responsibility determination of the prospective contractor. Contracting officers rely on the contractors' representations and certifications in SAM to identify qualifying federal tax debts. Federal tax law generally prohibits the IRS from disclosing taxpayer data to other federal agencies for the purpose of determining whether potential contractors owe qualifying federal tax debt. As a result, contracting officers cannot verify a contractor's tax-debt status by obtaining taxpayer information directly from the IRS without the contractor's prior consent.

Pre–Contract Award Requirements Related to Tax Debt

In general, the federal pre–contract award process consists of the agency identifying its needs for goods and services, creating an acquisition plan, posting a solicitation that allows interested contractors to submit bids or proposals, and assessing and selecting a prospective contractor to meet its needs. Agency contracting personnel have a variety of pre–contract award responsibilities. As one of these responsibilities, the contracting officer is to identify the FAR provisions and clauses required to be included in contract solicitations based on various criteria, such as the contract type and contract value. For example, contracts expected to be above the simplified acquisition threshold are required to include § 52.209-5 in the solicitation.
After the solicitation is issued and prospective contractors' offers are obtained, the contracting officer, among other tasks, generally must verify that the prospective contractor is registered in SAM, and that the contractor is not suspended or excluded from doing business with the federal government prior to contract award. The contracting officer must also determine whether the prospective contractor is "responsible." FAR § 9.104-1 requires that to be determined responsible, prospective contractors must have adequate financial resources to perform the contract, or the ability to obtain them; have a satisfactory record of integrity and business ethics; and be otherwise qualified and eligible to receive an award under applicable laws and regulations, among other things. As part of the responsibility determination, the contracting officer must also access, review, and document the prospective contractor's applicable representations and certifications, including qualifying federal tax debt reported under § 52.209-11 and § 52.209-5. See figure 1 for an overview of the pre–contract award requirements related to tax debt.

IRS Levies to Collect Unpaid Taxes

The IRS, which is located in the Department of the Treasury (Treasury) and led by a commissioner, may collect unpaid taxes by seizing assets or payments, including federal contract payments; these collections are referred to as a "levy." The IRS will usually levy only after notifying the taxpayer in writing of the amount of the unpaid tax and the right of the taxpayer to request a hearing within a 30-day period before the levy occurs. However, if the taxpayer is a federal contractor, the taxpayer is given the opportunity for the hearing within a reasonable period after the levy.
One way the IRS levies federal contractor payments is through the FPLP, which is an automated program that can collect overdue taxes through a continuous levy on certain federal payments processed by Treasury's Bureau of the Fiscal Service (Fiscal Service). In addition to the FPLP, the IRS can also levy federal contractors manually. Specifically, the IRS may levy federal contractor payments directly from federal agencies to collect unpaid taxes.

Selected Agencies Have Controls to Identify Contractors' Reported Tax Debt, but the Controls Were Potentially Ineffective at Ensuring Compliance with Regulations

Agencies Have Control Activities to Identify Contractors That Reported Qualifying Federal Tax Debt

The five selected agencies we examined have established control activities to varying degrees to help contracting officers comply with federal laws and regulations related to identifying prospective contractors' reported qualifying federal tax debt. These control activities include the following:

Class Deviations: The five agencies issued class deviations from the FAR to implement the tax debt–related appropriations restriction prior to February 26, 2016. These class deviations generally required contracting officers to include an alternative provision in solicitations and, if a contractor reported having qualifying tax debt, to not award the contract without a written suspension and debarment determination from an agency SDO. For example, the Department of Defense, DOE, HHS, and VA issued class deviations as early as 2012 that required contracting officers to take two actions: (1) insert an alternate provision when issuing solicitations using appropriated funds and (2) obtain an SDO determination that suspension or debarment is not necessary to protect the interests of the government before awarding a contract to a contractor who reported qualifying tax debts.
Policies and Procedures: VA, DOE, and HHS issued policies and procedures to varying degrees that generally direct contracting officers to the relevant sections of the FAR when assessing contractor responsibility. For example, both VA and DOE issued policies or guidance on determining contractor responsibility and including § 52.209-5 in solicitations where the value was expected to exceed the simplified acquisition threshold. In addition, agency officials who supervise contracting officers told us that contracting officers use contractors’ representations and certifications in SAM to identify qualifying federal tax debts and document their review of the information when determining contractor responsibility before contract award. For example, one of the Navy’s responsibility-determination templates requires contracting officers to notate that they verified, in SAM, that the prospective contractor did not report qualifying federal tax debts under FAR § 52.209-5. Further, the five agencies have also issued procedures outlining the SDO suspension and debarment referral and review process, as required by federal regulations. For example, HHS issued guidance on suspension and debarment that includes (1) relevant contact information, (2) required or optional documentation to include, and (3) potential causes for suspension or debarment, such as the contractor reported qualifying federal tax debt. Both the Army and Navy issued policy alerts informing contracting officers of the February 26, 2016, effective date of FAR § 52.209-11 and the requirement that an SDO determine that suspension or debarment is not necessary to protect the interests of the government before awarding a contract to a contractor who reported having tax debts under this provision. 
Contract-File Compliance Tools: The five agencies told us that contracting officers have tools available that help ensure required information, including information related to federal tax debt, is reviewed and documented in contract files. For example, contracting officer supervisors and policy officials at these agencies told us that contracting officers use agency contract-writing systems to assist with identifying and inserting required FAR provisions and clauses in the contract solicitation. HHS and VA contracting officer supervisors also told us contracting officers use contract-file checklists to ensure required FAR provisions and clauses are included in the contract solicitation. In addition, some of the five selected agencies’ contract-file checklists or memorandums we reviewed generally document that the contracting officer verified the prospective contractor’s SAM registration and suspension and debarment status, and retrieved the relevant SAM representations and certifications before contract award. Further, some VA and DOE contract checklists we reviewed also document that the contracting officer considered tax debts reported under § 52.209-5 or federal tax debt in general (see fig. 2).

Periodic Compliance Reviews of Samples of Contracts: The five agencies’ policy officials and contracting officer supervisors we interviewed told us they generally conduct compliance reviews—including peer-to-peer, management, and legal compliance reviews—on a sample of contract files before and after contract award to ensure that the required FAR provisions and clauses are inserted in contract solicitations. Agency officials also told us this includes verifying that the contracting officer considered and documented the prospective contractors’ SAM representations and certifications before contract award.
For example, the Army’s procurement management review program is designed to ensure regulatory and policy compliance, among other things, via oversight by a multilevel program that reviews each contracting activity every 3 years.

Training: DOE and VA provide training that generally discusses contractor responsibility determinations and references the requirement that contracting officers inform the SDO when prospective contractors report that they have qualifying federal tax debt before contract award. The Department of Defense provides training on the causes for suspension, and the Navy SDO also provides training discussing the requirement to notify the SDO when prospective contractors report qualifying federal tax debt. HHS suspension and debarment staff we interviewed told us that they provide general suspension and debarment training that includes causes for suspension and debarment referrals, such as tax debt. Further, one Navy contracting office also provides training on inserting the tax-debt provision in all contract solicitations.

Selected Agencies’ Control Activities Potentially Did Not Ensure Compliance with Requirements Related to Contractors’ Reported Qualifying Tax Debt

We identified 1,849 contracts awarded by the five selected agencies in 2015 and 2016 to contractors that reported qualifying federal tax debt that potentially should have resulted in these agencies taking required follow-up actions before contract award, such as notifying the agency SDO of these tax debts. Specifically, according to our analysis of FPDS-NG and SAM data for this period, the five selected agencies potentially should have notified an SDO prior to awarding 1,849 contracts to contractors that reported having qualifying federal tax debt under their § 52.209-11 representation or § 52.209-5 certification, which we discuss further below.
However, none of the five selected agencies’ SDOs we interviewed were notified of any instances in which a contracting officer identified a prospective contractor with these reported qualifying federal tax debts, and they did not receive any tax debt–related referrals within this period. Agency officials we interviewed were unable to explain, without reviewing each of the 1,849 contract files, why the SDOs were not notified. Because referrals were not made to an SDO before awarding the contract, agencies’ control activities do not appear to have operated effectively to identify contractors’ reported tax debt and to consider suspension and debarment when required. As a result, these contracts may have been awarded without required actions being taken—a potential violation of federal regulations and, in some cases, the Antideficiency Act. In addition, we reviewed a nongeneralizable sample of seven contracts where prospective contractors reported qualifying tax debts before receiving contract awards and identified two illustrative examples where agency control activities did not ensure regulatory compliance. The tax debts for these contractors were collectively more than $250,000, and historical IRS tax records include instances where the IRS had assessed a Trust Fund Recovery Penalty (TFRP), indicating willful failure to collect, account for, or pay taxes owed. Nonetheless, the contracting officers awarded these two contracts without taking required follow-up actions for these awards. These contractors were awarded a total of more than $510,000 in contract obligations in 2015 and 2016.
Four Agencies Did Not Take Potentially Required Actions before Contract Award When Contractors Reported Qualifying Federal Tax Debt under FAR § 52.209-11

In our analysis of the five selected agencies, we identified 143 contracts at four of the agencies that were awarded to contractors who reported qualifying federal tax debt in SAM under § 52.209-11 from February 26, 2016, through December 31, 2016. Table 1 shows the number of contract awards to contractors who reported qualifying federal tax debt under § 52.209-11 from February 26, 2016, through December 31, 2016, by selected agency. We did not identify contracts awarded by DOE during this period to similar contractors, and thus did not assess the operational effectiveness of the agency’s control activities for compliance with its relevant class deviation. However, none of the four agencies that awarded these 143 contracts took required follow-up actions that potentially should have resulted from the contractor’s reporting qualifying tax debt before contract award. As mentioned earlier, when prospective contractors report having qualifying federal tax debt under § 52.209-11, the FAR requires that contracting officers (1) request that the contractor provide such additional information as the contractor deems necessary in order to demonstrate responsibility; (2) notify, in accordance with agency procedures, the SDO of the contractor’s reported qualifying federal tax debt, before award, for suspension and debarment review; and (3) not award the contract unless an SDO determines that further action is not required to protect the interest of the government. The FAR also requires that contracting officers possess or obtain information sufficient to determine whether the prospective contractor is responsible.
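The three preaward conditions described above reduce to a simple gate: a reported qualifying tax debt under § 52.209-11 does not bar award, but the award may not proceed unless the SDO was notified and found no further action necessary. The sketch below illustrates only that gate; the function and parameter names are hypothetical and are not part of any agency system.

```python
# Minimal sketch of the § 52.209-11 preaward gate described above.
# All names here are hypothetical illustrations, not agency software.

def award_may_proceed(reported_qualifying_tax_debt: bool,
                      sdo_notified: bool,
                      sdo_found_no_action_needed: bool) -> bool:
    """Return True only if the tax-debt-related preaward conditions are met.
    (The broader responsibility determination still applies either way.)"""
    if not reported_qualifying_tax_debt:
        return True  # no § 52.209-11 bar to award
    # Reported debt: the SDO must be notified before award, and must
    # determine that suspension or debarment is not necessary to protect
    # the interests of the government.
    return sdo_notified and sdo_found_no_action_needed
```

Under this framing, the 143 contracts discussed above could not have satisfied the gate, because the SDOs were never notified.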
As mentioned above, qualifying federal tax debts reported under this representation do not automatically disqualify the prospective contractor from receiving a contract, but rather are used as part of the contracting officer’s responsibility determination of the prospective contractor. In our review of contract-file documentation for seven contract awards to contractors that reported they had qualifying tax debt under either provision, we could determine for one case under this representation that the contracting officer did not take required follow-up actions to ensure compliance with federal regulations. We highlight this example in the sidebar to the left. Agency contracting officer supervisors we interviewed from the four selected agencies that awarded the 143 contracts discussed earlier told us that they were not aware of any instances in which a contracting officer identified a prospective contractor’s reported qualifying federal tax debt under § 52.209-11 and notified the SDO during this period. As mentioned, the SDOs we interviewed at these four agencies told us that they did not receive, nor were they aware of, any notifications to review prospective contractors that reported having qualifying federal tax debt during this period. All four of these SDOs told us that they track notifications to the SDO manually or via a case-management tracking system. Further, none of the agency officials we interviewed at the selected agencies were able to identify specific reasons a contracting officer would not notify an SDO of reported qualifying federal tax debt as required.

Five Agencies Did Not Take Potentially Required Actions before Contract Award When Contractors Reported Qualifying Federal Tax Debt under FAR § 52.209-5

Our analysis of the five selected agencies also identified 1,706 contracts awarded in 2015 and 2016 to contractors that reported having qualifying federal tax debt in SAM under § 52.209-5.
Table 2 shows the number of contract awards to contractors that reported having qualifying tax debt under § 52.209-5 in 2015 and 2016, by selected agency. However, none of the five agencies that awarded these 1,706 contracts took required follow-up actions that potentially should have resulted from the contractor’s reporting qualifying tax debt before contract award. As mentioned above, as early as 2008, contractors were required to certify whether they had qualifying federal tax debt if, within the preceding 3-year period, they or any of their principals had been notified of “delinquent federal taxes” in an amount that exceeds $3,500 for which the liability remained unsatisfied. Also as previously mentioned, tax debts must be reported under this provision only if the tax liability is finally determined with no pending administrative or judicial challenge, all judicial appeal rights have been exhausted, enforcement action is not precluded, and the taxpayer is not in compliance with an installment repayment agreement. Qualifying federal tax debts reported under this certification do not automatically disqualify the prospective contractor from receiving a contract, but rather are used as part of the contracting officer’s responsibility determination. Further, contracting officers are to insert this FAR provision in solicitations where the value of the contract is expected to be greater than the simplified acquisition threshold. If a prospective contractor reports qualifying tax debt, contracting officers must request such additional information as the contractor deems necessary in order to demonstrate responsibility, and, prior to proceeding with the award, notify the agency’s SDO in accordance with agency procedures.
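The reporting criteria summarized above can be restated as a small classification routine. This is an illustrative sketch only: the function and parameter names are assumptions for exposition, not IRS or SAM data elements.

```python
# Illustrative sketch of the § 52.209-5 qualifying-tax-debt criteria
# summarized above. Names are hypothetical, for illustration only.

CERTIFICATION_THRESHOLD = 3_500  # dollars; the § 52.209-5 notification threshold

def is_qualifying_tax_debt(finally_determined: bool,
                           challenge_pending: bool,
                           appeals_exhausted: bool,
                           enforcement_precluded: bool,
                           timely_paying_under_agreement: bool) -> bool:
    """A liability qualifies only if it is finally determined, faces no
    pending administrative or judicial challenge, all judicial appeal
    rights are exhausted, enforcement action is not precluded, and the
    taxpayer is not in compliance with an installment agreement."""
    return (finally_determined
            and not challenge_pending
            and appeals_exhausted
            and not enforcement_precluded
            and not timely_paying_under_agreement)

def triggers_certification(amount: float, qualifying: bool) -> bool:
    """The certification is triggered only for qualifying debt above $3,500."""
    return qualifying and amount > CERTIFICATION_THRESHOLD
```

For example, under this sketch a $5,000 finally determined liability with no pending challenge triggers the certification, while the same liability being timely paid under an installment agreement does not.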
While we cannot readily determine whether all 1,706 contract awards were out of compliance with federal regulations due to limitations in the data, as discussed earlier, our review of seven contract awards with reported qualifying tax debt under either provision identified an instance under this certification where we confirmed that the solicitation was above the simplified acquisition threshold and the contracting officer did not take follow-up actions to ensure compliance with federal regulations (see sidebar to the left). As mentioned, agency contracting officer supervisors we interviewed from the five agencies told us that they were not aware of any instances in which a contracting officer identified a prospective contractor’s reported qualifying federal tax debt under § 52.209-5 and notified the SDO during this period. Further, SDOs we interviewed at these five agencies told us that they did not receive, nor were they aware of, any notifications identifying prospective contractors that reported qualifying federal tax debt under this FAR provision during this period. As mentioned earlier, four out of the five SDOs told us that they track SDO notifications, and none of the agency officials we interviewed identified specific reasons a contracting officer would not notify an SDO as required. When we discussed these 1,849 contracts with agency officials, they were unable to explain whether or why their control activities did not operate effectively to ensure compliance with applicable federal laws and regulations. To do so, some of these officials told us that they would need to review the contract files for each of the 1,849 instances of potential noncompliance we identified. Specifically, the agency must confirm that (1) a solicitation was issued, and (2) the estimated value of the contract award was above the simplified acquisition threshold, when applicable, to determine whether the regulatory requirements applied.
If the regulatory requirement applied to the contract award, the agency must then determine why its control activities did not operate effectively to ensure compliance. We plan to refer these contract awards to the appropriate agency’s Inspector General for review, and share them with the agencies at that time as well. Understanding why existing control activities potentially did not operate effectively will help these agencies ensure they are taking necessary steps to protect the interests of the government and avoid the misuse of appropriated funds in the future. The five selected agencies told us, in response to our review, they plan to take actions to improve control activities to identify contractors’ federal tax debts reported under § 52.209-11 and § 52.209-5. These planned actions include issuing new guidance, providing additional training, verifying that contracting officers considered reported tax debts in postaward compliance reviews, and updating preaward contract-file checklists to ensure compliance with federal laws and regulations. Some of the selected agencies also noted that the FAR requirements apply to all executive agencies and that a broader solution to accessing, identifying, and reviewing qualifying federal tax debt reported in SAM representations and certifications could be useful. Agency officials explained that contracting officers have to individually identify and review each relevant representation and certification—up to 54 representations and certifications—to become aware of the prospective contractor’s response before contract award. Further, agency officials told us that contractors’ responses are not easily identifiable in SAM and contracting officers can miss the contractor’s reported qualifying federal tax debt under § 52.209-11 and § 52.209-5.
As mentioned earlier, accessing, reviewing, and documenting the SAM representations and certifications is one of the actions contracting officers are required to take during the preaward contracting process. The SAM tax-related representations and certifications that must be reviewed before contract award are determined by various factors, including contract award value. See figure 3 for an overview of the general process to access, review, and identify prospective contractors’ qualifying tax debts reported in SAM. As mentioned earlier, GSA manages SAM, and while the GSA official we interviewed acknowledged the challenges raised by the selected agencies, this official noted that SAM representation and certification data are accessible to contracting officers for the purpose of reviewing qualifying federal tax debt reported by prospective contractors and taking any required follow-up actions. Nevertheless, this official noted that GSA is in the process of upgrading SAM, which may include changes to the representations and certifications. Standards for Internal Control in the Federal Government state that management should use high-quality information to achieve its objectives and should consider the accessibility of information, making revisions when necessary so that needed information remains accessible. As GSA makes planned upgrades to SAM, it is in a position to consider improvements to SAM users’ experience with representations and certifications that may help executive-branch agency contracting officers more easily identify contractors’ reported qualifying federal tax debt under § 52.209-11 and § 52.209-5.

Federal Contracts Were Awarded to Thousands of Contractors with Potentially Qualifying Federal Tax Debt

Of the 120,000 federal contractors that were awarded contracts in 2015 and 2016, our analysis found that over 4,600 of them had unpaid taxes at the time they received the award.
These contractors collectively owed $1.8 billion in unpaid taxes as of December 15, 2016, and received contract award obligations totaling $17 billion. We could not confirm, however, whether at the time of the contract awards these contractors’ unpaid taxes met the relevant legal definitions of qualifying federal tax debt under § 52.209-11 and § 52.209-5 due to limitations in the data. However, we were able to determine which debts likely met the definition of qualifying tax debt and which did not as of December 15, 2016—a date after the contract award. Specifically, over 2,700 of these contractors had unpaid taxes that were all likely qualifying federal tax debt as of December 15, 2016. In addition, about 1,900 had unpaid taxes that were not qualifying federal tax debt. As previously noted, agencies are required by the FAR to consider contractors’ reported qualifying federal tax debt before awarding contracts. Generally, as mentioned earlier, agencies are not restricted from awarding contracts to contractors that report having qualifying federal tax debt if an agency SDO determines suspension and debarment of the contractor is not necessary to protect the interests of the government. We describe characteristics of the unpaid taxes and contract awards for these 4,600 contractors with unpaid taxes below. (See fig. 4.)

Contractors Owed Unpaid Taxes at the Time They Received Contract Awards

We identified over 4,600 federal contractors that had unpaid taxes at the time they received a contract award in 2015 and 2016. However, we could not confirm whether these contractors’ unpaid taxes met the relevant legal definitions under § 52.209-11 and § 52.209-5 at the time of the contract award due to limitations in the data we obtained, as previously described. These 4,600 contractors received about $17 billion in contract awards and owed $1.8 billion in unpaid taxes as of December 15, 2016.
The characteristics of these 4,600 federal contractors with unpaid taxes as of December 15, 2016, are discussed below:

Average and Total Debt Associated with Contractors with Unpaid Taxes: About 1,000 contractors had unpaid taxes of at least $51,000 each. These contractors collectively owed about 98 percent of the $1.8 billion in unpaid taxes we identified. About 1,900 contractors each had unpaid taxes between $3,500 and $51,000. They collectively owed about $30 million in taxes. About 1,700 contractors each had unpaid taxes over $100 but less than $3,500. They collectively owed about $2 million in taxes.

Chief Financial Officers (CFO) Act Agencies Associated with Contractors with Unpaid Taxes: The 4,600 contractors with unpaid taxes as of December 15, 2016, received contract awards in our 2-year period from one or more of all 24 CFO Act agencies. Almost 1,500 contractors received contract awards from more than one agency. These contractors owed almost $600 million in unpaid taxes as of December 15, 2016 (see sidebar to the left). Although, as discussed above, we reviewed the control activities of five agencies, all executive-branch agencies are required by the FAR to consider the qualifying federal tax debt of prospective contractors before making an award. If a contractor is receiving awards from multiple federal agencies, the suspension and debarment determination of any agency SDO is relevant to other agencies considering the same contractor for an award. For example, as discussed earlier, we identified 1,849 contract awards by five selected agencies to contractors that reported qualifying tax debt before contract award, and none of these agency SDOs were notified. There were some instances where more than one agency made a contract award to the same contractor that reported having qualifying tax debts. These obligations might not have been made by multiple agencies if one of these agencies’ SDOs had been notified of the reported tax debt as required.
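The debt-band figures above can be tallied as a quick consistency check. The sketch below uses the report’s rounded numbers, so the totals are approximate rather than exact.

```python
# Quick consistency check of the rounded debt-band figures reported above.
# (count of contractors, approximate total unpaid taxes in dollars)
bands = {
    "at least $51,000":           (1_000, 0.98 * 1.8e9),  # ~98% of total debt
    "$3,500 to $51,000":          (1_900, 30e6),
    "over $100 but under $3,500": (1_700, 2e6),
}

total_contractors = sum(count for count, _ in bands.values())
total_debt = sum(debt for _, debt in bands.values())

print(total_contractors)           # 4600 contractors across the three bands
print(round(total_debt / 1e9, 1))  # roughly 1.8 (billion dollars)
```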
Contractors with Unpaid Taxes and Associated with TFRP: We also identified about 600 contractors whose tax records indicate the IRS assessed a TFRP against the owner or officers associated with the contractor, as shown in the sidebar to the left. As mentioned previously, a TFRP indicates willful failure to collect, account for, or pay certain taxes owed. These 600 contractors had $200 million in unpaid taxes as of December 2016. Having a TFRP does not disqualify a contractor from obtaining a contract, but it can be considered when the agency determines a prospective contractor’s responsibility under the FAR, according to agency contracting and suspension and debarment officials (SDOs) we spoke with.

Over 2,700 Federal Contractors Likely Had Qualifying Federal Tax Debt on December 15, 2016, but Few Reported Qualifying Tax Debt in SAM

We found that over 2,700 contractors owed about $350 million in unpaid taxes that likely met the relevant legal criteria for qualifying federal tax debt on December 15, 2016. However, few of those contractors reported having qualifying tax debts in SAM. Because the contracts were awarded before December 2016, we cannot determine whether these unpaid taxes met the relevant legal criteria under § 52.209-11 and § 52.209-5 for qualifying federal tax debt at the time of the contract award. However, because these tax debts were unpaid as of December 15, 2016, we determined they were likely qualifying tax debts because they were not being timely paid consistent with a collection agreement and appeared to be finally determined. These tax debts amounted to about 20 percent of the $1.8 billion in unpaid taxes we identified. The 2,700 contractors received almost $5 billion of the $17 billion in federal contract obligations for awards made to contractors with unpaid taxes. We examined the SAM § 52.209-11 representations and § 52.209-5 certifications for these more than 2,700 contractors to determine whether they reported this debt as qualifying federal tax debt.
We identified about 2,000 contractors that had completed a representation or certification, and, when applicable, met the tax-debt threshold for § 52.209-5. Of those 2,000, 93 percent (1,848) did not report their debt as qualifying federal tax debt, compared to fewer than 150 who did report qualifying federal tax debt under one or both tax-debt provisions (see sidebar to the left). Specifically:

Over 1,300 contractors completed the § 52.209-11 representation in SAM (which took effect on Feb. 26, 2016), and fewer than two dozen of these contractors reported having qualifying federal tax debt under § 52.209-11 before receiving contract awards.

Nearly 1,400 contractors completed the § 52.209-5 certification in SAM and, as of December 15, 2016, had unpaid taxes over the certification threshold. Fewer than 140 of these contractors reported under § 52.209-5 that they had been notified of qualifying federal tax debt above $3,500 before receiving a contract award.

The accuracy of contractors’ reported tax-debt status in SAM is critical to federal agencies’ ability to identify reported qualifying federal tax debt owed by prospective contractors. As described earlier, contracting officers generally rely on the contractors’ representations and certifications in SAM to identify qualifying federal tax debts. Contracting officers generally cannot verify a contractor’s tax-debt status by obtaining taxpayer information directly from the IRS without the contractor’s prior consent, because federal tax law generally prohibits the IRS from disclosing taxpayer data for this purpose. While contracting officers cannot independently verify whether federal contractors accurately report qualifying federal tax debt, any qualifying federal tax debt may be available for levy by the IRS, as discussed further below.
About 1,900 Federal Contractors Owed Unpaid Taxes That Were Not Qualifying Federal Tax Debt as of December 15, 2016

We found that about 1,900 contractors had about $1.4 billion in unpaid taxes that did not meet the relevant criteria for qualifying federal tax debt on December 15, 2016, a date after their contracts were awarded. Specifically, these unpaid taxes were not finally determined or were being paid in a timely manner consistent with a collection agreement as of December 15, 2016. If the status of these debts was the same at the time of contract award, then the contractors did not need to report them during the contracting process and agencies were not required to consider the debts before awarding the contract. Although we were able to determine that these unpaid taxes did not meet the legal definitions of qualifying federal tax debt as of December 15, 2016, we could not determine whether this was also the case at the time of the contract award. Federal agencies obligated $12 billion to these 1,900 contractors between 2015 and 2016, for awards made while the contractors owed taxes. Of these 1,900 contractors, about 1,400 owed $1.3 billion in unpaid taxes that were not finally determined on December 15, 2016. About 700 contractors owed $90 million in unpaid taxes that were being timely paid consistent with a collection agreement as of December 15, 2016, due to installment agreements or offers-in-compromise accepted by the IRS.

The IRS Identified Most Federal Contractors with Unpaid Taxes for Levy, but the FPLP Cannot Comprehensively Identify All Federal Contractors for Levy

Through its FPLP, the IRS identified for levy most contractors we found to have likely qualifying federal tax debt, according to our analysis of IRS data.
Specifically, of the over 2,700 executive-branch agency contractors with likely qualifying federal tax debt as of December 15, 2016, discussed above, the IRS identified over 2,000 for levy through the FPLP, a program administered by Treasury’s Fiscal Service. These 2,000 contractors collectively owed about $300 million of the roughly $350 million in likely qualifying federal tax debt. According to IRS data, the FPLP did not identify almost 700 of the 2,700 contractors we found to have likely qualifying federal tax debt as of December 15, 2016. These 700 contractors owed about $50 million in likely qualifying federal taxes. IRS officials responsible for the FPLP told us that they would need to review these instances to determine whether the contractors were eligible for levy as of December 15, 2016, and, if so, why they were not identified by the FPLP. We plan to share these cases with the IRS so that it can determine whether the contractors were eligible for levy at that time and take any appropriate enforcement action. It is possible that the IRS did not identify these 700 contractors for levy through the FPLP because the IRS did not have access to their payments. The FPLP was developed as an automatic and efficient means for the IRS to collect delinquent taxes as payments were processed through the Fiscal Service. Accordingly, the FPLP can only levy federal agency payments processed by the Fiscal Service, but not all federal agencies process their payments through the Fiscal Service. As a result, payments disbursed by other means—such as payments that agencies make directly to contractors—are not included in the FPLP, although they can be levied by the IRS through other manual methods (see fig. 5). The IRS cannot readily identify which payments are made outside of the Fiscal Service, and such payments cannot be levied through the FPLP.
While the IRS receives some information about contractor payments from agencies, it does not receive information that would allow it to comprehensively determine which payments are processed by the Fiscal Service and can be levied through the FPLP and which payments are not and must be levied manually. Specifically, executive-branch agencies, including those that do and do not process payments through the Fiscal Service, are required to report information to the IRS about some federal contracts through the IRS Form 8596 information return. Reporting agencies identify themselves on the Form 8596, and the IRS uses data from this form to identify federal contractors for potential levy. However, the Form 8596 information return lacks information on whether payments to federal contractors are processed by the Fiscal Service or through some other means. Without visibility into the payments made outside the Fiscal Service, the IRS is limited in its ability to identify nonparticipating agencies for outreach about the efficiencies of leveraging the FPLP to collect contractors’ unpaid taxes, as opposed to manual levies. Further, without information on agencies’ payment methods, the IRS cannot quickly identify payments that must be levied through manual methods. Expanding Form 8596 to include payment-method information could help the IRS identify which agencies to target for outreach and avoid delays in identifying contractor payments requiring manual levy. IRS officials told us the IRS has the legal authority to expand Form 8596 reporting requirements and would have to determine whether a change to add information on Fiscal Service processing of agency payments was warranted. In addition, we found the IRS is missing an opportunity to further enhance the FPLP levy process for certain contractor payments. 
Within the FPLP, the IRS has an expedited process to levy federal contractors and, as noted above, the IRS uses data from Form 8596 to identify federal contractors for potential levy. However, Form 8596 reporting requirements do not apply to federal contracts for which the amount obligated is $25,000 or less. When Form 8596 reporting requirements were initially established, this threshold was consistent with Federal Procurement Data System (FPDS) contract reporting requirements for agencies at the time. However, subsequent FAR amendments revised the reporting threshold from contracts over $25,000 to contracts over the micropurchase amount, which is currently set at $10,000. Because the Form 8596 reporting threshold is higher than FPDS reporting requirements, information about contracts in the $10,000 to $25,000 range is available in FPDS, but is not required to be shared with the IRS. Such information could help the IRS identify and use expedited levy procedures on federal contractors with contract obligations in the $10,000–$25,000 range. According to the IRS, an amendment to its regulations would be needed to align the Form 8596 reporting threshold with FPDS reporting requirements. Standards for Internal Control in the Federal Government state that management should use high-quality information to achieve the entity’s objectives. To do this, management obtains relevant data from reliable internal and external sources, processes the obtained data into high-quality information, and uses high-quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives. Without additional information about and from the agencies making these payments, the IRS may be missing opportunities to identify federal contractors for levy to enhance tax collections.
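The threshold gap described above can be illustrated with a short sketch: contracts above the micropurchase threshold appear in FPDS, but Form 8596 reporting to the IRS applies only above $25,000, so obligations in between are visible in FPDS yet never reach the IRS’s expedited levy process. The thresholds below are the ones named in the text; the obligation amounts are invented for illustration.

```python
# Illustration of the FPDS / Form 8596 reporting gap described above.
FPDS_THRESHOLD = 10_000       # current micropurchase amount
FORM_8596_THRESHOLD = 25_000  # Form 8596 reporting applies above this

# Hypothetical contract obligation amounts (dollars), invented for illustration.
obligations = [8_000, 12_500, 18_000, 24_000, 40_000, 75_000]

in_fpds = [o for o in obligations if o > FPDS_THRESHOLD]
reported_to_irs = [o for o in obligations if o > FORM_8596_THRESHOLD]
gap = [o for o in in_fpds if o <= FORM_8596_THRESHOLD]

print(gap)  # [12500, 18000, 24000] -- in FPDS but never reported to the IRS
```

Aligning the Form 8596 threshold with the FPDS threshold, as the report suggests, would empty the `gap` list.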
Conclusions

Considering prospective contractors’ reported qualifying federal tax debt—in accordance with federal regulations—helps ensure federal agencies comply with federal appropriations law, supports the integrity of the contracting process, and protects the interests of the government. The five federal agencies we reviewed had control activities, such as policies, procedures, and training, to help ensure contracting officers consider prospective contractors’ reported qualifying federal tax debt before making an award. However, these controls were not always effective in ensuring that potentially required actions were taken. Determining the reasons the contracts we identified were awarded without appropriate consideration of contractors’ reported qualifying federal tax debt, and taking additional steps to ensure tax debts are appropriately considered in future contract award decisions, are necessary to ensure contracting opportunities are appropriately awarded. Improving the accessibility of SAM representation and certification data to allow contracting officers to more easily identify and consider reported qualifying federal tax debt before contract award can help contracting officers complete required steps, such as referring contractors to the SDO. Federal tax law generally prohibits the IRS from disclosing taxpayer data to other federal agencies for the purpose of determining whether potential contractors owe qualifying federal tax debt. Consequently, federal agencies generally rely on contractors’ reported qualifying federal tax debt to detect any tax debt owed by their potential contractors. However, agencies cannot independently verify the accuracy of contractors’ reported qualifying federal tax debts when awarding contracts.
This limitation heightens the importance of the IRS’s levy process for recouping revenue from businesses that have failed to pay their taxes in a timely way but are receiving federal contract dollars; recouping this revenue can help reduce the tax gap. Accordingly, the IRS has opportunities to use available data to improve its detection and collection of qualifying federal tax debts owed by federal contractors, which can help enhance revenue collection and compliance.

Recommendations for Executive Action

We are making 12 recommendations—two each to the Army, HHS, the Navy, and VA; one each to DOE and GSA; and two to the IRS.

The Senior Procurement Executive for the Department of the Army should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-11 and (1) determine whether the contracting officer was required to consider the contractor’s reported tax debt; if so, (2) determine the reasons controls to identify and refer these contractors to the SDO before contract award did not operate effectively; and (3) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 1)

The Senior Procurement Executive for HHS should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-11 and (1) determine whether the contracting officer was required to consider the contractor’s reported tax debt; if so, (2) determine the reasons controls to identify and refer these contractors to the SDO before contract award did not operate effectively; and (3) design or modify controls to help ensure compliance with applicable regulations.
(Recommendation 2)

The Senior Procurement Executive for the Department of the Navy should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-11 and (1) determine whether the contracting officer was required to consider the contractor’s reported tax debt; if so, (2) determine the reasons controls to identify and refer these contractors to the SDO before contract award did not operate effectively; and (3) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 3)

The Senior Procurement Executive for VA should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-11 and (1) determine whether the contracting officer was required to consider the contractor’s reported tax debt; if so, (2) determine the reasons controls to identify and refer these contractors to the SDO before contract award did not operate effectively; and (3) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 4)

The Senior Procurement Executive for the Department of the Army should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-5. Specifically, the Senior Procurement Executive should determine whether each contract value was expected to exceed the simplified acquisition threshold when the solicitation was issued and, if so, (1) determine the reasons controls to identify and notify the SDO of these contractors before contract award did not operate effectively and (2) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 5)

The Senior Procurement Executive for DOE should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-5.
Specifically, the Senior Procurement Executive should determine whether each contract value was expected to exceed the simplified acquisition threshold when the solicitation was issued and, if so, (1) determine the reasons controls to identify and notify the SDO of these contractors before contract award did not operate effectively and (2) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 6)

The Senior Procurement Executive for HHS should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-5. Specifically, the Senior Procurement Executive should determine whether each contract value was expected to exceed the simplified acquisition threshold when the solicitation was issued and, if so, (1) determine the reasons controls to identify and notify the SDO of these contractors before contract award did not operate effectively and (2) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 7)

The Senior Procurement Executive for the Department of the Navy should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-5. Specifically, the Senior Procurement Executive should determine whether each contract value was expected to exceed the simplified acquisition threshold when the solicitation was issued and, if so, (1) determine the reasons controls to identify and notify the SDO of these contractors before contract award did not operate effectively and (2) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 8)

The Senior Procurement Executive for VA should review the contracts we identified as being awarded to contractors that reported qualifying federal tax debt under FAR § 52.209-5.
Specifically, the Senior Procurement Executive should determine whether each contract value was expected to exceed the simplified acquisition threshold when the solicitation was issued and, if so, (1) determine the reasons controls to identify and notify the SDO of these contractors before contract award did not operate effectively and (2) design or modify controls to help ensure compliance with applicable regulations. (Recommendation 9)

The Administrator of GSA should coordinate with the appropriate SAM users, such as agency procurement officials, to identify potential updates to facilitate contracting officers’ identification of contractors that report qualifying federal tax debt under the § 52.209-11 representation and § 52.209-5 certification. (Recommendation 10)

The Commissioner of the IRS should evaluate options to identify which contract payments federal agencies expect to be processed by the Fiscal Service, including amending the reporting requirements for Form 8596 to require federal agencies to include information about whether contractor payments are expected to be processed by the Fiscal Service. If the IRS amends Form 8596 reporting requirements, the IRS should (1) systematically note this information on taxpayer accounts to help the IRS identify which payments may be available for levy through the FPLP and which payments may be available for other (i.e., manual) levies and (2) analyze these data to help identify agencies that do not participate in the FPLP and inform its efforts to expand the number of agencies participating in the FPLP.
(Recommendation 11)

The Commissioner of the IRS should evaluate options to obtain comprehensive contract payment data above the existing FPDS-NG reporting threshold of $10,000, including assessing the costs and benefits of changing the current threshold for contracts that agencies are required to report to the IRS through Form 8596 information returns to be consistent with the existing reporting threshold for FPDS-NG, determine whether regulatory revisions are necessary, and change the reporting threshold, if appropriate. (Recommendation 12)

Agency Comments

We provided a draft of this report to the Department of Defense (for the Army and Navy), HHS, VA, DOE, GSA, the IRS, and the Office of Management and Budget for review and comment. In written comments (reproduced in appendixes II–VI), the Department of Defense, HHS, VA, DOE, and GSA agreed with our recommendations. The IRS generally agreed with our recommendations (see appendix VII). The Office of Management and Budget had no comments. HHS and the Navy provided technical comments, which we incorporated as appropriate. The Department of Defense, HHS, VA, and DOE noted that they plan to review the contract awards identified in our review. In addition, several agencies described steps they will be taking to address our recommendations. For example, the Department of Defense noted that it plans to take corrective actions or add controls as necessary. HHS noted that it will assess internal controls and take appropriate action. VA noted that it will provide an action plan. DOE noted that it will design or modify controls for regulatory compliance, if necessary. GSA noted that it will work with the procurement community through established governance channels to identify potential approaches for drawing contracting officers’ attention to qualifying federal tax-debt information reported in SAM.
The IRS noted its commitment to obtaining accurate information on potential levy sources and, accordingly, indicated it will review the benefits of expanding the information included on its Form 8596, along with other alternatives, to determine their feasibility, effectiveness, and relative burden. The IRS further noted that it will review the potential benefits and costs that would result from changing the current reporting threshold for contract payments, and submit its findings to the Office of IRS Chief Counsel to consider this addition to the IRS Priority Guidance Plan.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, the Secretary of Veterans Affairs, the Secretary of Defense, the Secretary of the Navy, the Secretary of the Army, the Secretary of Energy, the Administrator of GSA, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or shear@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Objectives, Scope, and Methodology

This report first examines the extent to which, in calendar years 2015 and 2016, (1) selected federal agencies had control activities that ensured contractors’ reported federal tax debts were considered before contract award.
The remainder of the report assesses the same period; however, it focuses on all executive-branch agencies and examines the extent to which (2) federal contracts were awarded to contractors with federal tax debt, including the characteristics of those contracts and contractors, and (3) the Internal Revenue Service (IRS) identified selected federal contractors’ payments for levy. To identify the extent to which selected federal agencies had control activities that ensured contractors’ reported federal tax debts were considered before contract award (including task orders), we analyzed contract obligation information from the Federal Procurement Data System–Next Generation (FPDS-NG) and selected for our review the five agencies with the highest contract obligations associated with contract awards for 2015 and 2016, the most-recent period for which contract award data were available at the time of our review. In addition, the revised Federal Acquisition Regulation (FAR) tax-debt provision went into effect during this period. Specifically, we selected the three civilian agencies with the highest obligations—the Departments of Energy (DOE), Health and Human Services (HHS), and Veterans Affairs (VA)—and, within the Department of Defense, the two agencies with the highest obligations—the Departments of the Army and Navy. The results of our review of these five selected agencies are not generalizable to all federal agencies. However, these five agencies awarded about 51 percent of contract obligations associated with contract awards for 2015 and 2016.
We reviewed selected agencies’ policies and procedures related to awarding contracts to prospective contractors that report they owe certain tax debts, and we met with agency officials to discuss how their agencies consider contractors’ reported federal tax debt before awarding a federal contract. Specifically, we met with agency officials who supervise contracting officers (such as the Head of Contracting Activity, Director of Contracts, or other contracting managers), as well as policy and procurement officials and suspension and debarment officials from the selected agencies. Additionally, we reviewed and analyzed applicable laws and regulations, as well as applicable policies and procedures from DOE, HHS, VA, the Navy, and the Army for considering contractors’ reported federal tax debt when awarding federal contracts. In addition, we interviewed staff from the Office of Management and Budget’s Office of Federal Procurement Policy and officials from the Interagency Suspension and Debarment Committee and the Civilian Agency Acquisition Council to obtain an understanding of how the law is implemented through the FAR. We also met with the General Services Administration (GSA) to obtain an understanding of the System for Award Management (SAM), including the registration of prospective contractors and their reporting of certain federal tax debt in response to the representation requirement of FAR § 52.209-11 and the certification requirement of FAR § 52.209-5. As part of this work, we analyzed FPDS-NG contract award and SAM contractor registration data to identify instances where contractors reported having certain qualifying federal tax debt and received a contract award (including task orders). Specifically, we electronically matched FPDS-NG contract award data from 2015 and 2016 to the relevant contractors’ SAM registrations.
We then analyzed the relevant contractors’ representations and certifications most recently updated in SAM before the relevant contract award to identify all instances where contractors reported that they had a federal tax debt as defined in FAR § 52.209-11 or § 52.209-5 within our time frame. From the resulting list, we identified the contracts that selected agencies awarded to contractors that reported these qualifying federal tax debts. In addition, we reviewed a nongeneralizable sample of 15 contract awards selected from the five selected agencies to provide illustrative examples of the extent to which these agencies’ control activities ensured required actions were taken before contract award. These 15 contract awards were selected based on numerous criteria, including the prospective contractors’ (1) responses under FAR § 52.209-11 or § 52.209-5 in SAM before the new contract award, and (2) having tax debts as of December 15, 2016, that were not in a repayment agreement with the IRS. Further, when selecting contract awards that had a § 52.209-5 certification, we considered only contractors having at least $3,500 in tax debts as of December 15, 2016. We identified the relevant contractor population and then considered the following factors simultaneously to select the 15 case examples: unique contractor Taxpayer Identification Number across selected agency contracting office locations, the amount of tax debt owed by the prospective contractor, the amount of award obligations, and IRS assessment of a Trust Fund Recovery Penalty (TFRP). We selected case examples that represent a variety of these factors. We reviewed seven contract awards made to contractors that reported that they had certain tax debts and eight contract awards made to contractors that reported that they did not have certain tax debts as part of their § 52.209-11 representations and § 52.209-5 certifications in SAM. 
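One way to operationalize "most recently updated in SAM before the relevant contract award" is an as-of merge: for each award, take the latest registration record dated on or before the award date. The sketch below is a hypothetical illustration, not GAO's actual procedure; the field names, dates, and use of pandas are our assumptions.

```python
import pandas as pd

# Hypothetical award and registration records; real FPDS-NG/SAM extracts
# have many more fields. DUNS numbers here are fabricated.
awards = pd.DataFrame({
    "duns": ["111111111", "222222222"],
    "award_date": pd.to_datetime(["2015-06-01", "2016-03-15"]),
})
sam = pd.DataFrame({
    "duns": ["111111111", "111111111", "222222222"],
    "updated": pd.to_datetime(["2015-01-10", "2015-08-01", "2016-02-20"]),
    "reports_tax_debt": [True, False, True],  # § 52.209-11 / § 52.209-5 response
})

# For each award, keep the registration most recently updated on or before
# the award date (merge_asof requires both frames sorted on the date keys).
matched = pd.merge_asof(
    awards.sort_values("award_date"),
    sam.sort_values("updated"),
    left_on="award_date",
    right_on="updated",
    by="duns",
)
```

Note that the registration dated after the first award (2015-08-01) is correctly ignored for that award; only the response on file at award time is considered.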
For these 15 contract awards, we reviewed pre–contract award documentation, which included tax debt–related representations and certifications retrieved by the selected agencies from SAM, and copies of historical tax transcripts and other records, such as revenue officers’ notes obtained from the IRS. For the case examples presented in this report, we rounded tax debt and contract obligation amounts, did not identify the awarding agency, and did not meet with awarding agency officials to discuss each contract award, to protect sensitive taxpayer information. To determine the extent to which executive-branch agency contracts were awarded in 2015 and 2016 to federal contractors with federal tax debt, and the characteristics of those contract awards and contractors, we electronically matched data from FPDS-NG on contract awards (including task orders) for all executive agencies with (1) data from SAM on contractors’ representations and certifications of their tax debt and (2) data from the IRS on tax debts owed by these contractors. Specifically, we used the Data Universal Numbering System number to match data from FPDS-NG with contractor registration data from SAM to obtain additional information on these contractors, such as the contractors’ Taxpayer Identification Numbers and their representations and certifications of tax debt. Using the contractor Taxpayer Identification Number from SAM, we then matched our list of contractors with IRS data to identify our population of contractors that received a contract award and had unpaid federal tax debts. Our analysis included all of the executive-branch agencies. Further, our analysis describes some of the characteristics of these debts, including the total amount of unpaid taxes, whether the contractors had a TFRP, and whether the contractors’ unpaid taxes were timely paid or appeared to be finally determined as of December 15, 2016, which was the time of our data extract.
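The two-step match described above (FPDS-NG to SAM on the DUNS number, then SAM to IRS data on the Taxpayer Identification Number) can be sketched as a pair of merges. This is an illustrative reconstruction under assumed field names and fabricated identifiers, not GAO's actual code.

```python
import pandas as pd

# Hypothetical extracts; all identifiers and amounts are fabricated.
fpds = pd.DataFrame({"duns": ["111111111", "333333333"],
                     "obligation": [50_000, 20_000]})
sam = pd.DataFrame({"duns": ["111111111", "333333333"],
                    "tin": ["TIN-A", "TIN-B"]})
irs = pd.DataFrame({"tin": ["TIN-A"],
                    "unpaid_tax": [75_000.0]})

# Step 1: DUNS links FPDS-NG awards to SAM registrations (adds the TIN).
awards_with_tin = fpds.merge(sam, on="duns", how="inner")

# Step 2: TIN links SAM registrants to IRS unpaid-assessment records;
# a left merge keeps awardees with no recorded tax debt (unpaid_tax is NaN).
population = awards_with_tin.merge(irs, on="tin", how="left")

# Contractors that received an award while owing unpaid federal taxes.
with_debt = population[population["unpaid_tax"].notna()]
```

The left merge in step 2 preserves the full award population, so the share of awardees with tax debt can be computed directly from the result.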
We also analyzed whether contractors that were assessed unpaid taxes in the IRS data reported having certain tax debts as part of their § 52.209-11 representations and § 52.209-5 certifications in SAM. We reviewed the most-recent § 52.209-11 representation and § 52.209-5 certification prior to the relevant contract award. Our analysis may understate the population of contractors with tax debt to the extent that contractors repaid their tax debts before the timing of our data extract. Specifically, our analysis does not include any contractors that may have owed federal taxes at the time of a new contract award during this period but that paid or otherwise resolved their tax debts before December 15, 2016. Additionally, our analysis focuses on contract awards made in 2015 and 2016, and not contract modifications made during this period. In 2015 and 2016, federal agencies obligated $400 billion in modifications to contracts made in 2014 or earlier, almost half of all federal contract obligations in this period. We identify contractors that may have had federal tax debt meeting the definitions of tax debt under FAR § 52.209-11 and § 52.209-5 before the contract award, but we cannot verify whether that was the case. To determine the extent to which the IRS identified selected federal contractors’ payments for levy in 2015 and 2016, we identified the population of contractors that owed taxes at the same time they received a contract award during this period by matching FPDS-NG, SAM, and IRS Unpaid Assessment data, as described above. We then determined whether the tax debt had ever been levied or blocked by the Federal Payment Levy Program (FPLP) as of December 15, 2016, according to IRS data. We also interviewed IRS officials about levying federal contractor payments and reviewed Internal Revenue Manual sections and other relevant documents from the IRS.
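Conceptually, determining whether a given contractor's debt had ever been levied or blocked by the FPLP reduces to a lookup against FPLP disposition records, with everything that never entered the FPLP falling to manual levy. The sketch below is a simplified illustration; the TINs, status labels, and dictionary structure are our assumptions, not the IRS's data model.

```python
# Hypothetical FPLP disposition records keyed by TIN.
fplp_records = {
    "TIN-A": "levied",   # payment levied through the FPLP
    "TIN-B": "blocked",  # debt excluded (blocked) from FPLP matching
}

def fplp_status(tin: str) -> str:
    """Return the FPLP disposition for a TIN, or flag the debt as a
    candidate for manual levy if it never entered the FPLP."""
    return fplp_records.get(tin, "manual-levy candidate")
```

In this framing, the payment-method information discussed in the report would let the "manual-levy candidate" cases be identified up front rather than discovered case by case.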
We assessed the reliability of FPDS-NG, SAM, and IRS Unpaid Assessment data by reviewing relevant documentation, interviewing knowledgeable agency officials, and performing electronic testing to determine the validity of specific data elements in the databases. We determined that these databases were sufficiently reliable for the purposes of our reporting objectives.

Appendix II: Comments from the Department of Defense

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: Comments from the Department of Energy

Appendix VI: Comments from the General Services Administration

Appendix VII: Comments from the Internal Revenue Service

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Jonathon Oldmixon (Assistant Director), Gloria Proa (Analyst-in-Charge), Jennifer Felder, and Albert Sim made significant contributions to this report. Also contributing to this report were Scott Hiromoto, Barbara Lewis, Heather Miller, James Murphy, and Elizabeth Wood.
Why GAO Did This Study

The federal government obligated approximately $507 billion on contracts in fiscal year 2017. Businesses, including federal contractors, pay billions of dollars in taxes each year. Some businesses, however, do not pay owed taxes, contributing to what is known as the tax gap. Federal contractors owe some of the taxes that contribute to the tax gap, and since 2015, federal law has prohibited agencies, under certain circumstances, from using appropriated funds to contract with those who have qualifying tax debt. The IRS also has authority to levy certain payments of contractors with qualifying federal tax debt. GAO was asked to review issues related to federal contractors and tax debt. Among other things, GAO examined whether, in calendar years 2015 and 2016, (1) selected federal agencies had control activities that ensured contractors' reported federal tax debts were considered before contract award and (2) the IRS levied selected federal contractors' payments. GAO analyzed contract and IRS data from 2015 and 2016 (the most-recent data available), reviewed five agencies that represent 51 percent of contract obligations, and reviewed seven awards to contractors reporting tax debt.

What GAO Found

The five selected agencies GAO reviewed have control activities—such as policies and procedures—to help ensure they consider qualifying federal tax debts as defined by Federal Acquisition Regulation (FAR) § 52.209-11 and § 52.209-5 before awarding contracts. However, these controls were potentially ineffective in ensuring compliance with relevant laws and regulations. According to GAO's analysis, in 2015 and 2016 the Departments of Energy, Health and Human Services, and Veterans Affairs, and the Army and Navy, awarded 1,849 contracts to contractors that reported qualifying federal tax debts, such as delinquent debts over $3,500 (see table).
When a contractor reports qualifying tax debts under these regulations, the contracting officer must take several actions, including notifying the agency suspension and debarment official (SDO). However, SDOs at all five agencies told GAO they did not receive any notifications of contractors reporting tax debt in this period. As a result, these contracts may have been awarded without potentially required actions, indicating potential violations of federal regulations and, in some cases, appropriations law. GAO's nongeneralizable review of seven contracts illustrates two cases where contractors were collectively awarded more than $510,000 in contract obligations while having more than $250,000 in tax debt, including tax penalties for willful noncompliance with tax laws. Officials from the selected agencies were unable to explain why their control activities were potentially ineffective without reviewing each contract to determine whether FAR requirements were applicable and whether control activities were applied. Understanding why existing control activities did not operate effectively will help these agencies enhance controls to avoid future misuses of appropriated funds. GAO plans to provide information on the instances of potential noncompliance GAO identified to the selected agencies. Of the over 2,700 executive-branch contractors GAO found to have likely qualifying federal tax debt as of December 2016, the Internal Revenue Service (IRS) had identified over 2,000 for levy through its automated Federal Payment Levy Program (FPLP). However, the FPLP cannot levy all contractors because not all payments are processed by the system the FPLP uses. The data the IRS receives from agencies do not allow it to readily identify payments made using other systems—information the IRS needs for agency outreach about inclusion in the FPLP and to more quickly initiate a manual levy.
With this information, the IRS may be able to improve its levy capacity and enhance tax collections.

What GAO Recommends

GAO is making 12 recommendations, including that selected agencies enhance controls for considering contractors' qualifying federal tax debt before awarding contracts and that the IRS evaluate options to obtain comprehensive contract-payment information. All the agencies generally agreed with GAO's recommendations.
Background

Aviation Maintenance Workforce

Different aviation industry employers have distinct workforce needs and may require workers with specific skillsets depending on the type of work performed. The workforce includes FAA-certificated mechanics and repairmen, as well as non-certificated workers. FAA-certificated mechanics inspect, service, and repair aircraft bodies (airframe) and engines (powerplant), and only they can approve an aircraft for return to service. It can take between 1 and 3 years to obtain the required education or training to become certificated. FAA-certificated repairmen service aircraft components and must be recommended for certification by their employer to perform specific tasks such as welding or painting. It can take more than a year to obtain the required experience or training to become certificated. A repairman certificate is only valid at the employer for which it was issued. Non-certificated aviation maintenance workers include individuals who are supervised by certificated mechanics or repairmen in performing repair work.

Federal Data Reveal Some Demographic and Employment Information on Certificated Mechanics and Repairmen

Existing federal data shed light on key workforce characteristics such as the number of FAA-certificated mechanics and repairmen, their age, sex, and education. Specifically: As of December 2018, about 295,000 individuals held a mechanic certificate and about 35,000 held a repairman certificate. The median age of FAA-certificated mechanics and repairmen was 54 years old, according to our analysis of FAA data. Three percent of all aviation maintenance certificate holders were women as of December 2018. Attending AMT school was the most common pathway certificated individuals used to qualify for the FAA tests to become mechanics. Existing federal data also provide some information on employment characteristics such as the supply of certificated workers.
Specifically, FAA certificated about 8,600 mechanics and repairmen on average each year from 2014 through 2018 (see fig. 1). BLS data project an annual average of 11,800 job openings in the United States from 2018 through 2028 for aircraft mechanics and service technicians due to growth and replacement, which include job openings for certificated and non-certificated workers. There are, however, certain limitations to existing federal data. For example, neither FAA nor BLS collects data on the race or ethnicity of certificated individuals. In addition, FAA officials said the number of certificated individuals likely overestimates the number of them working in the aviation industry. It is unknown how many of the approximately 330,000 certificate holders are retired, deceased, or working in other industries. Furthermore, BLS data indicate 136,900 individuals were employed in the aircraft mechanics and service technicians occupation in 2018, but it is not clear how many of those jobs were filled by FAA-certificated workers. There are also limitations to determining employment characteristics, such as pay, for certificated workers specifically. BLS publishes some data on pay for aircraft mechanics and service technicians, such as average hourly and annual wages. However, the occupational classification system BLS and other federal statistical agencies use for aircraft mechanics and service technicians does not distinguish between FAA-certificated and non-certificated workers, making it difficult to determine employment characteristics such as pay for certificated workers specifically. This is in part because workers are classified by the work they perform and not necessarily by certification or education, according to occupational classification system principles.
BLS officials said they collected wage and employment data for certificated workers separate from non-certificated workers in employer surveys conducted between 2000 and 2012, but stopped collecting these data in part because employers inconsistently reported them. Employers we interviewed, including air carriers and repair stations, had differing perspectives on potential growth in demand for aviation maintenance workers; some said they were experiencing difficulty finding enough workers to meet their needs, while others said they were not. Employers we interviewed for our 2014 report also expressed varying levels of difficulty filling vacancies and recruiting individuals for certain aviation professions, including aviation maintenance workers. Small and medium-sized employers in particular cited some challenges to hiring due to the wages they offered. Some stakeholders we interviewed for our recent report voiced concerns about the potential for a labor shortage. In addition to these views, two of the three selected labor market indicators we reviewed from 2013 through 2018 (unemployment rate and wage earnings) were consistent with difficulties in hiring aircraft mechanics and service technicians, while the other indicator (employment) was not.

Government and Industry Programs Support the Workforce, but FAA Lacks Information That Could Advance Its Workforce Development Efforts

Several federal agencies, such as DOD, DOL, VA, Education, and the Department of Transportation, administer grants or programs that support individuals pursuing aviation maintenance careers or facilitate coordination among different stakeholders to support them. For example: DOD’s Military Services’ Credentialing Opportunities On-Line (COOL) program. This program provides funding for service members to obtain professional credentials related to their military training and helps them translate their military experience into civilian occupations.
DOL’s Registered Apprenticeship Program. DOL awards grants to support Registered Apprenticeship Programs, employer-driven training opportunities that combine on-the-job learning with related classroom instruction. The program facilitates coordination among different stakeholders, such as industry, states, and educational institutions, to support apprenticeships and employment opportunities.

In addition, FAA established an Aviation Workforce Steering Committee in February 2019, in part to coordinate efforts across FAA to address various workforce-related provisions included in the FAA Reauthorization Act of 2018. Additional examples of federal grants or programs that support this workforce can be found in our report. The report also includes examples of states, industry employers, and AMT schools coordinating or partnering to support the workforce, including developing career grants and military pathway programs. Despite some of FAA’s recent efforts in support of this workforce, we found that FAA does not routinely analyze, collect, or coordinate with other stakeholders on certain data related to workforce development. FAA’s strategic plan includes an objective on promoting the development of a robust aviation workforce, and its Aviation Workforce Steering Committee charter emphasizes providing diverse populations, including youth, women, and minorities, with clear pathways into aviation careers to expand the talent pool from which both government and industry may recruit. However, neither the strategic plan nor the steering committee charter provides specific information on how FAA plans to select and measure any efforts it undertakes related to these objectives. Without routinely analyzing its own data or leveraging others’ data, FAA may not have certain information it needs to track or ensure progress toward its workforce development goals. 
We identified several areas in which improved data analysis, collection, or coordination could assist FAA in measuring progress and understanding how to target its resources in support of its workforce-related objectives. For example, FAA could use the demographic or pathway data it already collects to identify patterns or relationships (such as the trend in female certificate holders by pathway), which could be useful information as FAA aims to increase opportunities for women to pursue aviation maintenance careers. FAA could also use existing AMT school data (such as enrollment or mechanic test pass-rate data) to analyze nationwide trends or aggregate information across AMT schools to better understand the AMT school pathway as a whole. In our February 2020 report, issued last week, we recommended that the Aviation Workforce Steering Committee, as part of its ongoing efforts, take steps to use existing FAA data and coordinate with other federal agencies to identify and gather the information it needs to measure progress and target resources toward its goal of promoting a robust, qualified, and diverse aviation maintenance workforce. FAA agreed with our recommendation.

Revisions to FAA’s Decades-Old Mechanic Curriculum Requirements and Its Mechanic Testing Standards Are Ongoing

Even as FAA’s strategic plan states the agency’s focus on promoting the development of a skilled aviation maintenance workforce to integrate new technologies, the agency has acknowledged that the current curriculum requirements for AMT schools and mechanic testing standards are outdated. FAA officials, employers, and AMT school officials we interviewed said the current curriculum requirements do not emphasize commonly used modern aircraft technologies, such as avionics and composite materials. Over the years, FAA has attempted several times to revise curriculum requirements for AMT schools through the rulemaking process, and efforts to revise these requirements are ongoing. 
FAA officials have noted several challenges to updating the curriculum requirements, including competing demands at the department level and the extent of comments FAA has received from stakeholders in response to proposed changes. In October 2015, FAA published a notice of proposed rulemaking (NPRM) with the stated goal of updating the existing AMT school curriculum. FAA issued a supplemental NPRM in April 2019 that expanded the scope of the October 2015 NPRM. Comments on the supplemental NPRM were due in June 2019. As of October 2019, FAA officials said they were in the process of reviewing the comments. FAA officials told us that a final rule will be published sometime toward the end of 2020. In a separate effort outside of the rulemaking process, FAA is also updating the testing standards for mechanics. FAA has acknowledged that the current mechanic testing standards are outdated, and aviation stakeholders have stated that the mechanic tests include outdated or irrelevant questions. For example, the practical test may include projects on wood airframes and fabric coverings, which are not common to modern commercial aircraft. An FAA official noted that any delay in finalizing the curriculum rule would likely result in a corresponding delay in finalizing the testing standards. Delaying the release of the updated mechanic testing standards could result in the prolonged use of outdated or irrelevant questions on the mechanic tests. FAA officials said that once finalized and implemented, the updated curriculum requirements for AMT schools and the mechanic testing standards for individuals should be mostly aligned.

Chairman Larsen, Ranking Member Graves, and members of the Subcommittee, this completes my prepared remarks. I look forward to answering any questions you may have. 
GAO Contact and Staff Acknowledgments If you or your staff have any questions about this statement, please contact me at (202) 512-2834 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Betty Ward-Zukerman, Assistant Director, Vashun Cole, Chelsa Gurkin, Ellie Klein, Meredith Moore, Justin Reed, Andrew Von Ah, and Chris Woika. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

FAA requires that only mechanics who are "certificated" by the FAA approve aircraft for return to service. Some stakeholders have expressed concern that retirements and attrition could adversely affect the capacity of this workforce to meet the growing demand for air travel, and that the mechanic curriculum is outdated. The FAA Reauthorization Act of 2018 included provisions for GAO to examine the aviation workforce. This testimony examines (1) what federal data reveal about the characteristics of the aviation maintenance workforce, (2) how selected federal agencies and other key stakeholders provide support and coordinate to develop the skills of this workforce, and (3) FAA's progress in updating the curriculum and testing standards for mechanics. GAO analyzed FAA and BLS data; reviewed relevant federal laws and regulations; and interviewed selected federal agency, industry, and AMT school officials.

What GAO Found

Federal data provide some information on the Federal Aviation Administration (FAA)-certificated aviation maintenance workforce, though certain data limitations exist. FAA maintains data on the number of individuals newly certificated each year, but less is known about how many certificated individuals exit the aviation industry each year and the extent of growing demand. A sufficient supply of certificated workers is critical for safety and to meet the growing demand for air travel. Bureau of Labor Statistics (BLS) data provide some information on pay and demand for aviation maintenance workers more broadly, but do not differentiate between FAA-certificated and non-certificated workers due to data collection challenges. Demographic data may also be useful for workforce analysis and planning. FAA data provide some demographic information on certificated mechanics and repairmen, such as age and sex, but the agency lacks data on race and ethnicity. 
According to GAO analysis of FAA data, the median age of the roughly 330,000 mechanics and repairmen FAA had certificated as of December 2018 was 54 years old, and 3 percent were women. Government agencies, educational institutions, and businesses coordinate to some extent in support of this workforce, but FAA does not routinely analyze, collect, or coordinate with other stakeholders on certain data related to workforce development. One of FAA's strategic objectives includes promoting the development of a robust, skilled aviation workforce, and the agency established a committee, in part, to explore ways to diversify this workforce; however, FAA is not currently positioned to understand whether its efforts are optimally targeted or effective. Without routinely analyzing its own data or leveraging others' data, FAA may not have certain information it needs to track or ensure progress toward its workforce development goals. FAA has acknowledged that curriculum requirements for Aviation Maintenance Technician (AMT) schools and mechanic testing standards are outdated. Efforts to revise the decades-old curriculum requirements for AMT schools are ongoing, and FAA officials told GAO that a final rule will be published sometime toward the end of 2020. FAA officials indicated that the revised mechanic testing standards would likely be finalized after the curriculum rule.

What GAO Recommends

In its February 2020 report, GAO recommended that FAA use its existing data and coordinate with other federal agencies to identify and gather information to measure progress and target resources toward its goal of promoting a robust, qualified, and diverse aviation maintenance workforce. FAA agreed with the recommendation.
Background

Federal Transportation Security Responsibilities and Coordination

Congress created a multi-agency framework that established agency responsibilities for securing the nation’s transportation systems. Following the terrorist attacks of September 11, 2001, the Aviation and Transportation Security Act, enacted in November 2001, established TSA as the federal agency with primary responsibility for transportation security. Within this framework, two components of DHS—TSA and Coast Guard—are responsible for most transportation security activities. TSA is the primary federal agency responsible for security in all modes of transportation, including civil aviation, passenger and freight rail, highway and motor carrier transportation, and pipeline transportation systems. Coast Guard is the lead federal agency responsible for maritime transportation security, though TSA plays a role in managing, for example, credentialing for workers at seaports. TSA and Coast Guard’s regulatory authorities vary across modes, which affects how transportation security activities are planned for and implemented. For example, TSA and Coast Guard exercise more regulatory authority over (and, in some cases, have operational responsibility for) the aviation and maritime modes pursuant to their respective statutory authorities. In the aviation mode, TSA has operational responsibility for the screening of passengers and property transported on aircraft, but also imposes and enforces security requirements established through regulation on air carriers and other industry stakeholders. Similarly, Coast Guard has responsibility for ensuring that maritime vessels and facilities are compliant with applicable security requirements. TSA’s statutory responsibilities for the surface transportation modes, however, are generally less prescriptive. 
With respect to these modes, TSA works with transportation operators on a broad set of risk-based activities such as training, information sharing, and community outreach within a collaborative environment. For example, in freight rail, TSA and its partners undertake collaborative efforts to establish security priorities, identify vulnerabilities and capability gaps, and reduce risks. Freight rail operators, meanwhile, engage in cooperative and independent security initiatives to assess risks and refine security plans. Other federal agencies are involved in transportation security, but to varying degrees. At the department level, DHS is responsible for providing strategic guidance, directing a national unity of effort, and coordinating security across critical infrastructure sectors. CBP manages programs designed to secure cargo and ensure intermodal transportation security, among other things. CBP activities include programs to encourage trade partners to implement security best practices and identify high-risk shipments and travelers before they reach U.S. ports of entry. DOT also has some transportation security responsibilities, which we describe below. Figure 1 illustrates agencies’ activities across transportation modes. Federal policies and plans establish specific coordination mechanisms and activities for transportation security. Specifically, in accordance with the Homeland Security Act of 2002, as amended, DHS created the National Infrastructure Protection Plan to guide the national effort to manage risk to the nation’s critical infrastructure, including through coordination of agencies and various critical infrastructure sectors, among them transportation systems. Under this structure, DHS and DOT are co-Sector-Specific Agencies for the Transportation Systems Sector. DHS delegated its sector responsibilities to TSA and Coast Guard. 
Within the transportation systems sector, agencies and stakeholders charter councils for individual transportation modes as well as the sector as a whole. Sector Coordinating Councils and Government Coordinating Councils for each critical infrastructure sector provide forums for promoting efficient collaboration within the sectors. Further, the Sector-Specific Agencies are to develop, in close collaboration with Sector Coordinating Councils and other sector partners, a sector-specific plan that tailors the National Infrastructure Protection Plan to the specific characteristics and landscape of each critical infrastructure sector. Under the Transportation Systems Sector-Specific Plan, DOT and DHS (through TSA and Coast Guard) coordinate with infrastructure owners and operators, provide technical assistance, and carry out incident management responsibilities. CBP is also a permanent member of the Aviation Government Coordinating Council.

The Impetus for a National Strategy for Transportation Security

The Final Report of the National Commission on Terrorist Attacks Upon the United States (9/11 Commission Report), released in July 2004, identified concerns with aspects of transportation security planning, including the lack of an integrated strategic plan for the transportation sector. The Commission found that the screening of passengers and their property at airports accounted for the majority of transportation security investments, leaving vulnerable other facets of transportation security, such as cargo, general aviation, and surface transportation. The Commission recommended that the U.S. government identify and evaluate the transportation assets that need to be protected, set risk-based priorities for defending them, select the most practical and cost-effective means of doing so, and then develop a plan, budget, and funding source to implement the effort. 
Congress subsequently passed the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act), which directed the Secretary of DHS to develop, prepare, implement, and update, as needed, a National Strategy for Transportation Security and transportation modal security plans. The statute further directs the Secretary of Homeland Security to work jointly with the Secretary of Transportation to develop, revise, and update the national strategy and transportation modal security plans. Within DHS, responsibility for such strategic planning had been delegated by the Secretary of Homeland Security in May 2003 to TSA for transportation security across all modes of transportation and to Coast Guard for maritime security, specifically. The Intelligence Reform Act called for a national strategy that was to include elements that aligned with the Commission’s recommendation. Table 1 illustrates parallels among the Commission’s multi-part recommendation, the Intelligence Reform Act, as amended, and the 2018 national strategy. Consistent with its underlying statute, the national strategy states that it is the governing document for federal transportation security efforts, and lays out a number of areas where it can govern those efforts. For example, the national strategy states that it contributes to departmental budgetary processes by applying multiple information sources to determine priorities and capability gaps that influence resource allocation decisions and budget projections across federal agencies. Further, the national strategy is intended to support out-year programming and budgeting by measuring progress toward achieving the security outcomes for funded activities. The national strategy states that its risk-based priorities help to narrow capability gaps and raise the security baseline. 
The risk-based priorities in the national strategy are also intended to inform security decisions about the types of activities government and industry modal security officials should pursue to address terrorism risks. The national strategy includes modal security plans as appendixes, consistent with its underlying statute, as well as other, separate, statutorily required national strategy documents as annexes that TSA determined were appropriate to include.

2018 National Strategy for Transportation Security Is Generally Consistent with Desirable Characteristics

The 2018 National Strategy for Transportation Security is generally consistent with desirable characteristics of an effective national strategy. In 2004, we reported that national strategies are not required to address a single, consistent set of characteristics, and they contain varying degrees of detail based on their different scopes. We have previously identified a set of desirable characteristics that we believe provide additional guidance to responsible parties for developing and implementing strategies, enhance their usefulness as guidance for resource and policy decision-makers, and better ensure accountability. Our analysis of the 2018 National Strategy for Transportation Security found that it is fully consistent with two of the six desirable characteristics of an effective national strategy and partially consistent with four, as summarized in table 2. We found that supporting documents of the national strategy (such as a planning guide, project plan, and budget document) include additional elements of desirable characteristics that are not currently included in the strategy. For example, the national strategy’s guidance document describes the methodology for developing the strategy. TSA officials indicated that as they develop the 2020 national strategy, they will take steps to incorporate additional elements of desirable characteristics. 
2018 National Strategy for Transportation Security Generally Did Not Guide Federal Efforts, Including Resource Decisions

The national strategy plays a limited role in guiding federal transportation security efforts. Agencies rely instead on various agency- or mode-specific documents that DHS and DOT officials stated overlap with the national strategy. Similarly, agencies do not consult the national strategy to allocate resources for their federal transportation security efforts. They instead make such decisions based on various strategy documents and department and agency guidance, which the national strategy may inform to varying degrees.

National Strategy Generally Did Not Guide Federal Transportation Security Efforts

TSA identifies the national strategy as the governing document for federal transportation security efforts, consistent with its underlying statute; however, agency officials generally do not use it to guide their efforts and had disparate views about its functional role given overlapping strategic documents. The 2018 national strategy states: “While the strategy presents a whole community plan for reducing the risks to transportation from terrorist attacks, it is, as mandated, the governing document for federal transportation security efforts.” Officials representing TSA aviation, Coast Guard, TSA intermodal, and DOT stated that they did not use the national strategy to guide their efforts; TSA surface officials stated that it generally did guide surface transportation activities. Officials from TSA’s Strategy, Policy Coordination, and Innovation office, which coordinates the national strategy’s development, said that although the national strategy does not drive transportation security activities, it does inform such activities as they relate to risk-based priorities. 
Although the national strategy states that it is to be the governing document for transportation security efforts, TSA strategy officials described it as a catalogue of transportation security activities. The vast majority of the activities and performance measures reported in the national strategy came from ongoing reporting mechanisms such as the DHS Annual Performance Report and TSA voluntary surface security assessments, according to TSA and Coast Guard officials. Therefore, the national strategy did not affect the number of activities or types of programs that agencies undertook, according to TSA and Coast Guard officials. Instead, the national strategy summarized information about current transportation security goals and performance as opposed to guiding such decisions. TSA surface and aviation, Coast Guard, and DOT officials stated that several different strategies and planning documents with similar areas of focus resulted in redundancy or overlap with the National Strategy for Transportation Security. We have reported that when overlap exists there may be opportunities to increase efficiency. For example, communicating the use of overlapping documents could promote efficiency in creating and using strategies to make transportation security-related decisions. Figure 2 shows the National Strategy for Transportation Security and numerous other documents, including several identified in the 2018 national strategy, that guide transportation security decisions. For specific examples of strategies used by each component, see appendix I. As shown in figure 2, the National Strategy for Transportation Security exists among more than a dozen other national-level strategic documents without a hierarchical alignment indicating how they interact or supersede each other. 
Officials from TSA’s strategy office stated that they view the functional role of the national strategy as informing transportation modes’ activities where applicable, and that transportation officials should use it to ensure consistency of effort across activities. Transportation officials had differing views on the varying role of the national strategy, as described below:

TSA Aviation: The national strategy keeps security operations on track and aligned with priorities, but officials used the national strategy more for reference than to guide program or planning decisions. TSA officials stated that aviation policy is regulatory in nature, meaning policy is driven by requirements established through statute and regulation rather than by the national strategy. TSA aviation officials also stated that they could not provide an example of where the national strategy was used to make specific decisions or actions.

Coast Guard: The national strategy informs federal partners of Coast Guard’s maritime transportation security activities, but Coast Guard officials stated that the national strategy does not require them to take on activities they are not already doing; instead, it puts those transportation security activities in context. Coast Guard officials stated that the national strategy did not drive decisions or activities.

TSA Surface: The national strategy generally guides transportation security activities and drives a common understanding around goals for both TSA officials and industry partners. TSA surface officials stated that the need for voluntary cooperation and engagement makes the alignment of priorities with the national strategy more valuable in the surface mode. TSA surface officials stated that they use the national strategy to guide their implementation of federal transportation security programs. Specifically, TSA surface officials stated that they use the national strategy to determine areas of focus for training and exercise programs. 
DOT: The national strategy delineates the transportation roles and responsibilities through the lens of terrorism, giving it value as a tool for communicating and coordinating within the transportation systems sector rather than as a planning tool. DOT officials stated that they did not use the national strategy as a major factor to prioritize budget decisions and cannot assign a causal relationship between the national strategy and policy. Officials from TSA’s strategy office stated that they created the national strategy to respond to legislative requirements; however, they had not fully considered or communicated to key stakeholders how the national strategy would functionally guide federal efforts. Officials acknowledged that it could be helpful to communicate this information to stakeholders as they develop future iterations of the national strategy. Such communication would be consistent with federal internal control standards, which state that management should externally communicate the necessary quality information to achieve the entity’s objectives. TSA has made efforts over the years to streamline and consolidate reporting requirements of the national strategy with similar documents. For example, in August 2010, TSA sent a letter to notify Congress that it was streamlining the national strategy and several other documents by incorporating them into the Transportation Systems Sector Annual Report. The letter stated that streamlining strategic planning and reporting requirements improves their usefulness and reduces federal government and stakeholder confusion. Similarly, TSA surface officials stated that they have attempted to consolidate their reporting requirements by integrating two strategies focused on mass transit and freight rail into the national strategy. 
Officials stated that those strategies were published as separate annexes in the 2018 National Strategy for Transportation Security in response to feedback, but had been integrated into the 2016 iteration of the national strategy. Officials from TSA’s strategy office said they believed the national strategy has value in providing a whole-of-government strategy for transportation security with a counterterrorism view. However, we have previously reported that the ultimate measure of the value of national security strategies is the extent to which they are useful as guidance in balancing homeland security priorities with other important, non-homeland security objectives. Though the national strategy lays out a number of areas where it can govern federal transportation security efforts, its unclear position among numerous strategic documents limits its ultimate value. For example, the risk-based priorities in the national strategy are intended to inform security decisions about the types of activities government and industry modal security officials should pursue to address terrorism risks. Instead, according to officials, the national strategy summarizes current transportation security activities within each mode, and officials generally use other documents to guide their transportation security decisions. By communicating to key stakeholders how the national strategy aligns with related strategies to guide federal efforts, stakeholders would be in a better position to use the strategy as a whole-of-government approach to preventing terrorist attacks.

Agencies Use Various Strategy Documents to Allocate Resources, Which the National Strategy May Have Informed

Officials representing TSA, Coast Guard, and DOT identified various documents and strategies as guiding resource decisions. TSA budget representatives stated that specific budgetary decisions and trade-offs result from other strategy documents, such as the TSA Strategy and Administrator’s Intent. 
TSA budget officials indicated a link between the National Strategy for Transportation Security and the budget process because other strategy documents incorporated the national strategy. Similarly, Coast Guard officials stated that they broadly consider the national strategy, the DHS Resource Planning Guidance, and other documents during their budgeting process. However, Coast Guard officials could not speak to the influence of the national strategy in particular. When asked how the national strategy influences resource decisions, agency officials explained:

TSA Aviation: The national strategy has not influenced any specific resource allocation decisions.

Coast Guard: The national strategy is part of the broader budget process, but officials could not speak to its particular influence or provide examples of the national strategy changing the direction of maritime security activities.

TSA Surface: The national strategy does not provide specific direction on resource allocation decisions. The national strategy provides a guidepost for where TSA wants to expend effort and can provide guidance during times of limited budgets and personnel, though officials did not provide specific examples of cases in which this occurred.

DOT: The national strategy does not play any role in the department’s budget process.

The national strategy identifies the creation of out-year budgets as a challenge. For example, the statute under which TSA develops the national strategy provides that it is to include both a 3-year and 10-year budget for federal transportation security programs that will achieve the priorities of the national strategy. However, the national strategy recognizes that it does not provide 3-year and 10-year budget information due to the challenge of anticipating future transportation security programming needs and aligning budget projections across multiple departments and agencies. 
To address this challenge, the national strategy aims to contribute to budgetary processes by applying multiple information sources to determine priorities and capability gaps that influence resource allocation decisions and budget projections. Further, the national strategy is to support budgeting by measuring progress toward achieving the security outcomes for funded activities. TSA officials explained that, rather than provide the 3-year and 10-year budget, TSA designed its budget process to align with, and be consistent with, the department’s five-year budget cycle set out in the Homeland Security Act. The national strategy explains that, accordingly, agency budget information will continue to be reported through their regular budget processes. TSA officials told us that they have been reporting budget information to Congress this way since before they produced the initial national strategy, and that Congress has not raised concerns with this approach. TSA, Coast Guard, and DOT officials told us they did not use the national strategy to make specific budget or resource allocation decisions because they did not believe the national strategy should direct those decisions. Officials from TSA’s strategy office confirmed that, in their view, the national strategy was not intended to guide resource decisions.

Interagency Collaboration and Risk Information Underpin the 2018 National Strategy

TSA officials collaboratively developed the 2018 National Strategy for Transportation Security, which generally reflected risks identified in existing TSA and Coast Guard documents. TSA managed the creation of the national strategy by seeking input from stakeholders with responsibilities in each of the three transportation modes as well as intermodal transportation. Specifically, TSA officials sent out three data calls for information and feedback to officials at TSA, Coast Guard, and DOT responsible for providing information. 
Each data call built upon the prior one and provided the modal officials multiple opportunities to revise and edit their data. In addition, TSA officials sent the data calls to the Transportation Modal Government Coordinating Councils and said they also sent them to other groups, such as Sector Coordinating Councils. Because of the numerous agencies involved and the length of the development and review process, TSA began development of the 2018 national strategy before it submitted the 2016 national strategy to Congress. TSA planning officials stated that they encouraged officials responsible for overseeing implementation of transportation programs to help develop the strategy so TSA could leverage the expertise of each individual mode. TSA delegated the responsibility of identifying performance measures, activities, and related information to officials in each of the modes. These modal officials in turn contacted officials implementing transportation security programs to gather information and metrics related to their programs for the mode-specific appendixes, as well as to coordinate general feedback on the national strategy’s base plan. TSA recommended that modes leverage activity and performance information already reported, to the extent possible. According to TSA planning officials, this allowed the national strategy to be updated efficiently, which is crucial to TSA’s timeline of developing the national strategy every 2 years. Officials representing TSA surface and aviation, Coast Guard, and DOT confirmed their participation in the data calls and national strategy development. TSA surface officials also stated that they leveraged existing collaboration and coordination mechanisms to provide industry and stakeholder feedback, such as government coordinating councils and sector coordinating councils. 
Senior leadership then reviewed the information to ensure that it did not conflict with other strategies that agencies use to guide activities, according to TSA and DHS officials. We compared TSA’s work collaborating with other agencies to produce the national strategy with key practices we have identified for collaboration and found that TSA generally aligned the national strategy development with selected key practices. Specifically, selected leading practices call for agencies to collaborate by identifying (1) leadership, (2) clear roles and responsibilities, and (3) participants. TSA’s leadership role in developing the national strategy, working jointly with DOT, is identified in agency documentation and a DHS memo that delegates this authority. TSA officials provided clear roles and responsibilities to agencies asked to provide data through the data calls that supported the 2018 national strategy development. In addition, they included all relevant participating agencies in the process and provided a clear method of decision-making. TSA officials stated that it was a challenge to get input from agencies that do not consider their main function to be transportation security, such as CBP. Officials from CBP—which is responsible for carrying out multiple activities related to air cargo and intermodal security in the 2018 national strategy—stated that they were not involved with the 2018 national strategy. CBP officials acknowledged that their programs to inspect cargo played a role in transportation security; however, they said they viewed their responsibilities as separate. For example, CBP officials stated that they are responsible for verifying the security of some cargo transported on planes but not the security of the planes themselves. However, TSA officials stated that they involved two individuals from CBP and will continue to reach out to CBP for information and involvement in the development of the 2020 national strategy. 
TSA officials stated that they are committed to collaborative development of the national strategy and have also sought comments from the public through the Federal Register to inform the development of the 2020 national strategy. In addition to agency collaboration, the development of the 2018 national strategy centered on agencies incorporating risks listed in their risk assessments. TSA officials from surface and aviation modes stated that they relied primarily on the Transportation Sector Security Risk Assessment, while Coast Guard officials relied on the National Maritime Strategic Risk Assessment and the National Maritime Terrorist Threat Assessment. TSA officials stated that they did not have documentation of the risks considered for the intermodal information for the 2018 strategy because the TSA official responsible for its development was no longer with the agency. However, officials stated that they are considering risks for the 2020 national strategy that are described in the Transportation Sector Security Risk Assessment and National Risks Estimate and provided documentation of these considerations. We found that, in general, the risk-based priorities highlighted in the national strategy aligned with the risks identified in the assessments. For example, the 2018 national strategy identified the prevention of insider threats as part of a risk-based priority in its base plan and aviation-specific appendix. In addition, the aviation-specific appendix identified an activity, outcome, and performance measure aimed at addressing this threat. This aligns with the identification of insider threats as a key part of risks specified in TSA’s 2017 risk assessment. In addition, TSA and Coast Guard officials stated that they also considered and included emergent threat information—for example, new threats presented by cybersecurity. 
They decided to include these threats as a result of ongoing development of strategy documents both in TSA and across the interagency community, according to TSA officials. The development of risk information in the 2018 national strategy remained within the context of each mode. TSA’s Transportation Sector Security Risk Assessment does provide information to compare risks across aviation and surface modes; however, that information is not included in the 2018 national strategy. Similar information related to Coast Guard risks is also not included in the 2018 national strategy, though available in Coast Guard risk assessments. The national strategy lays out areas where it could inform decision-making across modes; however, the information about transportation activities’ effectiveness does not currently lend itself to meaningful comparisons. For example, transportation security activities in the 2018 national strategy report outcome and performance measures, but not targets or results. TSA officials stated that they are developing the 2020 national strategy to include performance measures for activities to respond to risks, which will be the second iteration of measures in the national strategy. Corresponding performance results on activities that respond to risk-based priorities will be directly reported to Congress through annual reports on the progress of the national strategy’s implementation. Though this is not the same as providing cross-modal risk information, it would enable decision-makers to hold risk reduction activities accountable for the results they were intended to achieve, according to TSA officials. Conclusions In accordance with statutory requirements, the National Strategy for Transportation Security is to be the governing document for federal transportation security efforts. However, its unclear position among numerous related strategies has clouded its value in guiding federal efforts. 
In light of other strategies and governance documents, DHS, in consultation with DOT, can better communicate the applicability of the National Strategy for Transportation Security so that key stakeholders have clear direction on how to rely on the national strategy. As TSA develops future iterations of the national strategy, key stakeholders would be better positioned to use it if the departments communicate how the national strategy aligns with related strategies. In the absence of such communication, transportation security stakeholders may continue to miss opportunities to use the national strategy as part of a whole-of-government approach to preventing terrorist attacks. Recommendation for Executive Action The Secretary of Homeland Security should, in consultation with the Secretary of Transportation, communicate to key stakeholders how the National Strategy for Transportation Security aligns with related strategies to guide federal efforts as it develops future iterations of the national strategy. (Recommendation 1) Agency Comments We provided a draft of this report to DHS and DOT for review and comment. In written comments, which are included in appendix II and discussed below, DHS concurred with our recommendation and described actions taken to address it. DHS and DOT also provided technical comments, which we have incorporated into the report, as appropriate. DHS stated that the 2020 national strategy will elevate alignment language from the 2018 national strategy modal plans and better explain how the national strategy relates to newly issued strategies, among other things. These updates to the 2020 strategy are a positive step, and DHS should ensure that it further clarifies alignment language in the modal plans and communicates how both newly issued and previous strategies align with the national strategy. 
Further communication about related strategies will provide better direction for key stakeholders on how to use the national strategy in relation to other strategies. We are sending copies of this report to the appropriate congressional committees, the acting Secretary of the Department of Homeland Security, and the Secretary of the Department of Transportation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or RussellW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix III. Appendix I: Strategies Agencies Identified as Guiding Transportation Security Decisions Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kevin Heinz (Assistant Director), Michelle Serfass (Analyst-in-Charge), Chuck Bausell, Benjamin Crossley, Elizabeth Dretsch, Andrew Lobel, Tom Lombardi, Sarah Veale, and Adam Vogt made key contributions to this report.
Why GAO Did This Study In recent years, the nation's transportation systems facilitated over 5 trillion miles of passenger travel annually while moving billions of tons of cargo. The scale and scope of these systems make them targets for terrorist attacks. Congress directed DHS to work jointly with DOT to develop, revise, and update a biennial National Strategy for Transportation Security that governs federal transportation security efforts. The FAA Reauthorization Act of 2018 includes a provision for GAO to evaluate the extent to which the most recent strategy is reflected in relevant federal transportation security efforts. This report examines the extent to which the 2018 strategy (1) guides relevant federal transportation security efforts, including resource allocation, and (2) incorporates input across transportation modes and risk information, among other things. To conduct this work, GAO reviewed relevant transportation security documentation, interviewed officials within DHS and DOT on the development and use of the strategy, evaluated interagency collaboration during the development of the national strategy, and analyzed the national strategy's incorporation of risk information. What GAO Found The 2018 National Strategy for Transportation Security generally does not guide federal efforts due in part to its unclear alignment with several strategies that also inform federal transportation security efforts. The Department of Homeland Security (DHS)—primarily through the Transportation Security Administration (TSA)—developed the national strategy, consistent with congressional direction, to govern federal transportation security efforts. However, TSA and Department of Transportation (DOT) officials all identified some degree of redundancy or overlap regarding the role of the strategy in light of other transportation security strategies such as the National Strategy for Aviation Security. 
Agencies reported using the national strategy for reference, context, and general coordination, but not for driving program activities. Agencies instead use separate strategies and plans to guide program and resource decisions. Similarly, agencies in DHS and DOT (key stakeholders of the strategy) use various strategy documents to allocate resources for federal efforts, which the strategy may inform. However, DHS has not communicated how the strategy aligns with related strategies to guide these efforts. If DHS did so, federal stakeholders would be better positioned to use the national strategy as part of a whole-of-government approach to preventing terrorist attacks. TSA effectively incorporated input from stakeholders and considered risk information to develop the 2018 National Strategy for Transportation Security. TSA iteratively updated the biennial strategy by incorporating input across transportation modes and feedback from stakeholders in a manner that generally met GAO's leading practices for collaboration. For example, TSA clearly communicated roles and responsibilities regarding the strategy development process for participating agencies. In addition, the strategy compiles risks identified for each transportation mode in other strategic planning documents. TSA strategy development officials stated that they also included emergent risk information, for example, cybersecurity risks. The security risks identified in these risk assessments, in general, aligned with the risk-based priorities highlighted in the strategy. What GAO Recommends GAO recommends that DHS should, in consultation with DOT, communicate to key stakeholders how the National Strategy for Transportation Security aligns with related strategies to guide federal efforts. DHS concurred with the recommendation.
GAO-19-270
Background Risk Assessment and Risk Management at EPA According to EPA, risk assessments provide information on potential health or ecological risks. Information from risk assessments, in combination with other information, provides the basis for risk management actions, as illustrated in the risk assessment model in figure 1. EPA may also consider scientific and economic factors; court decisions; and social, technological, and political factors during the risk management process. A number of program and regional offices at EPA prepare chemical risk assessments. These risk assessments in turn provide the foundation for EPA’s risk management decisions, such as whether EPA should establish air and water quality standards to protect the public from exposure to toxic chemicals. To prepare these risk assessments, some EPA program and regional offices often rely in part on chemical assessments that the IRIS Program, as part of ORD, prepares. IRIS assessments generally include the first two steps of the risk assessment process seen in green in figure 1: (1) hazard identification and (2) dose-response assessment. Hazard identification identifies credible health hazards associated with exposures to a chemical; dose-response assessment characterizes the quantitative relationship between chemical exposure and each credible health hazard. The program derives toxicity values through this quantitative relationship. These toxicity values are combined with exposure assessments (produced by other offices within EPA) to produce a risk assessment. OCSPP, which oversees TSCA implementation, also prepares chemical risk assessments, though it does not generally rely on IRIS toxicity values. OCSPP’s risk evaluations provide the foundation for a risk management action under TSCA if a use is found to present unreasonable risk of injury to human health or the environment. 
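The way a toxicity value from a dose-response assessment combines with an exposure assessment, as described above, can be illustrated with a simple noncancer screening calculation using a hazard quotient, a standard EPA risk assessment metric. The function name and the numbers below are purely illustrative assumptions, not values from any actual IRIS assessment:

```python
def hazard_quotient(exposure_dose, reference_dose):
    """Noncancer screening metric: HQ = exposure dose / reference dose (RfD).

    Both inputs are in mg/kg-day. The RfD comes from a dose-response
    assessment (e.g., an IRIS toxicity value); the exposure dose comes
    from a separate exposure assessment prepared by another office.
    """
    return exposure_dose / reference_dose

# Illustrative numbers only -- not drawn from a real assessment.
rfd = 0.005       # reference dose (mg/kg-day) from a dose-response assessment
exposure = 0.002  # estimated daily dose (mg/kg-day) from an exposure assessment

print(f"Hazard quotient: {hazard_quotient(exposure, rfd):.1f}")
```

Under this conventional interpretation, a hazard quotient at or below 1 suggests the estimated exposure is unlikely to cause adverse noncancer effects, while a value above 1 flags the chemical and exposure pathway for closer review.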
Risk management actions under TSCA can include but are not limited to restrictions or bans on a chemical or a condition of use, limitations on processing or manufacture, or changes to product labeling. Figure 2 shows EPA’s organizational structure, including the program and regional offices that prepare chemical risk assessments. EPA’s IRIS Program and Process EPA created the IRIS Program in 1985 to help develop consensus opinions within EPA about the health effects from lifetime exposure to chemicals. The IRIS database of chemical assessments contains EPA’s scientific positions on the potential human health effects that may result from exposure to various chemicals in the environment, and as of November 2018, it included information on 510 chemicals. Based on our body of work on the IRIS Program, the program’s importance has increased over time as EPA program offices and regions have increasingly relied on IRIS chemical assessments in making environmental protection and risk management decisions. In addition, state and local environmental programs, as well as some international regulatory bodies, rely on IRIS chemical assessments in managing their environmental protection programs. The IRIS Program uses a seven-step process to produce chemical assessments, as shown in figure 3. The first step in the assessment development process is developing a draft assessment. This begins with IRIS Program staff determining the scope and initial problem formulation of an assessment in consultation with EPA program and regional offices. This information is documented in an IRIS Assessment Plan and released for agency and public comment. After obtaining feedback on the IRIS Assessment Plan, IRIS Program staff prepare an assessment protocol for public comment that describes the methods that IRIS will use to conduct the assessment. 
During Step 1 (Scoping and Problem Formulation), IRIS Program staff conduct preliminary searches of scientific literature and screen relevant studies to understand the extent and nature of the available evidence. This informs the level of effort, identifies areas of scientific complexity, and helps the IRIS Program estimate time frames for conducting the assessment. The program staff select and extract relevant data and analyze and integrate the evidence into the draft assessment. The final step in preparing the draft assessment is deriving chemical toxicity values. After these draft development steps (step 1 in fig. 3), the draft assessment goes through internal agency and interagency review, public comment, and peer review, as shown in steps 2 through 4 in figure 3. After making revisions to address comments received (step 5), the assessment goes through another round of internal and interagency review (steps 6a and 6b), and then the program finalizes and posts the assessment to the IRIS website. According to IRIS officials, a group of staff with specialized skills is required to prepare IRIS assessments. Approximately a dozen staff drawn from several different backgrounds (e.g., toxicologists and epidemiologists) work on each assessment. While some of the assessment preparation—that is, setting up database searches and performing initial search screenings—can be performed by any staff, other parts of assessment development require that the staff have specific expertise. The IRIS assessment development process—and the associated implementation of systematic review processes—has continued to evolve since 2011, primarily as a result of NAS recommendations made in two reports issued in 2011 and 2014. The 2011 report was a NAS peer review of the IRIS assessment of formaldehyde. 
In that report, NAS recommended several changes to the formaldehyde assessment and also offered recommendations more generally about the IRIS assessment development process. For example, NAS recommended methods for identifying evidence to be included in IRIS assessments; assessing and weighting that evidence in preparing the assessment; selecting studies that are used for calculating toxicity; and documenting how those toxicity calculations are carried out. A House appropriations committee report for fiscal year 2015 directed EPA to implement the 2011 report’s recommendations and NAS to review the changes that EPA was making (or proposing to make). In its review, NAS made additional recommendations to the program. In April 2018, NAS released a report on the IRIS Program’s responses to the 2014 recommendations. IRIS assessments are one potential source of information for risk assessors in OCSPP who conduct risk evaluations informing risk management activities under TSCA. The purpose of risk evaluation is to determine whether a chemical substance presents an unreasonable risk to human health or the environment. EPA’s Evaluation and Management of Chemicals under TSCA TSCA authorizes EPA to evaluate and, if appropriate, regulate existing chemicals and new chemicals. TSCA generally covers chemicals manufactured, imported, processed, distributed in commerce, used, or disposed of in the United States. If EPA finds that any of these activities with respect to a specific chemical presents an unreasonable risk of injury to health or the environment, EPA must issue regulations that can, among other things, restrict or prohibit these activities. TSCA also specifies the information obtained from chemical companies that EPA must publicly disclose and the circumstances under which chemical companies can claim certain information, such as data about chemical processes, as confidential business information. 
EPA’s OPPT within the Office of Chemical Safety and Pollution Prevention manages risk assessment and risk management strategies for chemicals under TSCA. According to EPA officials, OPPT’s Risk Assessment Division uses a number of different streams of information—including IRIS assessments—to prepare chemical risk assessments in order to make determinations about the safety of chemicals, and the Chemical Control Division uses those risk assessments to prepare risk management plans for chemicals. Prior to 2016, environmental and industry stakeholder organizations expressed concern that public confidence in the safe use of chemicals in commerce was decreasing and that federal oversight should be strengthened. For example, according to an American Bar Association guide to the new TSCA, the desire for reform was driven by a proliferation of state-based chemical initiatives threatening to disturb interstate commercial transactions and by a continuing erosion of public confidence in TSCA’s ability to protect human health and the environment from unreasonable risks presented by chemicals. In addition, according to a statement from the Environmental Defense Fund, federal oversight could not keep pace with science or rapidly expanding production and use of chemicals. In June 2016, Congress passed the Lautenberg Act, which amended TSCA in several ways. Table 1 summarizes some of the major changes in the act, along with the purpose and application of TSCA’s major sections. Since passage of the Lautenberg Act, several areas of disagreement have arisen among stakeholders regarding the implementation of various aspects of the act. One of the main points of ongoing discussion centers on what conditions of use EPA must consider in a chemical risk evaluation under TSCA. 
EPA and some stakeholders also disagree on other areas such as the methodologies EPA uses in its systematic review approach, the extent to which companies’ data are exempt from disclosure, and the extent to which the fees rule accurately reflects EPA’s costs for implementing TSCA. Some of these issues have resulted in litigation. The IRIS Program Has Made Progress in Addressing Identified Process Challenges, but EPA Leadership Deliberations Delayed Progress on Producing Assessments The IRIS Program has addressed many process challenges, such as by making changes to address the length of time it takes to develop chemical assessments and to increase transparency, but progress toward producing chemical assessments has been delayed. Specifically, the release of documents related to IRIS assessments was delayed for nearly 6 months because EPA leadership instructed the IRIS Program not to release any assessment documentation pending the outcome of EPA leadership deliberations concerning IRIS Program priorities. The IRIS Program Has Made Progress in Addressing Identified Challenges The IRIS Program in 2011 began making changes to address identified challenges, particularly the length of time the program took to produce assessments and the level of transparency in how the program prepared assessments. The program has made some progress since the beginning of 2017 toward producing assessments and is ready to release assessment-related documents. These changes were made in response to program implementation challenges identified by governmental, industry, academic, and non-governmental stakeholders in recent years. For example, in its 2011 report, NAS identified timeliness and transparency as issues. 
In our review of the 2011 and 2014 NAS reports and other documentation as well as our interviews with IRIS officials and leadership and officials in program and regional offices that use IRIS assessments, we identified the key actions the IRIS Program has taken to address lack of timeliness in producing assessments and lack of transparency in how it produces assessments. The IRIS Program Has Made Changes to Address Timeliness Developing IRIS assessments has historically been a lengthy process. Because of the rigor of the IRIS process and the amount of literature that program staff must search and consider, producing an assessment typically takes several years, as we found in December 2011. Program and regional offices that use IRIS assessments understand this, and officials from several program and regional offices told us that despite the length of time it takes for the IRIS Program to complete its assessments, they prefer these assessments as sources of information over other agencies’ toxicity assessments. To address the length of time it takes to produce assessments, the IRIS Program is (1) employing project management principles and specialized software that enable the program to better plan assessment schedules and utilize staff to make the systematic review process more efficient; (2) focusing on better scoping assessments to create timely, fit-for-purpose products that address specific agency needs; and (3) streamlining the peer review process as much as possible. The Program Has Adopted Project Management Principles and New Software The first way in which the IRIS Program is addressing the length of time it takes to produce assessments is by utilizing project management principles and new software that enable the program to better plan assessment schedules and utilize staff. IRIS officials said that by using these tools, IRIS staff are able to view project tasks, timelines, and milestones to manage their individual tasks and assessment work. 
For example, IRIS officials said that as part of an EPA-wide initiative, they began incorporating lean management techniques, which aim to improve efficiency and effectiveness by reducing unnecessary process steps and waiting time. Additionally, IRIS officials said that they have begun using a staffing model that trains staff to be proficient in all phases of the systematic review process (i.e., screening, data extraction, study evaluation, and evidence synthesis). This modularity will make it easier for staff to work across teams and on multiple projects, assisting with systematic review needs while also contributing in their areas of expertise, according to IRIS Program officials. In addition, the IRIS Program began using both project management software and business intelligence and visualization software in 2017. IRIS Program leadership is using this software to generate resource allocation reports showing staff assignments, enabling leadership to better manage staff workloads. According to IRIS officials, the recent adoption of specialized systematic review software also enables program staff to perform more literature searches faster, and the ability to filter search results allows staff to more quickly find the most relevant information for an assessment. Use of software tools with machine-learning capabilities facilitates program staff’s ability to screen studies for relevance more quickly compared to approaches used before 2017. Prior to the adoption of these specialized software tools, much of the development of an assessment was manual (i.e., using a spreadsheet). For example, for one assessment developed manually, contractors working on an IRIS assessment took over 200 hours to screen and catalog 1,200 epidemiological studies, including carrying out quality assurance checks. By comparison, using machine-learning tools, EPA staff were able to screen almost 5,500 articles in about 30 hours. 
With the new tools, quality assurance was embedded into the workflow by having two independent reviewers and a software-facilitated process track and resolve screening conflicts. Additionally, an official from EPA’s National Health and Environmental Effects Research Laboratory said that the laboratory uses a similar screening process. The laboratory worked with the IRIS Program to identify similar constructs in their processes and used each other’s results to make changes and validate tools used by both. According to IRIS officials, as a result, the use of these tools has created more efficient workflow processes, leading to considerable cost and time savings. The incorporation of systematic review software tools has greatly helped the program more efficiently carry out tasks like screening literature, evaluating study quality, extracting data, and developing visualizations, according to IRIS Program officials we interviewed. Most importantly, the software tools allow multiple staff members to work on tasks simultaneously, rather than one at a time, facilitating concurrent completion of key assessment pieces. The Program Tailors Assessments to Program and Regional Office Needs The second way in which the IRIS Program is reducing the length of time it takes to produce assessments is by tailoring them to program and regional office needs, called fit-for-purpose assessments. According to IRIS officials, part of the reason assessments historically were time- consuming was because the program tried to synthesize and present all possible information on the human health effects of a particular chemical, including multiple exposure pathways (e.g., inhalation, ingestion, or dermal) and reference doses, reference concentrations, and cancerous and non-cancerous effects. This required large amounts of data extraction and was very time intensive. Beginning in early 2017, the program began implementing the fit-for-purpose approach to producing assessments. 
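The dual-independent-reviewer workflow described above—two reviewers screen each study, agreements are accepted, and disagreements are tracked for resolution—can be sketched in a few lines of code. This is a hypothetical illustration of the general technique, not the actual software the IRIS Program uses; the study identifiers and decision labels are assumptions:

```python
def screen_studies(reviewer_a, reviewer_b):
    """Compare two reviewers' include/exclude calls for each study.

    Studies where the reviewers agree are accepted with that decision;
    disagreements are routed to a conflict queue for resolution
    (e.g., by discussion or a third reviewer).
    """
    agreed, conflicts = {}, []
    # Only studies screened by both reviewers can be compared.
    for study_id in sorted(reviewer_a.keys() & reviewer_b.keys()):
        if reviewer_a[study_id] == reviewer_b[study_id]:
            agreed[study_id] = reviewer_a[study_id]
        else:
            conflicts.append(study_id)
    return agreed, conflicts

# Hypothetical screening decisions for three studies.
a = {"study-1": "include", "study-2": "exclude", "study-3": "include"}
b = {"study-1": "include", "study-2": "include", "study-3": "include"}

agreed, conflicts = screen_studies(a, b)
print(agreed)     # studies 1 and 3 accepted as "include"
print(conflicts)  # study 2 flagged for conflict resolution
```

By the figures cited above, throughput rose from roughly 6 studies per hour (1,200 studies in over 200 hours) to roughly 180 per hour (about 5,500 articles in 30 hours), an increase of around thirtyfold.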
IRIS officials said the idea is that instead of producing a wide-ranging assessment, the program can produce assessments that are more limited in scope and targeted to specific program and regional office needs, reducing the amount of time IRIS staff need to search for information, synthesize it, and draft, review, and issue an assessment. For example, if the Office of Air and Radiation needed a chemical assessment that examined only inhalation exposures, the IRIS Program could limit its assessment to a single exposure pathway, which would reduce the amount of data that staff review and extract and, with less text to draft and less complex peer reviews, allow the assessment to more quickly move through the process. IRIS officials said that if offices make subsequent requests for other effects or exposure pathways, the IRIS Program can update the original assessment. IRIS officials said that they expect time savings as a result of moving to the fit-for-purpose model. As of November 1, 2018, the IRIS Program had produced two fit-for-purpose assessments: a request for correction on chloroprene and an update of the assessment on acrolein. An assessment on perfluorobutane sulfonic acid (PFBS) was also released for public comment following peer review. PFBS is a member of a class of man-made chemicals known as per- and polyfluoroalkyl substances (PFAS)—a group that also includes perfluorooctane sulfonic acid (PFOS), perfluorooctanoic acid (PFOA), GenX, and many others. In addition, since 2017, the IRIS Program released scoping and problem formulation materials for six IRIS chemical assessments (nitrates/nitrites, chloroform, ethylbenzene, uranium, ammonia, and naphthalene). Additionally, the program is examining ways to assist program and regional offices with information that may not necessitate developing a full assessment. 
For example, the Office of Air and Radiation was doing work using a toxicity value for acrolein that the California Environmental Protection Agency prepared in 2008, because that value was more recent than the value in the IRIS database. However, a large number of studies on acrolein had been released since 2008, so the IRIS Program screened approximately 10,000 new studies and concluded that the study used by the California Environmental Protection Agency in 2008 was still the most appropriate study for deriving a chronic toxicity value. In addition, IRIS staff developed an updated draft reference concentration for acrolein based on this study. The screening and update process took approximately 4 months, demonstrating how the IRIS Program’s use of new tools and a targeted scope resulted in more timely attention to program office needs.

The Program Is Streamlining the Peer Review Process

The third way the IRIS Program is addressing the length of time it takes to produce assessments is by streamlining the peer review process as much as possible without compromising the quality of the review. EPA guidelines require peer review of all IRIS assessments. Smaller, less complex assessments may be peer reviewed through a contractor-led letter review or panel; more complex assessments are usually reviewed by a full Scientific Advisory Board (SAB) or a NAS panel, though IRIS leadership determines the most appropriate method of peer review based on Office of Management and Budget and EPA Peer Review Handbook guidelines. While the contractor-led letter or panel reviews are no less robust than full SAB or NAS panel reviews, they are usually smaller and completed in less time because they cover smaller, less complex IRIS assessments. The time savings occur because the reviewers do not typically meet in person, or may meet only once, and generally take only a few months to complete their reviews.
In contrast, SAB and NAS panels involve larger numbers of people who meet multiple times, review longer and more complex assessments, and must reach consensus on their reviews. As a result, SAB and NAS peer reviews can take more than a year to complete. IRIS officials said that as they try to produce more fit-for-purpose assessments that are smaller in scope, they plan to use letter reviews, as appropriate, to streamline the peer review process. IRIS Program officials said they also hope that other changes they recently implemented, primarily increased transparency and systematic review, will help speed up the peer review process by producing a higher-quality overall draft.

The IRIS Program Has Made Changes to Address Lack of Transparency

Another major category of NAS recommendations that the IRIS Program has addressed is the need for greater transparency in how the program conducts assessments. For example, one industry representative we interviewed in August 2018 described the IRIS Program, before it began making changes, as a “black box” because “no one knew how the program created its methodologies, weighted evidence, or produced assessments.” In response, the IRIS Program has in the past several years (1) implemented systematic review, which provides a structured and transparent process for identifying relevant studies, reviewing their methodological strengths and weaknesses, and integrating these studies as part of a weight-of-evidence analysis, and (2) increased outreach efforts with stakeholders and the public, both in the frequency and in the depth of content about assessment preparation.

The Program Began Implementing Systematic Review as a Basis of Its Assessments

The IRIS Program began addressing the need for greater transparency by implementing systematic review as a basis for every assessment and has been doing so for several years.
A systematic review is a structured and documented process for transparent literature review. It is a scientific investigation that focuses on a specific question and uses explicit, prespecified scientific methods to identify, select, assess, and summarize the findings of similar but separate studies. The goal of systematic review methods is to ensure that the review is complete, unbiased, reproducible, and transparent. By using systematic review, the IRIS Program can demonstrate that it considered all available literature in forming conclusions and deriving toxicity values. Utilizing the new software tools described above allows program staff to search more widely than before and to identify the most relevant results faster and more accurately. The IRIS Program is working with technical experts to increase the applications of machine learning for carrying out systematic review. Additionally, new software allows the IRIS Program to save and publish its search strings and to indicate why it selected certain studies over others for review and inclusion. The software also allows multiple staff to check searches and to concur or not concur with the initial assessment about including a scientific article in the draft assessment. IRIS officials told us that the transparency associated with systematic review and clearer explanation of methodologies in assessments (as well as the release of subsidiary documents, such as IRIS Assessment Plans and Assessment Protocols) will improve stakeholders’ understanding of how the program arrives at its conclusions.

The Program Has Made Changes to Communication Frequency and Type

The IRIS Program also furthered transparency by increasing the frequency, structure, and content of communications with EPA program and regional offices about overall program priorities and individual assessments. This allows EPA program and regional offices to know when to expect assessments, as well as what those assessments will cover.
To prepare the 2015 Multi-Year Agenda, the IRIS Program solicited requests from EPA program and regional offices about which chemical assessments they needed; these requests were released in December 2015. When new leadership joined the IRIS Program in early 2017, the new officials began reaching out to individual program and regional offices to re-confirm their needs and priorities. IRIS officials said this effort was in part to ensure that the IRIS Program was delivering what the program offices needed, as well as to help the IRIS Program keep its priorities up to date and ensure that resources (primarily staff) were aligned with EPA-wide priorities. Based on these conversations with program and regional office staff, the IRIS Program made some chemical assessments higher priority and removed others from the program’s workflow, consistent with stated needs. In May 2018, the IRIS Program prepared a statement for posting on the IRIS website outlining these changes to the program’s workflow and an updated list of assessments that were being developed with anticipated completion time frames. However, EPA leadership in ORD—the office that oversees the IRIS Program—did not approve this statement for release because current EPA leadership in program and regional offices had not formally requested these assessments. Nevertheless, officials from program and regional offices that use IRIS assessments told us that they received clear communication from the IRIS Program about priorities and timelines for individual assessments. According to these officials, some of this communication took place when IRIS Program leadership reached out to program and regional office officials to confirm their needs, and some took place during monthly telephone calls the IRIS Program held to update stakeholders on assessment development timelines. 
Program and regional office officials told us that they appreciated the IRIS Program’s recent efforts to understand program and regional office needs and timelines; communicate the status of assessments more frequently; and find ways to assist program offices that may not require developing a full assessment, such as assessment updates or literature reviews. Since 2013, the IRIS Program has released preliminary assessment materials, including IRIS Assessment Plans and assessment protocols, so that EPA and interagency stakeholders and the public could be aware of scoping and problem formulation for each assessment. Since 2017, according to EPA, these documents have had a new structure that better demonstrates the application of systematic review, and they continue to convey EPA’s need for each assessment and to frame questions specific to each assessment. Officials in several program and regional offices that use IRIS assessments told us that the release of IRIS Assessment Plans and protocols was very helpful because it allowed them to offer early input to the IRIS Program about the scope of an assessment, when such input could still affect the direction of the assessment. IRIS officials also said that they created templates for several parts of the assessment process, including the IRIS Assessment Plans and assessment protocols, which help maintain consistency throughout assessment development and from one assessment to the next.

The Program Made Progress in Early 2018 on Assessments in Development

During calendar year 2018, the IRIS Program planned to release documents or hold meetings for 15 of the 23 ongoing chemical assessments in development, as well as for the IRIS Handbook and a template for assessment protocols. From January through May 2018, the IRIS Program met each of its internal deadlines for work on 9 different chemical assessments and released the template for assessment protocols for agency review.
The IRIS Program also produced a report to Congress on the program’s work in January 2018 and took part in a NAS review of the program in February 2018. The NAS review, which offered a third-party assessment of the program’s efforts, was supportive of the ongoing changes aimed at ensuring data quality, applying new systematic approaches for data analysis, expanding stakeholder engagement, and increasing the efficiency of assessments. According to the report, NAS reviewers were impressed with the changes instituted in the IRIS Program since 2014, including substantive reforms by new IRIS Program leadership, such as the development, implementation, and use of systematic review methods to conduct IRIS assessments. In addition, as of August 2018, EPA had issued the final IRIS assessment of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX). In early November 2018, IRIS officials told us that the agency had almost completed internal review of the handbook, which was being prepared for public release. In December 2018, the IRIS Program and OPPT participated in a NAS workshop that informed the systematic review of mechanistic evidence.

Budget Cuts May Impact the Program’s Ability to Expand Assessment Development

The IRIS Program has made important changes aimed at producing more timely and transparent assessments, but IRIS officials told us that proposed budget cuts have caused them concern about whether they will have sufficient resources to expand assessment work in the future. The human health risk assessment area, of which IRIS’s budget makes up approximately half, has been funded at about $38 million annually since fiscal year 2013, based on our review of EPA budget documents. However, the President’s budget request for human health risk assessment work in fiscal years 2018 and 2019 was $22.5 million and $22.2 million, respectively. This represents a cut of approximately $17 million from previous budget levels dating back to fiscal year 2013.
The IRIS Program budget would drop approximately 40 percent, from $20.8 million to approximately $12 million, if these cuts were enacted. Congress did not support these reductions. Specifically, according to the joint explanatory statements accompanying the Consolidated Appropriations Act, 2018, and Consolidated Appropriations Act, 2019, Congress had agreed to continue providing funding at fiscal year 2017 enacted levels. Cuts to the program could impact EPA’s regulatory work: officials in almost all of the program and regional offices that use IRIS assessments told us that they rely on IRIS assessments to do their work. IRIS is the first place they look for chemical toxicity values, and if the IRIS Program were unable to produce assessments, their offices would be challenged to meet statutory deadlines, with a generally negative effect on public health.

IRIS Program Progress toward Producing Assessments Was Delayed by EPA Leadership Deliberations about Priorities

The IRIS Program made progress developing assessments and producing assessment documentation (e.g., IRIS Assessment Plans and protocols) in early 2018. However, EPA leadership deliberations about the program’s priorities that took place from June through December 2018 delayed the program’s assessment production. IRIS officials told us that in early June 2018, EPA leadership in ORD informed them that the IRIS Program could not release an assessment without a formal request for that assessment from the current leadership of a program office. At the request of the Administrator, IRIS officials prepared a survey of program and regional offices, asking them to re-confirm their needs for 20 assessments that were in development. This survey was sent by memorandum in August 2018. Program office responses were to be signed by the Assistant Administrator of each program office to ensure that the re-confirmations were consistent with the priorities of EPA program office leadership.
While survey responses were being compiled, EPA leadership in ORD instructed the IRIS Program not to publicly release any assessment documentation. As a result, any assessment or subsidiary assessment document (e.g., an IRIS Assessment Plan or protocol) that was ready for agency review, public comment, or peer review was unable to proceed through the IRIS assessment development process. In late October 2018, prior to releasing results of the initial program and regional office survey, EPA leadership in ORD made a second request of program offices for a prioritized list of assessments. According to officials from the IRIS office, whom officials from some program offices queried for advice, ORD’s second request was made verbally at a meeting and included direction to the program offices to limit their requests to no more than three to four chemicals. ORD’s request did not provide information on the basis for selecting priorities or the reason for limiting requests to three or four of the chemical assessments from the original survey submissions. The calls for advice from program office officials were the first the IRIS Program heard about the request for a prioritized list, according to IRIS Program officials. Since neither the program and regional offices nor the IRIS Program had information from the Administrator’s office about what the prioritization was meant to achieve, the IRIS Program was unable to provide guidance about which chemicals might be considered a priority or how many the offices might be able to continue work on. When EPA leadership’s deliberations about the program’s priorities were completed, a memorandum was issued on December 4, 2018, that listed 11 chemical assessments that the IRIS Program would develop. This was a reduction of the program’s workflow from 22 assessments, but the memorandum announcing the reduced workflow gave no reason for the reduction.
The memorandum accompanying the list of 11 chemicals gave no indication of when more assessments could be requested or whether IRIS’s workflow would remain at 11 chemicals for the foreseeable future. According to the memorandum, the 11 chemicals were requested by two EPA program offices (the Office of Water and the Office of Land and Emergency Management). We received this memorandum at the end of our review and did not have the opportunity to review the prioritization process that led to its drafting. Two weeks after the issuance of the memorandum, the IRIS Program publicly issued an outlook of program activities, which included two additional assessments that were not included in the memorandum. These two assessments, ethyl tertiary butyl ether (ETBE) and tert-butyl alcohol (TBA), were not included in the memorandum because they were out for public comment and external peer review. Furthermore, four assessments that were in the later stages of development and had not been issued were not included in the December 2018 Outlook: acrylonitrile, n-butyl alcohol, formaldehyde, and polycyclic aromatic hydrocarbons (PAH). The assessment of formaldehyde was, according to the “IRIS Assessments in Development” website, at Step 4 of the IRIS process (the assessment was drafted and ready to be released for public comment and external peer review). The absence of these four assessments from the December 2018 Outlook could create confusion for stakeholders interested in them. EPA provided no information on the status of these four assessments or whether it planned to discontinue working on them or restart them at another time. As we have previously reported, an overarching factor that affects EPA’s ability to complete IRIS assessments in a timely manner is that once a delay in the assessment process occurs, work that has been completed can become outdated, necessitating rework throughout some or all of the assessment process.
Thus, it remains to be seen when these assessments can be expected to move to the next step in the IRIS process or be completed. As of December 19, 2018, the status of the 13 assessments in the December 2018 Outlook was:

- External peer review: ETBE and TBA.
- Draft development: arsenic, inorganic; chromium VI; polychlorinated biphenyls (PCBs; noncancer); perfluorononanoic acid (PFNA); perfluorobutanoic acid (PFBA); perfluorohexanoic acid (PFHxA); perfluorohexane sulfonate (PFHxS); and perfluorodecanoic acid (PFDA).
- Scoping and problem formulation: mercury salts; methylmercury; and vanadium and compounds.

According to IRIS officials, the IRIS Program was unable to release any work after June 2018 while it waited for feedback from the Administrator’s office regarding whether its assessment workflow was consistent with agency priorities. IRIS officials told us that staff continued whatever draft development work they could do internally, but several IRIS staff have been working increasingly for OPPT to support its work preparing risk evaluations under TSCA. ORD reported to us that in September 2018, 3 months after IRIS assessments were stopped from being released because of ongoing EPA leadership deliberations, 5 of approximately 30 IRIS staff were supporting OPPT with 25 to 50 percent of their time. In October 2018, 4 months after IRIS assessments were stopped from being released, 28 of approximately 30 IRIS staff were supporting OPPT with 25 to 50 percent of their time. According to IRIS officials, this was occurring primarily because OPPT has a significant amount of work to do to meet its statutory deadlines and needed IRIS staff expertise to help meet those deadlines. As noted above, TSCA establishes a regulatory standard that generally differs from those under other environmental laws, so the TSCA assessments will not necessarily be relevant to other EPA programs that have relied on IRIS endpoint values in making their regulatory decisions.
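The staffing figures above can be converted into a rough range of full-time-equivalent (FTE) effort diverted from IRIS work to OPPT. This is illustrative arithmetic only, using the reported "N of approximately 30 staff at 25 to 50 percent of their time" figures; it is not an EPA calculation.

```python
# Illustrative arithmetic: convert "N staff at 25-50 percent time"
# into a range of FTE effort diverted from IRIS work to OPPT.
def fte_range(staff_count, low_share, high_share):
    """Return the (low, high) FTE-equivalents for staff_count people
    each spending between low_share and high_share of their time."""
    return staff_count * low_share, staff_count * high_share

sep_low, sep_high = fte_range(5, 0.25, 0.50)    # September 2018 figures
oct_low, oct_high = fte_range(28, 0.25, 0.50)   # October 2018 figures

print(f"September 2018: {sep_low:.2f} to {sep_high:.2f} FTEs diverted")
print(f"October 2018:   {oct_low:.2f} to {oct_high:.2f} FTEs diverted")
```

On these assumptions, the October 2018 figure would amount to roughly 7 to 14 FTEs out of a program of approximately 30 staff.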
EPA Has Demonstrated Progress Implementing TSCA by Responding to TSCA Statutory Deadlines through the End of Fiscal Year 2018, but Key Challenges Remain

EPA has demonstrated progress implementing TSCA by responding to TSCA’s statutory deadlines through the end of fiscal year 2018, including promulgating rules, developing guidance, and releasing reports. However, EPA faces key challenges to its ability to implement TSCA, such as managing the risks posed by ongoing litigation, ensuring appropriate resources, developing guidance to ensure consistency, and ensuring that the new chemicals review process is efficient and predictable.

EPA Responded to TSCA Statutory Deadlines

EPA has responded to initial statutory deadlines under TSCA, as amended by the Lautenberg Act, including requirements to promulgate new rules, develop guidance, and release reports. For example, EPA:

- began 10 risk evaluations drawn from the 2014 update of the TSCA Work Plan within 180 days of enactment of the Lautenberg Act (§ 6(b)(2)(A));
- submitted an initial report to Congress estimating capacity for and resources needed to complete required risk evaluations within 6 months of enactment (§ 26(m)(1));
- carried out and published in the Federal Register an inventory of mercury supply, use, and trade in the United States by April 1, 2017 (§ 8(b)(10)(B));
- developed guidance to assist interested persons in developing and submitting draft risk evaluations within 1 year of enactment (§ 26(l)(5)); and
- developed a plan for using alternative test methods to reduce use of vertebrate animal testing within 2 years of enactment (§ 4(h)(2)(A)).

In addition, in four areas in which Congress required EPA to establish processes and structures for TSCA, EPA finalized four rules detailing the general processes for prioritizing and evaluating chemicals under TSCA, known together as the Framework Rules. EPA responded to the 1-year deadlines to establish three of the four Framework Rules.
These three rules are the risk prioritization rule, which explains EPA’s process for prioritizing existing chemicals for risk evaluation; the risk evaluation process rule, which explains EPA’s process for conducting risk evaluations on existing chemicals; and the inventory notification rule, which requires manufacturers and processors of chemical substances to report which chemicals are currently in commerce. The fourth Framework Rule EPA issued, which had no issuance deadline, implements a Lautenberg Act provision authorizing EPA to collect fees for carrying out a number of different activities under TSCA, including collecting fees from manufacturers and processors that submit new chemicals or submit chemicals for significant new uses to EPA for review. Though EPA responded to all of the statutory deadlines, some environmental and industry stakeholder organizations we interviewed told us that they do not believe this is a complete measure of how well EPA is implementing TSCA. Representatives from one environmental stakeholder organization told us in July 2018 that it was still too early to assess how well EPA is implementing TSCA because none of the existing chemical risk evaluations ongoing under the new process had been released; the wording in the new rules and documentation is unclear; and the risk prioritization rule, the risk evaluation rule, and the inventory notification rule have been challenged in court. However, in January 2019 they told us that they had been too optimistic in their assessment of TSCA implementation and believe EPA is falling behind in its progress. As of December 2018, representatives from another environmental stakeholder group told us that, while EPA has met a number of major statutory deadlines, the agency’s rules and other actions do not reflect the best available science and are contrary to both the letter and intent of the amended TSCA.
However, in January 2019 an industry stakeholder organization noted that the 2016 amendments to TSCA are generally being implemented effectively and efficiently as Congress envisioned and that the agency continues to meet important deadlines required by the law. They also told us that EPA’s TSCA program is utilizing the best available science and a weight-of-evidence approach to make high-quality chemical management decisions. Representatives from industry stakeholder organizations we interviewed told us they believe the rules are consistent with TSCA, but that EPA is not consistently meeting the 90-day deadline to make determinations on new chemicals or the 30-day deadline to make determinations on low-volume exemptions.

EPA Faces Challenges Implementing TSCA

EPA faces challenges with its ability to implement TSCA, such as managing the risk posed by ongoing litigation, ensuring appropriate resources, developing guidance documents to ensure consistency, and ensuring that the new chemicals review process is efficient and predictable.

Three Rules Are Undergoing Litigation

Three of the four Framework Rules that EPA issued to implement TSCA have been challenged in court: the risk prioritization rule, the risk evaluation rule, and the inventory notification rule.

Procedures for Prioritization of Chemicals for Risk Evaluation under the Toxic Substances Control Act (risk prioritization rule). In Safer Chemicals, Healthy Families v. U.S. Environmental Protection Agency, a collection of environmental and public health organizations challenged several aspects of EPA’s TSCA implementation, including the risk prioritization rule. Specifically, the environmental organizations argue, among other things, that the plain language of TSCA requires EPA to consider all conditions of use in prioritizing chemicals for review under TSCA, rather than excluding, for example, uses that EPA believes are “legacy uses” for which a chemical is no longer marketed.
EPA and chemical industry intervenors respond by arguing that TSCA grants EPA discretion to determine what conditions constitute a chemical’s conditions of use and to generally exclude legacy activities, primarily historical activities that do not involve ongoing or prospective manufacturing, processing, or distribution in commerce of a chemical substance as a product.

Procedures for Chemical Risk Evaluation under the Amended Toxic Substances Control Act (risk evaluation rule). In Safer Chemicals, Healthy Families v. U.S. Environmental Protection Agency, the environmental organizations also contend that EPA’s risk evaluation rule is contrary to TSCA, in part because, as noted above, the rule “impermissibly” excludes uses that the law requires EPA to include in its risk evaluations. EPA and industry intervenors responded by arguing that TSCA grants EPA discretion to determine what conditions constitute a chemical’s conditions of use. The organizations also argued that the risk evaluation rule would deter public participation in the risk evaluation process by imposing criminal penalties on a member of the public who submits incomplete information to EPA while not imposing similar penalties on manufacturers. In August 2018, the government moved to vacate the penalty regulation, and the environmental organizations consented to this motion.

TSCA Inventory Notification (Active-Inactive) Requirements (inventory notification rule). In Environmental Defense Fund v. U.S. Environmental Protection Agency, an environmental organization challenged EPA’s inventory notification rule, which EPA issued in response to a TSCA requirement that EPA identify which chemicals in the TSCA inventory are still in use and require substantiation of claims that chemical identities constitute confidential business information that can be withheld from public disclosure.
The environmental organization argued, among other things, that the rule impermissibly allows any person to assert confidentiality claims for any chemical that person manufactures or processes, rather than just the original claimant. EPA and industry intervenors responded in part by arguing that TSCA specifically allows any affected manufacturer to maintain an existing confidentiality claim for a specific chemical identity, which the industry intervenors assert constitutes critically important intellectual property. OPPT officials told us they are trying not to anticipate the results of the litigation and, instead, to address the outcome of each case as it is decided. They stated that they are staying aware of developments in ongoing litigation and are constantly considering potential outcomes but believe it would not be reasonable to prepare explicit resource plans for unknown future scenarios. If EPA loses any of these lawsuits, it may need to devote additional resources to implement the relevant provisions of TSCA. For example, if the suit involving the risk evaluation rule is successful, EPA may be forced to redo parts of its risk evaluations close to the December 2019 deadline to finalize these evaluations. EPA is required to complete its first 10 existing chemical evaluations not later than 3 years after the date on which it initiated the risk evaluations, which was December 2016. TSCA also allows for an extension of the risk evaluation deadlines for up to 6 months if the agency deems it necessary.

EPA Faces Challenges Ensuring That It Has Appropriate Resources

The Lautenberg Act greatly increased OPPT’s workload. Prior to the enactment of the Lautenberg Act, EPA did not have deadlines for completing existing chemical evaluations. Under the Lautenberg Act, EPA must finalize 10 ongoing risk evaluations by December 2019, which represents a tight deadline, according to EPA officials.
Furthermore, the law requires EPA to ensure that 20 risk evaluations are ongoing for high-priority substances 3-1/2 years after enactment and that at least 20 chemical substances have been designated as low-priority substances. In addition, under TSCA prior to the Lautenberg Act, a new chemical could enter commerce after 90 days unless EPA took action to the contrary. Under the Lautenberg Act, EPA is required to make a determination on a new chemical before it can be manufactured, another source of increased workload. Partially because of the increased workload, some OPPT officials told us that they have concerns about staff capacity within OPPT. Officials in both the Chemical Control Division (responsible for risk management) and the Risk Assessment Division (responsible for risk assessment) said that they do not have sufficient resources to do their work. This included staff from all five technical teams we interviewed in the Risk Assessment Division. Technical teams are working groups organized by discipline that bring together experts from across OPPT branches. The Risk Assessment Division is particularly affected by the heavy workload, according to OPPT officials and representatives from an industry stakeholder organization. The division must review all of the premanufacture notices for new chemicals and contribute to the first 10 existing chemical evaluations. Officials from the Chemical Control Division told us that the Risk Assessment Division is struggling more because its work requires more technical employees. The officials said that EPA is hiring additional full-time equivalents (FTEs), but it takes time to train new people, and this will initially increase workload. Officials told us that in July 2018, OPPT had about 300 FTEs and was authorized to hire 40 additional FTEs. As of October 2018, OPPT officials told us that they had hired or extended offers to 20 to 25 of those 40 and continued to hire more employees.
OPPT officials told us that reaching an appropriate level of FTEs—including recruiting and retaining staff—is challenging. OPPT officials said they expect that the recently announced initiative to implement direct hiring authority for scientific and technical positions will have a positive impact on these efforts. To address the staffing challenge, staff have also been reassigned from other parts of EPA to OPPT. For example, staff in the Safer Choice Program—an EPA program that helps consumers, businesses, and purchasers find products that perform and are safer for human health and the environment—were redeployed to the Chemical Control and the Risk Assessment Divisions. Representatives from both industry stakeholder organizations we interviewed told us that it can be difficult to work with recently reassigned staff who are not familiar with the chemicals they are working on. Representatives from an industry stakeholder organization told us that, in some cases, OPPT staff are ill-prepared to make decisions about a premanufacture notice. OPPT senior officials said there is always a learning curve for reassigned employees, but they do not put new people in positions to make decisions on premanufacture notices. They said that these decisions are never made by one person in a vacuum. OPPT officials and staff told us that they are generally optimistic about an upcoming reorganization of OPPT that will separate assessment and management of new and existing chemicals programs and better align the structure of OPPT with the focus of TSCA’s provisions. For example, the Chemical Control Division and the Risk Assessment Division currently each handle both new and existing chemicals, and the planned reorganization will divide the divisions into new and existing chemical divisions. However, staff told us that they have concerns about whether the new divisions will be adequately staffed, the timing of the reorganization, and their future placements. 
Staff from multiple technical teams we interviewed in the Risk Assessment Division said that they are not sure if, after the reorganization, the new divisions will be adequately staffed. Staff from one technical team said there has been increased attrition in recent years, partially because of concerns about the upcoming reorganization. Staff from another technical team said that a large number of management positions are unfilled. Staff from multiple technical teams told us that it will take time after the reorganization to redistribute work and train staff. Staff from one team said the reorganization is ill-timed because there are currently too many other ongoing high-priority projects. Staff from multiple technical teams also told us that they are experiencing anxiety about their future placements and with whom they will work. In commenting on a draft of this report, EPA stated that the concerns raised by staff are likely common to any program undergoing change. OPPT officials said they submitted the reorganization proposal to EPA’s Office of Mission Support—formerly the Office of Administration and Resources Management—in October 2018 and that it could take several more months as EPA management works out details with labor unions and addresses other issues. Officials said that they anticipate implementing the reorganization in early 2019. OPPT senior officials said that now that OPPT has many new responsibilities and a heavier workload, they are taking steps to improve capacity by implementing the reorganization and hiring new staff. The officials said that though there will inevitably be growing pains, the changes are part of a larger plan specifically designed to better position OPPT to implement TSCA. Senior officials also told us that they have spent considerable time setting expectations for new and existing staff. 
In tandem with the major changes that increased EPA’s workload, the 2016 amendments to TSCA authorize EPA to establish fees to defray a portion of the costs of administering TSCA sections 4, 5, and 6 and collecting, processing, reviewing, providing access to, and protecting information about chemical substances from disclosure, as appropriate, under TSCA section 14. Affected businesses began incurring fees under the new rule as of October 1, 2018, but it is unclear whether the fees collected will be sufficient to support relevant parts of the program. OPPT officials told us that while they are uncertain how much the fees rule will generate in its first year, they believe that the amount of money generated should stabilize over the course of a few years. Officials expect to collect an average of $20 million per year over the next 3 fiscal years. In fiscal year 2019, however, they expect to collect approximately $7 million to $8 million. According to EPA, the agency will be tracking its costs and use that information to adjust future fees, if appropriate. As required by law, EPA will evaluate and readjust, if necessary, the fees every 3 years. EPA estimates the average yearly cost of TSCA implementation for fiscal years 2019 through 2021 to be $80,178,000. EPA’s fiscal year 2019 budget justification shows $57,973,700 allocated to TSCA implementation. However, EPA does not expect a budget shortfall in fiscal year 2019 because, according to officials, they (1) have funds available from 2018 to support fiscal year 2019 needs, (2) receive support from other EPA offices like the Office of General Counsel and the Office of Research and Development, (3) expect fiscal year 2019 costs to be lower than the 3-year average described in the fees rule, and (4) expect some indirect costs to be covered by non-TSCA budget categories. 
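A quick arithmetic check of the budget figures above (a sketch using only the dollar amounts reported in this section; it does not model the carryover funds, cross-office support, or indirect-cost coverage that EPA cited):

```python
# Figures reported above, in dollars.
estimated_avg_annual_cost = 80_178_000  # EPA's estimated average yearly cost of TSCA implementation, FY2019-2021
fy2019_allocation = 57_973_700          # TSCA allocation in EPA's FY2019 budget justification

# Nominal difference that EPA expects to cover through FY2018 carryover funds,
# support from other EPA offices, and lower-than-average FY2019 costs.
gap = estimated_avg_annual_cost - fy2019_allocation
print(gap)  # 22204300, i.e., roughly $22.2 million
```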
EPA Faces Challenges Developing Guidance to Ensure Consistency EPA also faces challenges in developing guidance to ensure consistency in implementing the law. OPPT officials said that, given the tight timelines that TSCA requires, they have not yet created all the necessary guidance for staff implementing the law. Officials likened it to building an airplane while flying it: they must create guidance and processes while simultaneously applying them to chemical evaluations. Staff from four of five technical teams we interviewed are either currently updating their guidance, still developing their guidance, or have never developed guidance before. Staff from two teams told us that they are developing the guidance as they apply it to their work. OPPT officials told us that they are using some guidance that was in place before the Lautenberg Act was enacted, though they are working on updates. Representatives we interviewed from industry stakeholder organizations said they want EPA to be clear about its standards for the new chemicals program and how it is defining terms in TSCA. Representatives from one industry stakeholder organization suggested that EPA should establish some definitions and develop guidance on how to apply those definitions, in order to help both chemical manufacturers and reviewers within OPPT. In June 2018, EPA released “Points to Consider When Preparing TSCA New Chemical Notifications,” guidance that representatives from industry stakeholder organizations said is helpful, but they are still not sure how EPA is using information like the Points to Consider guidance in its evaluations and against what standard EPA’s reviewers are reviewing and assessing a chemical. Representatives we interviewed from industry stakeholder organizations said that decisions on new chemical reviews depend on individual reviewers because EPA has not provided the reviewers with guidance that ensures consistency. 
OPPT officials also said consistency is a challenge in conducting risk assessments. Representatives we interviewed from environmental stakeholder organizations did not mention consistency as an area of challenge. EPA Faces Challenges Ensuring That the New Chemicals Review Process Is Efficient and Predictable Representatives from both industry stakeholder organizations we interviewed also told us that the new chemicals program is too slow and unpredictable, which can negatively affect innovation. For example, one company told us, in comments provided through an industry stakeholder organization we interviewed, that it submitted a premanufacture notice for a substance that would decrease the potential for worker and environmental exposure while providing improved product performance. The approval process extended to nearly 550 days compared to the 90 days it typically took to obtain approval prior to TSCA’s amendment. EPA can request extensions, and submitters can voluntarily suspend the review process; therefore, the overall process can extend beyond the 90-day requirement. For example, in the new chemical review process, EPA first makes an initial determination. If a company does not like this initial determination, it can request more time to provide additional data or develop new data in an effort to get a positive final determination. A company may withdraw its submission prior to a final EPA determination if it is clear the determination will not be favorable and the chemical will be regulated. EPA officials said the agency does not violate the mandated timelines because submitters agree to voluntarily suspend the review process. However, representatives from one industry stakeholder organization told us that as of December 2018, with the passage of time and greater familiarity with Lautenberg, OPPT’s decision-making process has improved and is more predictable. 
EPA officials said that historically, even among new chemicals for which EPA completed review, 57 percent actually entered commerce. Officials said that in the past companies submitted new chemicals just to see what determinations EPA would make. Going forward, as of October 2018, officials said they expect larger fees will result in some companies choosing to be more selective in the chemicals they submit to the program. In addition, EPA officials told us that after OPPT’s reorganization, a dedicated team will focus on pre-notice meetings with companies. Officials said this should reduce some of the back and forth with submitters, thereby improving timelines. Representatives we interviewed from industry stakeholder organizations also told us that delays motivate companies to introduce chemicals first in foreign markets. For example, one company told us, in comments provided through an industry stakeholder organization we interviewed, that it developed a new technology in the United States, but because of the lengthy delays experienced with new chemicals reviewed under TSCA, it will neither register nor commercialize the product in the United States at this time. Rather, the company has decided to pursue commercialization in Europe, which will enable the company to deliver the benefits of this new technology to its customers in the European market sooner than is possible in the United States. Agency Comments We provided a draft of this report to EPA for its review and comment. We received written comments from EPA that are reproduced in appendix I and summarized below. In its written comments, EPA stated that while the draft comprehensively describes the challenges facing the TSCA and IRIS programs, it does not appropriately address EPA’s extensive progress in implementing TSCA, and EPA recommended that our final report include information regarding its accomplishments under the new law. 
Specifically, we report on the steps EPA has taken to respond to the requirements of the law because in many instances, whether EPA’s response is legally sufficient is in litigation, and GAO does not typically express a view on legal or factual matters in dispute before a court. We have updated our report with additional examples, which the agency provided in its comments, of steps it has taken to implement TSCA. In addition, EPA requested that we consider its progress made in addressing and controlling toxic chemicals with respect to the five criteria for removal from our high-risk list. The application of the high-risk criteria was not within the scope of this report. Our forthcoming 2019 high-risk update will address actions taken by agencies on the list, including EPA, since the last update in 2017. EPA said that to monitor progress, it had put into place a rigorous program; as a regular practice, EPA stated that Deputy Assistant Administrators from the Office of Chemical Safety and Pollution Prevention conduct monthly Business Review meetings with the Office Directors, Deputy Office Directors, lead region representatives, and other key staff. EPA stated that during these meetings they discuss their organizations’ operations and performance, including TSCA implementation status, using performance charts to track progress on mission measures, identify and update countermeasures, and resolve problems. However, over the year that we conducted our review, EPA officials did not mention conducting such meetings and did not provide documentation that such meetings took place. Further, in its written comments, EPA provided technical comments on the draft report, which we address as appropriate. In one comment, EPA stated that instead of noting that the agency has successfully implemented many statutory requirements, the draft report stated that EPA responded to deadlines. 
We believe the report correctly characterizes steps EPA has taken to implement TSCA, and, as noted above, whether EPA’s response is legally sufficient is in litigation, and GAO does not typically express a view on legal or factual matters in dispute before a court. In another case, the technical comments contradicted facts that we gathered during our review. For instance, while EPA stated that the draft report incorrectly noted that most of the IRIS staff had been working on TSCA activities, we provide further information to support our original statement; we replaced the term ‘most’ with specific data on the number of IRIS staff and the percentage of their time that was devoted to TSCA activities. Also in its technical comments, EPA stated that our analysis highlighted uncertainty resulting from the agency’s recent activities to ensure IRIS Program efforts were aligned with the highest priorities of the agency. EPA acknowledged that this action did result in a delay but that in the long term, it would ensure that EPA’s program and regional office priorities are being addressed and that each office is fully engaged in the development of IRIS assessments that will strengthen the agency’s ability to address its mission for protecting human health and the environment. However, as we state in our report, prior to releasing results from the initial program and regional office survey, EPA leadership in ORD made a second request for a prioritized list of chemical assessments. According to officials from the IRIS office, who were queried for advice, the second request was made verbally at a meeting and did not provide the offices with information on the basis for selecting priorities or the reason for limiting the number of assessments to three or four chemicals. In addition, the ultimate priority list EPA issued in December 2018 reflected the priorities of two program offices and did not provide evidence that other EPA program offices had no interest in IRIS assessments. 
Because EPA did not identify the basis for program offices to select priorities or the reason for limiting the number of chemicals to assess, the process was not transparent, leaving room for uncertainty. EPA also provided additional technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Comments from the Environmental Protection Agency Appendix II: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Diane Raynes (Assistant Director), Summer Lingard-Smith (Analyst in Charge), Alisa Carrigan, Tara Congdon, Richard P. Johnson, Amber Sinclair, and William Tedrick made key contributions to this report. In addition Karen Howard, Dennis Mayo, Dan Royer, and Sara Sullivan made important contributions. Related GAO Products Chemical Innovation: Technologies to Make Processes and Products More Sustainable. GAO-18-307. Washington, D.C.: February 8, 2018. Chemicals Management: Observations on Human Health Risk Assessment and Management by Selected Foreign Programs. GAO-16-111R. Washington, D.C.: October 9, 2015. Chemical Assessments: Agencies Coordinate Activities, but Additional Action Could Enhance Efforts. GAO-14-763. Washington, D.C.: September 29, 2014. Chemical Regulation: Observations on the Toxic Substances Control Act and EPA Implementation. GAO-13-696T. Washington, D.C.: June 13, 2013. 
Chemical Assessments: An Agencywide Strategy May Help EPA Address Unmet Needs for Integrated Risk Information System Assessments. GAO-13-369. Washington, D.C.: May 10, 2013. Toxic Substances: EPA Has Increased Efforts to Assess and Control Chemicals but Could Strengthen Its Approach. GAO-13-249. Washington, D.C.: March 22, 2013. Chemical Assessments: Challenges Remain with EPA’s Integrated Risk Information System Program. GAO-12-42. Washington, D.C.: December 9, 2011. Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA’s Integrated Risk Information System. GAO-08-440. Washington, D.C.: March 7, 2008. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Why GAO Did This Study EPA is responsible for reviewing chemicals in commerce and those entering the marketplace. Currently there are more than 40,000 active chemical substances in commerce, with more submitted to EPA for review annually. EPA's IRIS database contains the agency's scientific position on the potential human health effects that may result from exposure to various chemicals in the environment. EPA's IRIS Program, which produces toxicity assessments, has been criticized in the past for timeliness and transparency issues. In response, the IRIS Program committed to making program improvements starting in 2011, which the National Academy of Sciences (NAS) recently commended. TSCA as amended in 2016 provides EPA with additional authority to review both existing and new chemicals and to regulate those that EPA determines pose unreasonable risks to human health or the environment. This report describes (1) the extent to which the IRIS Program has addressed identified challenges and made progress toward producing chemical assessments; and (2) the extent to which EPA has demonstrated progress implementing TSCA. GAO reviewed NAS and EPA documents and interviewed officials from EPA and representatives from two environmental and two industry stakeholder organizations. What GAO Found The Environmental Protection Agency's (EPA) Integrated Risk Information System (IRIS) Program, which prepares human health toxicity assessments of chemicals, has made progress addressing historical timeliness and transparency challenges in the assessment process. Efforts to address timeliness include employing project management principles and specialized software to better plan assessments and utilize staff. 
To address the need for greater transparency in how the program conducts assessments, IRIS officials and the IRIS Program have implemented systematic review, which provides a structured and transparent process for identifying relevant studies, reviewing their methodological strengths and weaknesses, and integrating these studies as part of a weight of evidence analysis. Since the process improvements were implemented, the program made progress toward producing chemical assessments through May 2018. In June 2018, the EPA Administrator's office told IRIS officials that they could not release any IRIS-associated documentation without a formal request from EPA program office leadership. In August 2018, according to IRIS officials, program office leadership was asked to reconfirm which ongoing chemical assessments their offices needed. In late October 2018, these offices were asked to limit their chemical requests further, to the top three or four assessments. At the same time—4 months after IRIS assessments were stopped from being released—28 of approximately 30 IRIS staff were directed to support implementation of the Toxic Substances Control Act of 1976 (TSCA), as amended, with 25 to 50 percent of their time, according to officials. Then on December 19, 2018, the Office of Research and Development released its IRIS Program Outlook, which provided an updated list of 13 assessments. Eleven of the 13 chemicals on the IRIS Program Outlook were requested by two EPA program offices. A memorandum issued earlier in December, gave no indication of when additional assessments could be requested or what the IRIS Program's workflow would be in the near term. EPA has demonstrated progress implementing TSCA, which was amended in June 2016, by responding to statutory deadlines. For example, EPA finalized rules detailing the general processes for prioritizing and evaluating chemicals, known as the Framework Rules, but three of the four rules have been challenged in court. 
Environmental organizations have argued, among other things, that TSCA requires EPA to consider all conditions of use in prioritizing and evaluating chemicals, rather than excluding, for example, uses that EPA believes are "legacy uses," for which a chemical is no longer marketed. EPA argued that TSCA grants it discretion to determine what constitutes a chemical's conditions of use. Amendments to TSCA in 2016 increased EPA's responsibility for regulating chemicals and in turn, its workload. As such, EPA is required to prioritize and evaluate existing chemicals by various deadlines over an extended period and to make a regulatory determination on all new chemicals. Senior management told GAO that they were confident that ongoing hiring and reorganization would better position the office that implements TSCA. What GAO Recommends GAO made recommendations previously to improve the IRIS Program and TSCA implementation. EPA provided comments, which GAO incorporated as appropriate.
Background Direct Loan Program and Repayment Plans The Direct Loan program provides financial assistance to students and their parents to help pay for postsecondary education. Under the Direct Loan program, Education issues several types of student loans (see sidebar). Current William D. Ford Federal Direct Loan Types Subsidized Stafford Loans: Available to undergraduate students with financial need (generally the difference between their cost of attendance and a measure of their ability to pay, known as expected family contribution). Borrowers are not responsible for paying interest on these loans while in school and during certain periods of deferment, an option that allows eligible borrowers to temporarily postpone loan payments. Unsubsidized Stafford Loans: Available to undergraduate and graduate school students irrespective of financial need. Borrowers must pay all interest on these loans. PLUS Loans: Available to graduate student borrowers and parents of dependent undergraduates. Borrowers must pay all interest on these loans. Consolidation Loans: Available to eligible borrowers wanting to combine multiple federal student loans (including those listed above) into one loan. Repayment periods are extended up to a maximum of 30 years, thereby lowering monthly payments. Interest rates for these loans are tied to the Department of the Treasury’s 10-year note rate and can vary by loan type. In addition, there are limits on the annual and aggregate amounts that can be borrowed for certain loan types. After a prospective borrower applies for and is awarded a Direct Loan, Education disburses it through the borrower’s school. Once the loan is disbursed, it is assigned to one of nine loan servicers under contract with Education. These loan servicers are responsible for such activities as communicating with borrowers about the status of their loans, providing information about and enrolling borrowers in repayment plans, and processing payments. 
Once borrowers leave school, they are responsible for making payments directly to their assigned loan servicer. A variety of repayment plans are available to eligible Direct Loan borrowers, including Standard, Graduated, Extended, and several IDR plans. Borrowers are automatically enrolled in the Standard plan if they do not choose another option, and generally make fixed monthly payments over a period of 10 years. IDR plans can ease repayment burden by setting monthly loan payments based on a borrower’s income and family size and extending the repayment period up to 20 or 25 years, depending on the plan. Unlike Standard, Graduated, and Extended repayment plans, IDR plans offer loan forgiveness at the end of the repayment period and monthly payments may be as low as $0 for some borrowers. There are a variety of IDR plans, and these plans have differences in eligibility requirements, how monthly payment amounts are calculated, and repayment periods before potential loan forgiveness (see table 1). Application Process for Income-Driven Repayment Plans To participate in an IDR plan, borrowers must submit an application to their loan servicer that, among other things, includes information about their income, marital status, and family size (see table 2). According to Education, Education’s loan servicers review the information borrowers submit on their IDR applications to determine if borrowers are eligible for IDR plans. If the servicer determines that a borrower is eligible, it enrolls the borrower in an IDR plan and calculates the borrower’s monthly payment amount. To continue making monthly payment amounts based on income and family size, IDR borrowers must annually submit the IDR application form certifying their income and family size, which servicers then use to update monthly payment amounts. 
If a borrower’s income changes significantly prior to the borrower’s annual recertification date, the borrower can use the same application form to request a recalculation of the monthly payment amount (see fig. 1). However, borrowers are not required to report any such changes before their annual recertification date. If IDR borrowers do not have any discretionary income, their scheduled monthly payment amount is zero dollars (meaning they will not have to make a monthly loan payment until their discretionary income is high enough to warrant one). Scheduled monthly payments of zero dollars count as qualifying payments towards eventual loan forgiveness at the end of the 20- to 25-year repayment period. Borrowers who make monthly payments on IDR plans that are much lower than they would be under the Standard 10-year repayment plan for a long period of time may end up paying less than their original loan amount because their remaining loan balances may be forgiven. However, some borrowers on IDR plans will fully repay their loans before qualifying for forgiveness. Extending the repayment period may also result in some borrowers paying more interest over the life of the loan than they would under the 10-year Standard repayment plan. Standards and Guidance for Managing Risk of Fraud and Errors in Federal Programs Fraud in federal programs occurs when individuals or entities intentionally misrepresent themselves in order to benefit from the programs. Fraud poses a significant threat to the integrity of federal programs and erodes public trust in government. Federal programs are at risk for fraud when individuals have both the opportunity and incentive to commit fraud. Although the occurrence of one or more cases of fraud indicates there is a fraud risk, a fraud risk can exist even if fraud has not yet been identified or occurred. 
Proactive fraud risk management is meant to facilitate a program’s mission and strategic goals by ensuring that taxpayer dollars and government services serve their intended purposes. In July 2015, GAO issued the Fraud Risk Framework, which provides a comprehensive set of components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Framework recommends that to effectively manage fraud risks, managers should design and implement specific control activities to prevent and detect potential fraud, such as data analytics. After issuance of the Fraud Risk Framework, the Fraud Reduction and Data Analytics Act of 2015 was enacted to improve federal agency controls and procedures to assess and mitigate fraud risks, and to improve agencies’ development and use of data analytics for the purpose of identifying, preventing, and responding to fraud. The act requires agencies to establish financial and administrative controls that incorporate the Fraud Risk Framework’s leading practices. We previously reported that Education identified itself as subject to the act. Error also poses a risk to the integrity of federal programs. According to federal internal control standards, to maintain an effective internal control system, managers should use quality information to achieve agency objectives. This includes obtaining information from reliable sources that is reasonably free from errors and communicating it externally to achieve objectives. 
Indicators of Potential Fraud or Error in Income and Family Size Information Pose Risks to IDR Plans Over 95,000 IDR Plans Were Held by Borrowers Reporting No Income, but Data Suggests They May Have Had Enough Wages to Make Student Loan Payments Our analysis of Education’s IDR plan data and HHS’s NDNH wage data for borrowers who reported zero income found that about 95,100 approved IDR plans (11 percent of all IDR plans we analyzed) were held by borrowers who may have had sufficient wages to warrant a monthly student loan payment. These plans were held by about 76,200 unique borrowers who owed nearly $4 billion in outstanding Direct Loans as of September 2017. According to our analysis, 34 percent of these plans were held by borrowers who had estimated annual wages of $45,000 or more, including some with estimated annual wages of $100,000 or more (see fig. 2). Our results from matching the Education and HHS data indicate the possibility that some borrowers misrepresented or erroneously reported their income, highlighting the risk of potential fraud and errors in IDR plans. Borrowers may have a financial incentive to commit fraud to reduce their monthly payment amount and, by extension, possibly increase the amount of loan debt forgiven at the end of their repayment periods. However, we cannot determine whether fraud occurred through data matching alone. Where appropriate, we are referring these results to Education for further investigation. Among the 76,200 borrowers in our data matching results, it is possible that some accurately reported zero income even though they had wages reported in NDNH in the same quarter in which their IDR application was approved. For example, a borrower may have earned wages at the start or end of a quarter, but was not earning wages at the time of submitting the IDR application. 
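The data-matching check described above can be sketched as follows (a minimal illustration, not GAO's actual methodology; the field names, records, and the use of the plan's approval quarter as the matching key are all assumptions):

```python
# Sketch: flag IDR plans where the borrower reported zero income on the
# application but NDNH shows wages in the quarter the plan was approved.
# All identifiers and dollar amounts here are hypothetical.

idr_plans = [
    # (plan_id, reported_income, approval_quarter)
    ("P1", 0, "2017Q2"),
    ("P2", 0, "2017Q3"),
    ("P3", 28_000, "2017Q2"),
]

ndnh_wages = {
    # (plan_id, quarter) -> quarterly wages reported to NDNH
    ("P1", "2017Q2"): 11_500,  # roughly $46,000 annualized
    ("P3", "2017Q2"): 7_000,
}

def flag_zero_income_with_wages(plans, wages):
    """Return plan IDs where reported income is zero but NDNH wages exist
    in the approval quarter -- an *indicator* of potential fraud or error,
    not proof of either (e.g., wages may predate the application)."""
    flagged = []
    for plan_id, reported_income, quarter in plans:
        if reported_income == 0 and wages.get((plan_id, quarter), 0) > 0:
            flagged.append(plan_id)
    return flagged

print(flag_zero_income_with_wages(idr_plans, ndnh_wages))  # ['P1']
```

As the report notes, a match like this only surfaces cases for further investigation; a borrower could legitimately report zero income despite wages appearing in the same quarter.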
Conversely, our analysis cannot identify borrowers who may have earned additional taxable income that is not part of NDNH data, but should be included on IDR applications, such as income for individuals who are self-employed or receiving alimony. Regarding the potential for error, officials from Education and all four loan servicers we spoke with stated that it is possible that borrowers could incorrectly report that they had no taxable income. Officials from Education said, for example, that borrowers may misunderstand the question about taxable income on the IDR application, and one loan servicer, echoing this perspective, stated that some borrowers may mistakenly think that some of their income is nontaxable when it is in fact taxable. To examine how borrowers’ failure to report their income could affect the amount repaid to Education over the course of a year, we used Education’s online repayment estimator to illustrate how much hypothetical borrowers with different annual adjusted gross incomes would expect to pay under each IDR plan (see fig. 3). If a borrower at one of these income levels instead reported zero income on the IDR application, Education could lose thousands of dollars per borrower each year in student loan payments. Such a situation could also potentially increase the ultimate cost to the federal government and taxpayers for loan forgiveness because scheduled monthly payments of zero dollars count toward the borrower’s 20- or 25-year repayment period. Education May Miss Indicators of Potential Fraud or Error in Borrowers’ Family Sizes To examine the extent to which Education’s IDR plan data on family size may indicate potential fraud or error, we analyzed the family sizes for about 5 million IDR plans approved between January 1, 2016 and September 30, 2017. Of these plans, over 2.1 million (43 percent) were approved with a family size of one, meaning only the borrower was included (see fig. 4). 
In addition, over 2.6 million plans (52 percent) were approved with family sizes of two to five. At the high end of the spectrum, about 40,900 of the plans we analyzed (about 1 percent) were approved with family sizes of nine or more (see fig. 5). We consider IDR plans with family sizes of nine or more atypical or outliers because they comprise the top 1 percent of all family sizes in Education’s data. Of these plans, almost 1,200 had family sizes of 16 or more, including two plans held by different borrowers that were approved with a family size of 93. In total, the 40,900 plans approved with family sizes of nine or more corresponded to about 35,200 unique borrowers who owed almost $2.1 billion in outstanding Direct Loan debt as of September 2017. While IDR plans with family sizes of nine or more were atypical in our data and could indicate fraud or error, IDR plans with smaller or more typical family sizes could also pose problems. Borrowers may have a financial incentive to commit fraud because larger family sizes reported on the IDR application can reduce borrowers’ discretionary income and, by extension, their monthly payment amounts. Regarding the potential for error, officials from Education and all four loan servicers we spoke with said borrowers or loan servicers may inadvertently make mistakes related to family size. For example, officials from Education and one loan servicer said borrowers sometimes report inaccurate family sizes if they are confused about who to count as a member of their family. Officials from this loan servicer told us that a borrower initially applied for an IDR plan claiming a family size of five—himself and four other family members who were not his spouse or children. They said that during a subsequent phone call with loan servicer staff about the borrower’s loan, the borrower volunteered that the other members of his family did not live with him, meaning that for IDR purposes, he had a family size of one.
It is unclear whether this borrower may have misrepresented his family size to receive a lower monthly payment or did not understand the definition and reported it in error. In regard to loan servicer error, Education officials said that servicers may make mistakes when entering family sizes from paper applications into their computer systems or when determining the total family size because borrowers provide information on family members in up to three places on the application. To examine the effect of family size on monthly payment amounts in IDR plans, we used Education’s online repayment estimator to illustrate how much hypothetical borrowers with the same income but different family sizes would be expected to pay each month under certain IDR plans. We found that a hypothetical borrower with a family size of one and an adjusted gross income of $40,000 who enrolls in one of three IDR plans that base monthly payment amounts on 10 percent of discretionary income would have a monthly payment amount of $182 (see fig. 6). If this borrower instead reported a family size of two people, the monthly payment amount would decrease by $54, to $128. For each additional person, the monthly payment would decrease by $54. At a family size of five people, the borrower would have no monthly payment.

Weaknesses in Education’s Procedures to Verify Income-Driven Repayment Plan Information Reduce Its Ability to Detect Potential Fraud or Error, but Approaches Exist to Address Risks

Education Does Not Verify Borrower Reports of Zero Income and Has Limited Protocols for Verifying Borrower Family Size

Education does not have procedures to verify borrower reports of zero income nor, for the most part, procedures to verify borrower reports of family size, although there are approaches it could use to do so. Because income and family size are the basis for calculating borrowers’ monthly payment amounts for IDR plans, it is important that this information is accurate on IDR applications.
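The sensitivity of monthly payments to these two self-reported values can be seen in a minimal sketch of the formula behind the hypothetical $40,000 borrower discussed above. This is an illustration, not Education's repayment estimator: it assumes the 2018 HHS poverty guidelines for the 48 contiguous states and a plan pegged to 10 percent of discretionary income.

```python
# 2018 HHS poverty guidelines for the 48 contiguous states: $12,140 for a
# one-person family, plus $4,320 for each additional family member.
POVERTY_BASE = 12_140
POVERTY_STEP = 4_320

def monthly_payment(agi, family_size, share=0.10, multiplier=1.5):
    """Monthly payment under a plan charging `share` of discretionary income.

    Discretionary income is adjusted gross income minus 150 percent of the
    poverty guideline for the borrower's family size; it cannot go negative.
    """
    poverty_line = POVERTY_BASE + POVERTY_STEP * (family_size - 1)
    discretionary = max(0, agi - multiplier * poverty_line)
    return round(discretionary * share / 12)

for size in range(1, 6):
    print(size, monthly_payment(40_000, size))  # 182, 128, 74, 20, 0
```

Each additional family member raises the protected (non-discretionary) amount by 1.5 × $4,320 = $6,480 a year, which is why the payment falls by $54 per month per person; at a family size of five, the $40,000 income is entirely protected and the payment is zero.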
While Education instructs loan servicers to review tax returns and other documentation of taxable income that borrowers are required to provide, as previously discussed, borrowers are not required to provide documentation to support self-attestations of zero income or their family size on IDR applications. Officials from Education and all four loan servicers we spoke with said that servicers are generally instructed to take these self-attestations at face value. However, Education has limited, voluntary procedures for reviewing family size information submitted by borrowers. In 2016, Education implemented a voluntary procedure for loan servicers to contact borrowers who report changes in family size of four or more from one year to the next in order to verify the accuracy of the most recently reported family size. Education officials told us that servicers are not contractually required to follow this procedure. In addition, this procedure is not applicable to student loan borrowers when they initially apply for IDR plans. In October 2018, Education officials told us they began to follow up with loan servicers about family sizes of 20 or more in IDR program data to ensure these data match the family size information in the loan servicer systems from which they originated. Officials said that this process is to ensure that family size data were accurately transferred from servicers to Education. Borrowers are not contacted for verification of the information itself. Officials from Education and three of the four loan servicers we spoke with acknowledged that IDR plans are at risk for fraud or error because verification is generally not performed on borrower reports of zero income and borrower reports of family size. Officials from Education and two of the loan servicers also said that certain program requirements discourage borrowers from providing false information. 
For example, borrowers are required to sign the IDR form to certify that all provided information is true, complete, and correct, and the form warns borrowers that false statements or misrepresentations are subject to penalties including fines, imprisonment, or both. However, the extent to which this requirement may serve as a deterrent is unknown because Education has not assessed the risk of fraudulent reports on IDR applications. Moreover, Education officials told us that they were not aware of any IDR borrowers being investigated or facing penalties for providing false information on the IDR application. Officials from one loan servicer also said that borrowers may be deterred from falsely claiming zero income or misrepresenting their family size because they assume that servicers, acting on behalf of the government, can check the information on IDR applications. However, it is also possible that borrowers would assume that this self-reported information would not be routinely verified because the only documentation requirements discussed on the application relate to verifying taxable income. Education officials also said that the risk of borrowers providing inaccurate information on IDR applications must be balanced against the impact of adding verification procedures. They said additional procedures could make the already complex IDR application process more burdensome for borrowers to navigate and result in longer application processing times. While it is important to make IDR plans accessible to borrowers who could benefit from them, it is also important that Education design internal control activities to achieve program objectives and respond to risks, including addressing the risk of fraud and error in borrower self-reported information. GAO’s Fraud Risk Framework describes the importance of developing procedures for preventing, detecting, and responding to the risk of fraud in government programs. 
The risk of fraud exists when there is opportunity and incentive to commit it. The lack of verification of borrower reports of zero income and limited verification of borrower reports of family size on IDR applications creates the opportunity for borrowers to commit fraud. Because lower income and larger family sizes can reduce borrowers’ monthly payment amounts and, by extension, possibly increase the amount of loan debt forgiven at the end of their repayment periods, there is also an incentive for some borrowers to commit fraud. In regard to error, federal internal control standards state that agencies should obtain information from reliable sources that are reasonably free from error. Education officials and all four loan servicers told us that borrower reports of family size or zero income can be susceptible to error if, for example, borrowers misunderstand the definitions of these items on IDR applications. Addressing the risk of fraud and error would also help to minimize the costs associated with IDR plans that are passed on to the government and taxpayers. As more borrowers enter IDR plans, the costs of these plans—including loan forgiveness—increase for the government and taxpayers. Using data underlying the President’s fiscal year 2017 budget request, GAO previously reported that Education estimated Direct Loans repaid with IDR plans would cost the federal government about $74 billion over their repayment periods. In its fiscal year 2015-2019 strategic plan for Federal Student Aid, Education acknowledged that as IDR plans continue to grow in popularity, the cost of loan forgiveness could be a major issue for the federal government. Education can minimize the costs associated with IDR plans by ensuring payment amounts are based on accurate income and family size information.
Approaches Exist That Could Help Education Identify and Address Potential Fraud or Error in IDR Plans

Education has not fully leveraged available approaches to help detect and prevent fraud or error in IDR plans. Federal internal control standards call for agency management officials to identify, analyze, and respond to risks related to achieving program objectives, such as the risk of using potentially fraudulent or erroneous information about borrowers to calculate monthly payment amounts for student loans. Approaches, such as using data analytic practices and follow-up procedures, can help identify and address these risks. Two data analytic practices that can help identify such risks with respect to IDR plans are (1) anomaly detection to identify atypical or unusual information about borrowers and (2) data matching with outside data sources to verify information that borrowers provide. These practices, which can be used on their own or together, can help prevent fraud from occurring and detect potential fraud or error that may have occurred. Because data analytics alone may not be sufficient to determine whether fraud or error has occurred, follow-up procedures can then be used in the investigation and verification to make such determinations. A leading practice in data analytics in GAO’s Fraud Risk Framework is conducting data mining to identify suspicious activity or transactions, including anomalies, outliers, and other red flags in the data. Similar to our family size analysis, borrower-reported family sizes above a certain threshold on IDR applications could be flagged in loan servicers’ and Education’s data systems for further verification. Anomaly detection is used to a limited extent to identify errors in family size on IDR plans by one loan servicer and by Education. According to officials at Education and all four loan servicers we spoke with, anomaly detection is not used to systematically identify potentially fraudulent reports of family size.
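One way such a threshold check could work is a simple percentile flag, sketched below with invented plan records. This is illustrative only; neither Education nor its servicers necessarily structure their data this way, and the cutoff mirrors the top-1-percent threshold used in our family size analysis.

```python
# Hypothetical IDR plan records (100 plans, mostly small family sizes, with
# a few extreme values like the family size of 93 noted in our analysis).
plans = [{"plan_id": i, "family_size": s}
         for i, s in enumerate([1] * 60 + [2, 3, 4, 5] * 9 + [9, 12, 16, 93])]

def family_size_cutoff(plans, top_share=0.01):
    """Smallest family size that falls within the top `top_share` of plans."""
    sizes = sorted(p["family_size"] for p in plans)
    return sizes[int(len(sizes) * (1 - top_share))]

def flag_atypical(plans):
    """Return plans whose family size is in the top 1 percent, for review."""
    cutoff = family_size_cutoff(plans)
    return [p for p in plans if p["family_size"] >= cutoff]

flagged = flag_atypical(plans)
```

In practice the flag would route the plan to a follow-up procedure (for example, requesting documentation from the borrower) rather than rejecting it, since an atypical family size indicates risk, not proof, of fraud or error.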
Anomaly detection can also identify deviations from expected patterns in data over time. Because IDR borrowers are required to fill out applications annually, it would be possible to develop automated queries to look for unusual patterns in borrower-reported income and family size from one year to the next. Officials from Education and servicers described several patterns across applications that could indicate potential fraud, specifically large swings in income from one year to the next, reporting zero income for multiple years, and having a large family size but relatively low income. Another leading practice for data analytics in GAO’s Fraud Risk Framework is conducting data matching to verify key information, including self-reported data and information necessary to determine eligibility. The results of our analysis illustrate the usefulness of this technique to identify potential inconsistencies in the income information on IDR plans. Education does not have authority to access wage data from HHS’s NDNH or income data from the Internal Revenue Service (IRS) for the purpose of verifying IDR borrowers’ income information through data matching. However, private data sources are also available for data matching. We reported in 2016 on the benefits of government agencies using private data to address the risk of fraud. Moreover, some state agencies (such as those administering the Supplemental Nutrition Assistance Program) use a private, commercial verification service known as The Work Number® to help determine eligibility for government assistance. We reported in 2016 that 45 states used income information from The Work Number to help determine eligibility for food assistance benefits under the Supplemental Nutrition Assistance Program.
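A stripped-down version of this kind of data match might look like the following. The record layouts and borrower identifiers are invented for illustration; the actual Education and wage-data systems differ, and a match indicates only a potential inconsistency to investigate, not fraud.

```python
# Hypothetical records: IDR plans reporting zero income, and quarterly wage
# records keyed by the same borrower identifier and quarter.
idr_plans = [
    {"borrower": "A", "quarter": "2017Q1", "reported_income": 0},
    {"borrower": "B", "quarter": "2017Q1", "reported_income": 0},
]
wage_records = [
    {"borrower": "A", "quarter": "2017Q1", "wages": 14_000},
]

def flag_zero_income_with_wages(plans, wages):
    """Flag zero-income plans whose borrower had wages in the approval quarter."""
    totals = {}
    for w in wages:
        key = (w["borrower"], w["quarter"])
        totals[key] = totals.get(key, 0) + w["wages"]  # sum multiple employers
    return [(p["borrower"], totals[(p["borrower"], p["quarter"])])
            for p in plans
            if p["reported_income"] == 0
            and totals.get((p["borrower"], p["quarter"]), 0) > 0]

flag_zero_income_with_wages(idr_plans, wage_records)  # [("A", 14000)]
```

Borrower A reported zero income but had $14,000 in wages in the approval quarter and would be flagged for follow-up; borrower B had no wage records and would not be.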
Education may also be able to draw on follow-up procedures it has in place for verifying information submitted by students and their families when applying for federal student aid using the Free Application for Federal Student Aid (FAFSA). Education uses a process called “verification” to help identify and correct erroneous or missing information on the application to aid the department’s efforts to reduce improper payments of federal student aid. Each award year, a portion of FAFSA applications are selected for verification, and schools are required to work with the selected applicants to obtain documentation and confirm the accuracy of information provided on these applications. When selecting FAFSAs for verification, Education aims to select those applications with the highest statistical probability of error and the greatest impact of such error on award amounts. FAFSA applicants who are selected to verify their income for the 2018-2019 or 2019-2020 award years may provide a signed copy of their prior years’ tax returns. FAFSA applicants may also obtain documentation from the IRS through the IRS Data Retrieval Tool, an IRS tax return transcript, or an IRS Verification of Non-filing Letter. FAFSA applicants selected to verify their household size must provide a signed statement that provides the name, age, and relationship to the student of each person in the household. For IDR plans, Education could implement follow-up procedures for IDR applications it identifies as at risk for fraud or error and seek additional documentation from borrowers. For example, to verify reports of no income, borrowers could be asked to provide an IRS Verification of Non-filing Letter, documentation that the borrower recently lost a job, or documentation that shows income the borrower receives is nontaxable, such as public assistance benefits.
To verify family size, as is the case with FAFSA verification, borrowers could be asked to provide a signed statement with the names, ages, and relationship to the borrower of each family member. Another option might be to request that borrowers provide documentation showing that family members (other than the borrower’s spouse and children) receive mail at the borrower’s address as well as documentation of the financial support provided by the borrower. Such follow-up procedures would be consistent with federal internal control standards advising managers to design control activities to achieve program objectives and respond to risks.

Conclusions

While Income-Driven Repayment plans can help borrowers with limited incomes afford their monthly student loan payments, these plans can also result in high costs to the federal government and taxpayers. To minimize these costs, it is important that Education accurately determine monthly payment amounts under its IDR plans. Because these determinations are based on income and family size information that borrowers self-report, there is risk for potential fraud or error. Our data matching analysis showed, for example, that tens of thousands of borrowers who were not making monthly loan payments because they reported zero income on IDR applications may have had enough income to do so. Where appropriate, we are referring these borrowers to Education for further investigation. In addition, an increase in family size can cause a borrower’s payments to decrease, creating a potential incentive for fraud, and our analysis found atypically large family sizes that are generally not verified by Education. The results of our analyses highlight the risk for fraud or error, as well as weaknesses in Education’s procedures. In turn, the weaknesses we identified raise questions about the strength of Education’s institutional oversight of a major program involving hundreds of billions of dollars.
The fact that, cumulatively, the borrowers and their plans we reviewed owed over $6 billion in loans helps illustrate the risk of potential financial loss for the government from fraud or error absent comprehensive oversight. It is important for Education to take steps to obtain data to verify borrower reports of zero income and to implement other data analytic practices and follow-up procedures for verifying borrower-reported information. Such actions would help ensure that (1) IDR payment amounts are based on information that accurately represents a borrower’s situation and is free from fraud and error; and (2) the federal government’s fiscal exposure to IDR loans is safeguarded from the risk of loss. Implementing data analytic practices and follow-up procedures to review and verify borrower reports of zero income could help deter borrowers from inaccurately reporting zero income and detect those who have done so, either fraudulently or in error. Similarly, implementing practices and procedures to review and verify reported family sizes could further stem potential fraud or error. Without such changes, IDR plans will remain vulnerable to fraud and error, potentially raising program costs for the federal government and taxpayers.

Recommendations for Executive Action

We are making the following three recommendations to Education’s Federal Student Aid office: The Chief Operating Officer of Federal Student Aid should obtain data in order to verify income information for borrowers reporting zero income on IDR applications. For example, Education could pursue access to federal data sources or obtain access to an appropriate private data source. (Recommendation 1) The Chief Operating Officer of Federal Student Aid should implement data analytic practices, such as data matching, and follow-up procedures to review and verify that borrowers reporting zero income on IDR applications do not have sources of taxable income at the time of their application.
(Recommendation 2) The Chief Operating Officer of Federal Student Aid should implement data analytic practices, such as data mining, and follow-up procedures to review and verify family size entries in IDR borrower applications. For example, Education could review and verify all borrower reports of family size or a subset identified as being most susceptible to fraud or error. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to the Departments of Education (Education) and Health and Human Services (HHS) for review and comment. HHS provided technical comments, which we incorporated as appropriate. We also provided relevant report sections to the Social Security Administration and the four loan servicers included in our review for technical comments. Loan servicers provided technical comments, which we addressed as appropriate. Education generally agreed with our recommendations, stating that it plans to implement significant additional verification policies to ensure that borrowers who participate in IDR plans do not misrepresent their income or family size to the department. While Education agreed with our recommendation to obtain data in order to verify income for borrowers reporting zero income, it suggested that GAO may wish to convert this recommendation to a Matter for Congressional Consideration to provide Education with access to IRS data. In its response, Education stated that the President’s fiscal year 2020 budget request includes a proposal that Congress pass legislation allowing the IRS to disclose tax return information directly to the department for the purpose of administering certain federal student financial aid programs.
According to Education, such legislation, if enacted, would allow borrowers to more easily certify their income on an annual basis to maintain enrollment in IDR plans, and allow the department to use the information to mitigate improper payments to borrowers as a result of misreported income data. Education also stated that in the meantime, it would explore whether commercially available data are sufficient in terms of scope, reliability, and cost effectiveness. Given that there are existing actions Education can take to implement our recommendation, we believe our recommendation is appropriate. Moreover, we believe that Education is best positioned to determine whether the proposal, if enacted, would address our recommendation, or if it would need to be expanded or modified in order to do so. Regarding our second recommendation, Education stated that it would develop data analytic practices to verify borrower reports of zero income contingent upon the enactment of legislation providing the department with access to federal income data. However, implementing our recommendation does not necessarily require Education to wait for such legislation. Our draft report describes data analytic practices, such as anomaly detection, which Education could implement using its own data to identify deviations from expected patterns in data over time. Education also stated that it plans to develop additional follow-up procedures to verify borrower reports of zero income, such as requiring borrowers to substantiate reports of zero income with appropriate documentation. In addition, Education described plans to formalize procedures to make referrals to Education’s Office of Inspector General or the Department of Justice for suspected cases of IDR fraud. We encourage Education to combine its follow-up procedures with data analytic practices to satisfy the recommendation. 
Education agreed with our third recommendation to implement data analytic practices and follow-up procedures to verify family size, noting that this information could be subject to misrepresentation or erroneous reporting by borrowers. Education stated that it would review various data points that can be used to select IDR applications and certifications for additional review prior to approval, such as providing more scrutiny when borrowers report unusual increases in family size from one year to the next. Education also stated that it plans to formalize additional procedures to require certain borrowers to substantiate their family size. For example, Education will consider requiring IDR applicants to provide statements listing each household member and how they are related to the borrower. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees, the Secretary of Education, the Chief Operating Officer of Federal Student Aid, and other relevant parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (617) 788-0534 or emreyarrasm@gao.gov or (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) whether there are indicators of potential fraud or error in income and family size information provided by borrowers seeking to repay their loans with Income-Driven Repayment (IDR) plans and (2) the extent to which the Department of Education (Education) verifies this information.
To address these questions, we reviewed relevant IDR policies and procedures from Education and its four largest loan servicers—Navient, Nelnet, Great Lakes Educational Loan Services, Inc., and the Pennsylvania Higher Education Assistance Agency. We selected these loan servicers because, at the time of our analysis, together they serviced 96 percent of the outstanding balance of loans being repaid with IDR plans as of September 2017. We also interviewed Education officials from Federal Student Aid, the office responsible for developing policies and procedures for administering IDR plans and overseeing how loan servicers carry them out, as well as the officials from the selected loan servicers. Additionally, we reviewed relevant federal laws and regulations and Education’s procedures for verifying information on the Free Application for Federal Student Aid. We assessed Education’s procedures against federal standards for internal control for developing sufficient control activities, risk assessment, and information and communication. We also assessed Education’s procedures against the leading practices for data analytics activities in GAO’s Framework for Managing Fraud Risks in Federal Programs. To determine whether there were indicators of potential fraud or error in borrowers’ income and family size information on IDR plans, we obtained data from Education’s Enterprise Data Warehouse and Analytics (EDWA) database on borrowers with William D. Ford Federal Direct Loans (Direct Loans) and IDR plans approved between January 1, 2016 and September 30, 2017, the most recent data available at the time of our analysis. EDWA is a centralized data warehouse that contains administrative data reported by loan servicers on IDR borrowers and their loans. Some borrowers had multiple approved IDR plans in the data we analyzed. We also obtained national quarterly wage data from the U.S. 
Department of Health and Human Services’ (HHS) National Directory of New Hires (NDNH) for the same time period. NDNH is a national repository of information reported by employers, states, and federal agencies. The NDNH is maintained and used by HHS for the federal child support enforcement program, which assists states in locating parents and enforcing child support orders. In addition to information on newly hired employees, NDNH contains (1) data on quarterly wages for existing employees, collected and reported by state workforce agencies and federal agencies; and (2) data on all individuals who apply for or received unemployment compensation, as maintained and reported by state workforce agencies. For our analysis of borrower-reported incomes, we matched approximately 656,600 Education borrowers to NDNH quarterly wage data to determine if any borrowers who reported zero income on their IDR applications had wages reported in the same quarter in which their IDR plans were approved. We took additional steps to further review and refine these matches and provide reasonable assurance that the NDNH wage data were associated with the correct borrower by comparing (1) the borrower’s state of residence as reported in the Education data to the state agency submitting the NDNH wage data and (2) the borrower’s name as reported in the Education data to the employee name reported in the NDNH data. For the refined matches, we then estimated whether the borrowers may have had sufficient annual wages based on wages reported in NDNH to potentially warrant monthly student loan payments greater than zero dollars on their associated IDR plan. Specifically, we aggregated all NDNH wages reported for the borrower in the quarter in which their IDR plan was approved to determine a total quarterly wage amount. We then multiplied the total quarterly wage amount by four—the number of quarters in a calendar year—to generate an estimate of annual wages for the borrower. 
Our approach was based on the methodology Education instructs loan servicers to use to calculate annual wages when borrowers provide an alternative to a tax return to document their income on IDR applications. This methodology may understate or overstate income given that borrowers may not have earned the same amount in each of the four quarters. Our estimates of annual wages are based on the wages reported in NDNH for each borrower and do not take into account any pre-tax deductions that may apply when determining IDR payments. Our estimates of annual wages also do not include borrowers’ spousal income or any other taxable income for the borrower that is not included in the NDNH quarterly wage data—such as unemployment compensation received or unearned income such as alimony. We did not independently verify the wages reported in NDNH or the actual total annual income earned by borrowers identified in our match, as this was outside the scope of our review. Using the estimated annual wage, we then determined whether a borrower would have had a monthly payment greater than zero by using Education’s IDR plan repayment calculations for each IDR plan. To calculate the monthly payment, we used (1) the estimated annual wage from our NDNH data analysis; (2) the family size reported on the borrower’s approved IDR plan; (3) the borrower’s approved IDR plan type; and (4) the relevant percentage of the HHS poverty guideline amount for the borrower’s family size, state of residence, IDR plan approval year, and IDR plan type. For borrowers on Income-Based Repayment, New Income-Based Repayment, Pay As You Earn, and Revised Pay As You Earn plans, we rounded all calculated monthly payments that were less than $5 down to zero, in accordance with Education’s repayment calculations. We then identified which borrowers had calculated payments that were greater than zero. We did not determine the actual repayment amount borrowers may have had, as this was outside the scope of our review. 
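The annualization and payment calculation just described can be sketched as follows. This is a simplified illustration assuming a single poverty guideline value ($12,060, the 2017 HHS guideline for a one-person family in the 48 contiguous states); the actual guideline amount varies by year, family size, and state, and the relevant percentage varies by plan type.

```python
def estimate_annual_wages(quarterly_wage_records):
    """Sum all wages reported for the approval quarter, then multiply by
    four (the number of quarters in a calendar year)."""
    return sum(quarterly_wage_records) * 4

def calculated_monthly_payment(annual_wages, poverty_guideline,
                               share=0.10, multiplier=1.5):
    """Monthly payment for a plan pegged to 10 percent of discretionary
    income; calculated payments under $5 are rounded down to zero, in
    accordance with Education's repayment calculations."""
    discretionary = max(0.0, annual_wages - multiplier * poverty_guideline)
    payment = discretionary * share / 12
    return 0.0 if payment < 5 else payment

# A borrower with two wage records in the approval quarter:
annual = estimate_annual_wages([5_000, 3_000])   # $32,000 estimated annual
calculated_monthly_payment(annual, 12_060)       # about $116 per month
```

A borrower with this payment greater than zero, despite a scheduled payment of zero dollars, would be counted in our matching results; as noted above, the times-four annualization can understate or overstate income when quarterly earnings are uneven.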
Finally, for borrowers for whom we had calculated a payment greater than zero, we determined the total outstanding Direct Loan balance (principal and accrued interest) as of September 2017, based on EDWA data. For our analysis of borrower-reported family sizes, we analyzed the overall distribution of family sizes reported on approximately 5 million approved IDR plans. We reviewed the percentile distribution for family size on all IDR plans in our analysis and identified those in the top 1 percent of the data—in this case, IDR plans that had a reported family size of nine or more. We defined these IDR plans as having atypical family sizes for the Education data. We did not independently verify the family size reported on the IDR plans. For the borrowers with family sizes of nine or more, we determined the total outstanding Direct Loan balance (principal and accrued interest) as of September 2017. To examine the effects of borrowers inaccurately reporting income and family size on loan payment amounts, we analyzed the estimated monthly loan payment amounts for various hypothetical repayment scenarios from Education’s online repayment estimator as of January 2019, which used the 2018 HHS poverty guidelines. To examine the effect of various family sizes on loan payment amounts, we assumed a hypothetical borrower lived in the continental United States; had an adjusted gross income of $40,000; an outstanding Direct Loan balance of $30,000 (close to the average outstanding Direct Loan balance of $33,600 as of September 2018); and an interest rate of 5.1 percent (the Direct Loan 2018-2019 interest rate for an undergraduate borrower). To examine the effect of various incomes on monthly payment amounts, we assumed hypothetical borrowers had adjusted gross incomes based on estimated annual wages common in our data matching analysis ($30,000, $45,000, and $60,000), a family size of one (meaning just the borrower), and lived in the continental United States. 
For this analysis, we also assumed hypothetical borrowers had an interest rate of 5.1 percent and an outstanding Direct Loan balance of $50,000, which we selected to be high enough to qualify these hypothetical borrowers for all IDR plans at each of the selected income levels. To assess the reliability of the EDWA data, we reviewed documents related to the database and Education loan data generally; interviewed knowledgeable Education officials; performed electronic testing to determine the validity of specific data elements that we used to perform our work; compared the data we received to published Education data on the number of IDR borrowers and amount of their outstanding loans; and compared borrowers’ personal information to the Social Security Administration’s Enumeration Verification System to identify borrowers whose information may not have been accurate. As part of our reliability assessment of the EDWA data, we selected a nongeneralizable sample of 16 borrowers and their IDR plan and loan information from the EDWA data to compare against four selected loan servicers’ records. Specifically, we stratified borrowers into two groups based on common and potentially outlying incomes and family sizes in the EDWA data. We then randomly selected two borrowers from each stratum for each of the four selected loan servicers (a total of four borrowers per loan servicer). We reviewed all IDR plan data in our scope for each selected borrower, including the plan type, family size, income, and total monthly payment. We did not review original documents, such as the IDR applications or documentation of income. We discussed the results of our review with knowledgeable Education and loan servicer officials to gain additional understanding of each selected borrower’s IDR plan information as well as any differences between EDWA and loan servicer data. 
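The family-size outlier screen described earlier (flagging plans in the top 1 percent of reported family sizes) can be sketched as a percentile cut. The nearest-rank method and the sample data below are our own illustrative choices; the report's actual threshold was a family size of nine or more.

```python
# Illustrative sketch of the top-1-percent family-size screen; the
# percentile method and sample data are assumptions for illustration.
import math

def percentile_threshold(values, pct):
    """Value at the given percentile, using the nearest-rank method."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

def flag_atypical_family_sizes(family_sizes, pct=99):
    """Return the cut value and the family sizes at or above it."""
    cut = percentile_threshold(family_sizes, pct)
    return cut, [s for s in family_sizes if s >= cut]

# Example: 98 typical plans plus two outliers (9 and 93, mirroring the
# extremes mentioned in the report).
sizes = [3] * 98 + [9, 93]
cut, atypical = flag_atypical_family_sizes(sizes)
```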
We originally obtained EDWA data on approximately 6.5 million IDR plans approved between January 1, 2016 and September 30, 2017 that were held by almost 4.8 million Direct Loan borrowers. Based on data reliability issues we identified during our review, we had to limit the scope of our analysis to a subset of EDWA data that we determined were sufficiently reliable for our purposes. Education officials disclosed issues that impacted the IDR plan data reported to Education by one of its loan servicers. Specifically, Education and the loan servicer had identified instances where the loan servicer’s internal data were changed for valid reasons but the changes were not reported to Education correctly. As a result, we excluded data reported by this servicer from all analyses in our report. We also identified issues with monthly payment amounts for some borrowers in the EDWA data. Accordingly, we limited our borrower-reported income analysis to borrowers who reported zero income and had a scheduled monthly payment of zero dollars. Ultimately, we analyzed about 878,500 IDR plans held by about 656,600 borrowers for our income analysis and approximately 5 million IDR plans held by 3.5 million borrowers for our family size analysis. Consequently, the results of our overall income and family size analyses may be understated and are not generalizable to all IDR plans and borrowers. Consistent with our report scope, our analyses of borrower-reported income focused on identifying indications of potential fraud or error; however, our analyses do not show that fraud or error occurred. It is not possible to determine whether fraud or error occurred through data matching alone. As previously discussed, our estimates of annual wages are based on the NDNH quarterly wage data, and do not take into account any deductions that may be applicable for determining adjusted gross income, which is used to determine IDR plan payment amounts.
As a result, our estimates could overstate borrowers’ incomes for IDR plan purposes. Additionally, wages are reported in NDNH quarterly, so we are not able to determine when in a quarter a borrower earned wages. For example, a borrower may have earned wages at the start or end of a quarter, but was not earning wages at the time of submitting the IDR application. Because borrowers are only required to certify their income annually, such a scenario would not constitute fraud or error even though it would result in a match in our analysis. In addition, our use of Education’s methodology to annualize wages based on quarterly wages may understate or overstate income if a borrower did not earn wages at the same level over the entire year. We are also not able to identify additional taxable income that is not reported to NDNH but should have been included on borrowers’ IDR applications, which could understate borrowers’ incomes. Consequently, our analysis may overstate or understate the number of borrowers who reported no income on their IDR application yet may have had sufficient wages to warrant a monthly student loan payment. To assess the reliability of the NDNH data, we reviewed documents related to the database, interviewed knowledgeable HHS officials, and performed electronic testing to determine the validity of specific data elements in the NDNH data that we used to perform our work. On the basis of our own reliability assessment results, we determined that the NDNH data were sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2017 to June 2019, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the U.S. Department of Education

Appendix III: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Debra Prescott and Philip Reiff (Assistant Directors), Nancy Cosentino and Mariana Calderón (Analysts-in-Charge), Sarah Cornetto, Jeffrey G. Miller, and Rachel Stoiko made key contributions to this report. Additional assistance was provided by Susan Aschoff, David Ballard, Deborah Bland, Benjamin Bolitzer, Melinda Cordero, Vijay D’Souza, Kevin Daly, Angie Jacobs, Candace Silva-Martin, Sheila R. McCoy, Maria McMullen, Kevin Metcalfe, John Mingus, Drew Nelson, Mimi Nguyen, Matt Valenta, and Ariel Vega.
Why GAO Did This Study As of September 2018, almost half of the $859 billion in outstanding federal Direct Loans was being repaid by borrowers using IDR plans. Prior GAO work found that while these plans may ease the burden of student loan debt, they can carry high costs for the federal government. This report examines (1) whether there are indicators of potential fraud or error in income and family size information provided by borrowers on IDR plans and (2) the extent to which Education verifies this information. GAO obtained Education data on borrowers with IDR plans approved from January 1, 2016 through September 30, 2017, the most recent data available, and assessed the risk for fraud or error in IDR plans for Direct Loans by (1) matching Education IDR plan data for a subset of borrowers who reported zero income with wage data from NDNH for the same time period and (2) analyzing Education IDR plan data on borrowers' family sizes. In addition, GAO reviewed relevant IDR policies and procedures from Education and interviewed officials from Education. What GAO Found GAO identified indicators of potential fraud or error in income and family size information for borrowers with approved Income-Driven Repayment (IDR) plans. IDR plans base monthly payments on a borrower's income and family size, extend repayment periods from the standard 10 years to up to 25 years, and forgive remaining balances at the end of that period. Zero income. About 95,100 IDR plans were held by borrowers who reported zero income yet potentially earned enough wages to make monthly student loan payments. This analysis is based on wage data from the National Directory of New Hires (NDNH), a federal dataset that contains quarterly wage data for newly hired and existing employees. According to GAO's analysis, 34 percent of these plans were held by borrowers who had estimated annual wages of $45,000 or more, including some with estimated annual wages of $100,000 or more. 
Borrowers with these 95,100 IDR plans owed nearly $4 billion in outstanding Direct Loans as of September 2017. Family size. About 40,900 IDR plans were approved based on family sizes of nine or more, which were atypical for IDR plans. Almost 1,200 of these 40,900 plans were approved based on family sizes of 16 or more, including two plans for different borrowers that were approved using a family size of 93. Borrowers with atypical family sizes of nine or more owed almost $2.1 billion in outstanding Direct Loans as of September 2017. These results indicate some borrowers may have misrepresented or erroneously reported their income or family size. Because income and family size are used to determine IDR monthly payments, fraud or errors in this information can result in the Department of Education (Education) losing thousands of dollars of loan repayments per borrower each year and potentially increasing the ultimate cost of loan forgiveness. Where appropriate, GAO is referring these results to Education for further investigation. Weaknesses in Education's processes to verify borrowers' income and family size information limit its ability to detect potential fraud or error in IDR plans. While borrowers applying for IDR plans must provide proof of taxable income, such as tax returns or pay stubs, Education generally accepts borrower reports of zero income and borrower reports of family size without verifying the information. Although Education does not currently have access to federal sources of data to verify borrower reports of zero income, the department could pursue such access or obtain private data sources for this purpose. In addition, Education has not systematically implemented other data analytic practices, such as using data it already has to detect anomalies in income and family size that may indicate potential fraud or error. 
Although data matching and analytic practices may not be sufficient to detect fraud or error, combining them with follow-up procedures to verify information on IDR applications could help Education reduce the risk of using fraudulent or erroneous information to calculate monthly loan payments, and better protect the federal investment in student loans. What GAO Recommends GAO recommends that Education (1) obtain data to verify income information for borrowers who report zero income on IDR plan applications, (2) implement data analytic practices and follow-up procedures to verify borrower reports of zero income, and (3) implement data analytic practices and follow-up procedures to verify borrowers' family size. Education generally agreed with our recommendations.
Background Overview of Coast Guard’s Federal Fixed and Floating ATON Through its ATON mission, the Coast Guard promotes safe waterways and an efficient Marine Transportation System. The Coast Guard has statutory responsibility to operate and maintain a system of maritime aids to facilitate navigation and to prevent disasters, collisions, and wrecks. To fulfill this mission, the Coast Guard operates and maintains ATON that are placed along coasts and navigable waterways as guides to mark safe water and to assist mariners in determining their position in relation to land and hidden dangers. As mentioned earlier, this report focuses on two categories of ATON: fixed ATON that include lighthouses, towers, and other structures that are directly affixed to the ground or seabed; and floating ATON that include buoys and markers anchored to the sea bed by a concrete or metal sinker connected by a metal chain or mooring. See figures 1 and 2 for examples of fixed and floating ATON. The Coast Guard uses several types of vessels to place and service fixed and floating ATON. These ATON vessels include buoy tenders, construction tenders, and boats. As of October 2019, the Coast Guard had a fleet of 79 ATON cutters and 190 ATON boats—which varied in size from a 240-foot Great Lakes Icebreaker to 16-foot ATON boats. (See appendix I for additional details on the Coast Guard’s fleet of ATON vessels.) ATON Program Management The Coast Guard’s ATON program consists of several offices and units that work together to carry out the ATON mission: Office of Navigation Systems: Based at Coast Guard headquarters in Washington, D.C., the primary ATON-related roles and responsibilities of Office of Navigation Systems officials include providing oversight and approval for ATON operations and policy. 
Specifically, the Aids to Navigation and Positioning, Navigation, and Timing Division within the Office of Navigation Systems is responsible for establishing requirements and policy; providing program level guidance; and coordinating processes, platforms, and personnel necessary to establish, maintain, and operate the U.S. ATON system. Office of Civil Engineering: Based at Coast Guard headquarters in Washington, D.C., the primary ATON-related roles and responsibilities of Office of Civil Engineering officials include providing oversight and approval for ATON engineering and logistics policy, including supervision of the Shore Infrastructure Logistics Center. Shore Infrastructure Logistics Center (SILC): Based in Norfolk, VA, SILC supervises the Civil Engineering Units that execute fixed ATON depot-level maintenance and recapitalization projects; as well as the Waterways Operations Product Line. Waterways Operations Product Line (WOPL): A division of the Coast Guard’s Shore Infrastructure Logistics Center, WOPL was established by the Coast Guard in 2016 with the goal of serving as the focal point for implementing engineering and logistics solutions for ATON in order to enhance the mission while reducing costs. To do this, WOPL is to support the ATON mission by providing centralized guidance and oversight covering such issues as ATON acquisition, ATON configuration management (the proper mix of ATON) across the Coast Guard’s nine districts, ATON production and delivery, and ATON logistics and maintenance for the Coast Guard-wide inventory of ATON equipment and systems. WOPL’s support encompasses the entire lifecycle of ATON equipment and systems, from acquisition through disposal. Coast Guard Districts and Sectors: The Coast Guard has nine districts, which have overall responsibility for administration of the ATON within their district. 
Each district oversees the coordination of operations at the sectors and individual ATON units, which includes cutters, boats, and Aids to Navigation Teams. Figure 3 shows a map of the Coast Guard’s nine districts and the numbers of fixed and floating ATON in each district as of November 2019. ATON Servicing and Maintenance Procedures The ATON units are responsible for the servicing and maintenance of ATON by conducting both routine servicing based on the last-service dates of the ATON and non-routine servicing of ATON within their area of responsibility. The non-routine servicing process includes responding to and addressing discrepant ATON, which are aids that are not functioning properly due to, for example, a weather-related event such as a hurricane, or an equipment failure. Timely response to and correction of discrepant ATON is a high-priority task for the Coast Guard. According to internal guidance, the Coast Guard has a tiered approach to address ATON discrepancies that accounts for the importance of the ATON relative to the waterway and the nature of the discrepancy. In particular, according to Coast Guard guidance, the servicing unit response ranges from immediately after notification up to 72 hours or as soon thereafter as weather and resources permit. In some cases, the determining factors do not require responding within 72 hours and the servicing unit is to advise the district of future plans to correct the discrepancy. Coast Guard guidance states that during the routine servicing process for floating ATON (buoys), the primary purpose of the ATON units is to check the buoys’ positions, their condition, and ensure the correct operation of the buoys’ signal hardware. As part of this process, the Coast Guard may extract the buoys from the water and bring them onboard an ATON vessel to check the condition of their mooring chain, hull, and lighting equipment. 
If necessary, the mooring chains are cleaned and repaired and non-functioning lanterns (lights) are replaced. After the planned repairs are made, the buoys are placed back in their assigned position in the water. See figure 4 for an example of the process used by an ATON unit to service a steel buoy. When ATON units conduct routine or non-routine servicing of fixed and floating ATON, they also collect data on the condition of the ATON. These data provide a “snapshot” of the ATON’s condition at the time of servicing and include the aid’s geographic position; the last date that the ATON was serviced; the next-scheduled service date; and other detailed information about the aid, such as an assessment of the physical integrity of the ATON. If warranted, ATON units can initiate action to repair or replace the ATON. The information gathered by ATON units during their servicing activities is entered into a Coast Guard database—the Integrated ATON Information System (I-ATONIS)—that is used to track and monitor fixed and floating ATON. A hardcopy record containing detailed information about each aid is subsequently generated from I-ATONIS and stored in local unit files to track and schedule future fixed and floating ATON servicing dates.
Specific to repairs and recapitalization of fixed ATON, in fiscal year 2018, $300,000 was allocated from Procurement, Construction, and Improvement funding while $10 million was allocated from the Coast Guard’s Operations and Support funds for depot-level ATON maintenance. Prior Work on Coast Guard Management of Shore Infrastructure We previously reported on the Coast Guard’s management and maintenance of its shore infrastructure, which—in addition to fixed and floating ATON—encompasses over 20,000 shore facilities such as piers, docks, boat stations, air facilities, and housing units at more than 2,700 locations. In July 2018, we found that the Coast Guard had not been able to address many shore infrastructure projects, primarily due to lack of funding, longstanding acquisition management challenges, and that previous Coast Guard leadership prioritized the acquisition of new operational assets to replace aging vessels and aircraft over maintaining and repairing shore infrastructure. We recommended, among other things, that the Coast Guard’s annual Capital Investment Plans reflect acquisition trade-off decisions and their effects. The Coast Guard agreed with this recommendation, and estimated implementing actions by March 2020. In February 2019, we found that almost half of the Coast Guard’s shore infrastructure is beyond its service life, and its current backlogs of maintenance projects, as of 2018, will cost at least $2.6 billion to address. We found that the Coast Guard’s process to manage its shore infrastructure recapitalization and deferred maintenance backlogs did not fully meet 6 of 9 leading practices we previously identified for managing public sector maintenance backlogs. 
We recommended, among other things, that the Coast Guard establish shore infrastructure performance goals, measures, and baselines to track the effectiveness of maintenance and repair investments and provide feedback on progress made; develop and implement a process to routinely align Coast Guard’s shore infrastructure portfolio with mission needs, including by disposing of all unneeded assets; and employ models for its asset lines for predicting the outcome of investments, analyzing trade-offs, and optimizing decisions among competing investments. The Coast Guard agreed with our recommendations and is taking steps to implement them. The Condition of Fixed and Floating ATON Declined Slightly, While the Costs for Repairing and Replacing Them Increased in Recent Years The Condition of the Coast Guard’s Fixed and Floating ATON Declined Slightly from Fiscal Years 2014 through 2018 The condition of fixed and floating ATON Coast Guard-wide declined slightly from fiscal years 2014 through 2018, as determined by the Coast Guard’s key ATON condition metric. In particular, according to data provided by the Coast Guard, the aid availability rate—the percentage of time ATON are functioning correctly—declined from 98.0 percent in fiscal year 2014 to slightly below the Coast Guard’s performance target of 97.5 percent in fiscal years 2017 (97.4 percent) and 2018 (97.1 percent), as shown in figure 5. While the aid availability rate metric indicates that the condition of fixed and floating ATON Coast Guard-wide declined slightly from fiscal year 2014 through fiscal year 2018, other factors—such as the age of many ATON—have contributed to more significant declines in the condition of ATON for some locations. For example, an internal Coast Guard report states that, as of 2018, nearly a quarter (24 percent) of all floating ATON and over half (59 percent) of all fixed ATON are operating past their designed service lives.
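The report does not give the exact formula behind the aid availability rate, so the following is only a plausible sketch assuming the rate is functioning aid-hours divided by total aid-hours in a period, compared against the 97.5 percent target:

```python
# Hypothetical sketch: availability assumed to be functioning aid-hours
# over total aid-hours; the 97.5 percent target comes from the report.

def aid_availability_rate(total_aid_hours, discrepant_hours):
    """Percent of time aids were functioning correctly."""
    return 100 * (total_aid_hours - discrepant_hours) / total_aid_hours

def meets_target(rate, target=97.5):
    """Check the rate against the Coast Guard's performance target."""
    return rate >= target

# Example: 1,000 aids tracked over a year (8,760 hours each), with
# 254,040 aggregate discrepant hours, roughly the fiscal year 2018 rate.
rate = aid_availability_rate(1_000 * 8_760, 254_040)
```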
On a district level, the conditions of fixed and floating ATON differ from one geographical area to the next, and varying weather conditions often have an impact on the physical condition of ATON. For example, the frigid weather conditions of the Great Lakes in certain months frequently degrade the condition of both fixed and floating ATON. Coast Guard officials stated that ATON with large steel hulls often cannot withstand the pressure and weight of ice that can form on them in the winter months. They also stated that the icy waters delay routine servicing trips for personnel to adequately address ATON, which can contribute to the deterioration of the aids. In District 8’s area of responsibility, which includes much of the Gulf of Mexico, Coast Guard officials said that severe storms and hurricanes can adversely impact the condition of fixed and floating ATON and delay servicing trips for safety reasons. Extended exposure to saltwater is another factor that contributes to the degraded condition of ATON in District 8 and elsewhere, as water salinity often corrodes the hulls of steel buoys. In addition to weather, geographic factors can affect the condition of ATON as well. Coast Guard officials in District 1 (headquartered in Boston) stated that the hard, rocky coast in their district makes it difficult to secure fixed ATON structures to the seabed. As a result, this district requires a higher percentage of floating ATON to mark the location of these hazards and these floating ATON are often damaged by the rocks. See figure 6 for examples of the deteriorating condition of some fixed and floating ATON. Total ATON Repair and Recapitalization Costs Increased During Fiscal Years 2014 through 2018 Our analysis of Coast Guard data shows that the Coast Guard’s overall repair and recapitalization expenditures for fixed and floating ATON increased during fiscal years 2014 through 2018.
Specifically, our analysis of Coast Guard data shows that total ATON repair and recapitalization costs increased from about $12 million in fiscal year 2014 to about $20 million in fiscal year 2018. As shown in figure 7, the majority of the costs for fixed ATON were spent on repairs whereas the majority of the costs for floating ATON were spent on recapitalizations. The Coast Guard Has Faced Challenges in Managing ATON and Has Plans and Initiatives to Address Them, but Has Limited Assurance That They Will Be Effectively Implemented The Coast Guard Has Faced a Variety of Challenges in Managing its Fixed and Floating ATON Availability of ATON Cutters and Boats According to Coast Guard documents, data, and officials, the Coast Guard has faced a variety of challenges in managing its fixed and floating ATON. The reported challenges include the availability of ATON vessels, difficulty in conducting routine ATON servicing in a timely manner, and capacity limits at ATON major repair and refurbishment facilities. Our analyses of Coast Guard data on maintenance required of ATON cutters and boats during fiscal years 2014 through 2018 show that ATON cutter and boat availability varied by type and across classes. As described below, our data analyses showed that 10 of the 12 ATON cutter classes consistently met availability targets, whereas 4 of the 7 classes of ATON boats consistently met availability targets. The Coast Guard determines the condition of its ATON cutters and boats using the following measures—planned and unplanned maintenance days, maintenance hours, and achieved material availability rate. Specifically, Planned maintenance days are the number of days that a vessel is not mission capable due to scheduled maintenance. This measure is applicable to cutters. Unplanned maintenance days are the number of days that a vessel is not mission capable due to unforeseen maintenance issues and associated repair efforts. This measure is applicable to cutters. 
Maintenance hours are the total number of hours that a vessel spent in maintenance, including both planned and unplanned maintenance. This measure is applicable to boats. Achieved material availability rate is calculated based on a vessel’s availability and performance. For cutters, the target availability rate range is between 53 percent and 65 percent. For boats, the target availability rate is 80 percent. According to our analysis of Coast Guard data, the number of maintenance days for ATON cutters generally decreased during fiscal years 2014 through 2018, as shown in Figure 8. In addition, our analysis shows that the biggest decrease was with planned maintenance days. The Coast Guard has established a target range for the achieved material availability rate for ATON cutters that includes a minimum rate of 53 percent to a maximum rate of 65 percent. According to our analyses of Coast Guard data, the achieved material availability rate for the ATON cutters varied by cutter class during fiscal years 2016 through 2018, with 10 of the 12 cutter classes having met or exceeded the minimum target material availability rate for all 3 years and the remaining 2 ATON cutter classes having met or exceeded the minimum target material availability rate for 2 of the 3 years analyzed. While most of the ATON cutters met Coast Guard availability rate targets during fiscal years 2016 through 2018, officials in 7 of the 9 districts noted that some older ATON cutters can take longer to repair because of old and obsolete equipment and the lack of available parts, which decreases their availability to conduct missions. Figure 9 shows the achieved material availability rate for ATON cutters for fiscal years 2016 through 2018. Our analysis of Coast Guard data found that the total number of maintenance hours for ATON boats generally decreased during fiscal years 2014 through 2018, although there was an increase from fiscal year 2017 to 2018.
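The comparisons against these availability targets can be expressed as small checks; the target values come from the report, while the helper names and example rates are our own:

```python
# Target values from the report; helper names are illustrative.
CUTTER_TARGET_MIN, CUTTER_TARGET_MAX = 53, 65   # percent range for cutters
BOAT_THRESHOLD = 80                             # percent threshold for boats

def cutter_meets_minimum(rate):
    """A cutter class meets its target by reaching the 53 percent minimum."""
    return rate >= CUTTER_TARGET_MIN

def boat_meets_threshold(rate):
    """A boat class meets its target at or above 80 percent."""
    return rate >= BOAT_THRESHOLD
```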
Figure 10 shows the total maintenance hours for ATON boats during fiscal years 2014 through 2018. In comparison to ATON cutters, which have a target range for the achieved material availability rate, ATON boats have a material availability threshold of 80 percent. According to our analyses of Coast Guard data, 4 of the 7 classes of ATON boats consistently achieved the 80 percent availability threshold during fiscal years 2014 through 2018. In particular, we found that the four smaller classes of ATON boats—those 16 to 26 feet in length—consistently achieved the 80 percent availability threshold during fiscal years 2014 through 2018, whereas the three larger classes of ATON boats—those 49 feet in length and longer—failed to consistently meet the 80 percent availability threshold during this 5-year period. In addition to the data on achieved material availability rates, Coast Guard officials from 3 of the 9 districts noted they experienced challenges with the availability of ATON boats. Figure 11 shows the achieved material availability rate for seven classes of ATON boats. The Coast Guard Has Developed Plans and Initiatives to Address ATON Challenges, but There Is Limited Assurance that They Will Be Effectively Implemented The Coast Guard has taken positive steps to manage the ATON program, including issuing strategic plans and directions, creating a unit to provide a Coast Guard-wide perspective in managing ATON, and developing various initiatives to improve management of fixed and floating ATON. However, we found that some ATON-related initiatives to be implemented Coast Guard-wide, such as the foam buoy implementation initiative, lack certain elements that would provide better assurance of effective implementation, such as milestones, completion dates, and desired outcomes.
Strategic Plans to Improve ATON Program Management The Coast Guard has developed strategic plans and directions that provide guidance for addressing challenges faced in managing fixed and floating ATON. In June 2007, the Coast Guard issued the Maritime Short Range Aids to Navigation Strategic Plan to coordinate and standardize a number of ATON mission activities. According to a Coast Guard official, at the time this strategic plan was issued, ATON units within the Coast Guard’s nine districts were operating largely independently in terms of planning and conducting ATON missions and activities. The 2007 plan changed this by developing a strategic approach to ATON management, and it identified a number of initiatives to improve ATON program management, including reducing ATON lifecycle costs and maintenance needs, increasing efficiency and service intervals, and improving the performance and reliability of fixed and floating ATON. More recently, the Coast Guard issued the Navigation Systems Strategic Voyage Plan for Fiscal Years 2017-2022, which updates and expands on the 2007 strategic plan by identifying priorities that impact ATON program management broadly and the management of fixed and floating ATON in particular. The plan specifically identifies initiatives, including the use of non-steel floating ATON, development of year-round floating ice ATON, increased use of LED lighting, and the increased use of less expensive fixed ATON alternatives in lieu of lighthouses. In addition to the 2007 and 2017 strategic plans, the Coast Guard also issues annual Strategic Planning Directions. These annual directions outline the Coast Guard’s strategic commitments and are the primary mechanism for apportioning resources and providing guidance to field units on initiatives and actions to improve mission operations, including the ATON mission.
For example, the Coast Guard has emphasized continuing to leverage electronic ATON technology where appropriate in an effort to reduce seasonal ATON workload, such as in districts with ATON in waters that are subject to freezing during a part of the year. Creation of the Waterways Operations Product Line In addition to developing a strategic approach to management of fixed and floating ATON through its strategic plans, the Coast Guard also created a new unit to provide a Coast Guard-wide, centralized perspective in managing fixed and floating ATON engineering and logistics. In particular, in 2016, the Coast Guard created the Waterways Operations Product Line (WOPL) to centrally manage the distribution, repair, and replacement of fixed and floating ATON and parts; as well as to formulate requests for ATON resources and funding. Since its creation, WOPL has coordinated and helped to implement various Coast Guard- wide initiatives to improve the management of fixed and floating ATON. These initiatives include centralized funding for ATON inspection and major repair services, changes in cost limits for floating ATON refurbishments, and expansion of commercial depot-level maintenance contracts to supplement the Coast Guard’s ATON major repair and refurbishment capacity. WOPL has also analyzed and recommended the transition from steel to foam buoys, where appropriate, to increase life cycle cost savings and reduce servicing times. In addition, WOPL has initiated changes to better manage and sustain the duration of floating ATON, including extending time in the water between major refurbishments from 6 to 9 years for some buoys and increasing the allowance for selected steel buoy hull repair weld hours. Initiatives to Address Specific ATON Management Challenges The Coast Guard has developed and is implementing a variety of initiatives to address specific ATON management challenges that were discussed earlier in this report. 
These initiatives include the following:

Improving the Availability of ATON Cutters and Boats: The Coast Guard has ongoing initiatives to extend the service lives and to recapitalize certain ATON cutters and boats to improve their availability rates. For example, in fiscal year 2019, the Coast Guard continued the major maintenance availability efforts on the 225-foot Seagoing Buoy Tender fleet. In addition, from 2006 to 2016, a portion of the Coast Guard’s ATON fleet (River Tenders, Buoy Tenders, and Construction Tenders) underwent a limited maintenance program to act as a bridging strategy until replacement assets could be acquired. Our 2018 report on Coast Guard acquisitions noted that the designed service life for each of these tenders is 30 years, but as of the time of the report, their average age was 53 years. In 2018, we reported that the Department of Homeland Security approved the Waterways Commerce Cutter Program to replace aging River Tenders, Buoy Tenders, and Construction Tenders. While the acquisitions have been approved, it will likely be years before the new cutters are built and deployed. The Coast Guard has also had an ongoing initiative since 2007 that has recapitalized 290 boats in its fleet.

Conducting Routine ATON Servicing in a Timely Manner: The Coast Guard has issued guidance to its districts to look for opportunities to reduce the number of ATON where doing so would not significantly increase navigational risk and to explore and leverage new technologies, such as the use of electronic ATON, where feasible. Collectively, these efforts should help to ease the servicing burden. In addition, the Coast Guard has also introduced initiatives focused on improving ATON servicing time. For example, officials in one district told us that they require their ATON units to send in monthly reports on ATON servicing due dates and plans.
District officials review this information and may shift ATON servicing work to another unit when the primary servicing vessel or unit is not available to provide the needed service in a timely manner. Another ongoing initiative the Coast Guard is exploring is the use of year-round buoys in ice-prone areas to reduce servicing requirements. The Coast Guard has received positive feedback in two of the three districts where such buoys have been in service.

Addressing Capacity Limits at ATON Major Repair and Refurbishment Facilities: According to a Coast Guard official, the Coast Guard has had commercial contracts in District 9 (the Great Lakes region) and District 13 (the Pacific Northwest) going back decades to provide floating ATON major repair and refurbishment services. Then, in March 2019, WOPL awarded four regional commercial contracts to provide increased capacity for ATON major repairs and refurbishments in an effort to help reduce the Coast Guard’s floating ATON major repair and refurbishment backlog. Specifically, the Coast Guard (1) renewed the contract in District 13; (2) awarded a contract covering California (part of District 11); (3) awarded a contract for a zone covering New England and the Mid-Atlantic (Districts 1 and 5); and (4) awarded a contract covering Guam (part of District 14). According to Coast Guard officials, the addition or renewal of these four regional contracts has resulted in greater capacity and flexibility to reduce the floating ATON major repair and refurbishment backlog.

ATON Management Initiatives Lack Certain Elements

While the Coast Guard has developed various initiatives to improve management of fixed and floating ATON, these initiatives do not contain certain elements, a gap that limits assurance that the initiatives will be effectively implemented.
For example, we found that many initiatives we evaluated do not contain milestone and completion dates for Coast Guard-wide implementation, which are elements that can guide decisions on the success of the initiatives. Under the foam buoy implementation initiative, the Coast Guard evaluated the use of foam buoys in lieu of steel buoys (which are more expensive to overhaul) and found that it was feasible to replace steel buoys with foam buoys in some locations but not in others. For example, the Coast Guard’s evaluations showed that foam buoys cannot stand up to ice conditions. On the basis of its evaluations, the Coast Guard plans to continue replacing certain classes of steel buoys with foam buoys where operationally feasible. However, the initiative does not contain milestone dates or desired outcomes for Coast Guard-wide implementation. According to guidance from the Project Management Institute, programs or projects—like the ATON-related initiatives being implemented by the Coast Guard—are to include specific, desired outcomes, along with the appropriate steps and time frames needed to achieve the final outcomes and results to implement the enhanced capabilities across the organization. In addition, our leading practices in capital decision-making state that such initiatives should include milestones and completion dates. According to Coast Guard officials, WOPL is a relatively new unit and is still developing ATON guidance and procedures for ATON-related initiatives and responsibilities to be performed by the districts. By updating its ATON-related initiatives to include the specific outcomes desired and time frames for completing them, the Coast Guard would have better assurance that its initiatives to address ATON management challenges will be effectively implemented.
Conclusions

Available Coast Guard data indicate that despite some slight declines in the condition of fixed and floating ATON, and increasing repair and recapitalization costs for floating ATON, the Coast Guard’s ability to meet its ATON mission did not show a marked decline during fiscal years 2014 through 2018. However, the future of the fixed and floating ATON and the vessels used to service them bears close watching given the challenges the Coast Guard faces in managing its fixed and floating ATON. The fact that many of the ATON have reached, or will soon be reaching, the end of their designed service lives could lead to an increase in the number of ATON requiring major repairs or replacement in the near future. According to Coast Guard data, the Coast Guard’s ability to refurbish or replace its aging ATON is made more challenging by limited capacity for conducting major repairs and refurbishments of floating ATON. The Coast Guard has taken positive steps to develop strategic plans to guide the ATON program, and these plans have led to the development of various initiatives to improve management of fixed and floating ATON, but these initiatives would benefit from the inclusion of certain elements, such as desired outcomes to be achieved and associated milestone dates, to have better assurance that the initiatives will be effectively implemented.

Recommendation

The Commandant of the Coast Guard should direct the Assistant Commandant for Engineering and Logistics and Assistant Commandant for Prevention Policy to update the Coast Guard’s ATON-related initiatives to include the specific outcomes to be achieved and associated time frames. (Recommendation 1)

Agency Comments

We provided a draft of this report to DHS for review and comment.
In its comments, reproduced in appendix II, DHS concurred with our recommendation and stated that the Coast Guard plans to review and update ATON-related initiatives to include specific outcomes with associated implementation milestones by December 31, 2020. DHS also provided technical comments that we incorporated into the report, as appropriate. We are sending copies of this report to the appropriate congressional committee, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (206) 287-4804 or AndersonN@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: The Coast Guard’s Fleet of Aids to Navigation Vessels

Figure 14 provides information on the cutters and boats that comprise the Coast Guard’s fleet of aids to navigation (ATON) vessels.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Christopher Conrad (Assistant Director), Hugh Paquette (Analyst in Charge), Chuck Bausell, Breanne Cave, Benjamin Crossley, Dorian Dunbar, Michele Fejfar, Tracey King, Joshua Lanier, and Adam Vogt made significant contributions to this report.
Why GAO Did This Study

One of the Coast Guard's statutory missions is the care and maintenance of ATON. Much like drivers need signs and universal driving rules, mariners need equivalent nautical “rules of the road.” As of November 2019, the Coast Guard managed 45,664 federal fixed and floating ATON that are designed to assist those operating in the U.S. Marine Transportation System, which includes about 25,000 miles of waterways, 1,000 harbor channels, 300 ports, and 3,700 terminals. According to the Coast Guard, as of July 2018, these ATON had a collective replacement value of about $1.6 billion. The Coast Guard has faced an array of challenges in managing its ATON, such as deteriorating buoys, and questions have been raised regarding the extent to which the Coast Guard is addressing these challenges. This report (1) describes what is known about the condition and costs of maintaining the Coast Guard's ATON, and (2) examines challenges the Coast Guard has experienced in managing its ATON and how it is addressing them. To address these issues, GAO reviewed ATON regulations and guidance, analyzed data on ATON condition and cost measures, collected input from all nine Coast Guard districts on ATON challenges, accompanied ATON units on mission activities, assessed agency initiatives using leading program management practices, and interviewed headquarters and field unit officials.

What GAO Found

The condition of the Coast Guard's aids to navigation (ATON), both fixed (e.g., lighthouses) and floating (e.g., buoys), has declined slightly while the overall costs for repairing or replacing them increased in recent years. According to Coast Guard data, its key metric for ATON condition—the Aid Availability Rate, or percentage of time that ATON are functioning correctly—declined from 98.0 to 97.1 percent during fiscal years 2014 through 2018, dipping slightly below the 97.5 percent target rate in fiscal years 2017 and 2018.
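The Aid Availability Rate described above is a simple percentage of functioning time over total time. The following illustrative sketch uses hypothetical fleet-wide hours (not Coast Guard data) to show how such a rate compares against the 97.5 percent target:

```python
# Illustrative sketch (hypothetical figures): the Aid Availability Rate is the
# percentage of time that ATON are functioning correctly, compared here
# against the Coast Guard's 97.5 percent target.

TARGET_RATE = 97.5  # percent

def aid_availability_rate(hours_functioning, hours_total):
    """Return the availability rate as a percentage of total time."""
    return 100.0 * hours_functioning / hours_total

# Hypothetical fleet-wide hours for one fiscal year (8,760 hours in a year)
rate = aid_availability_rate(hours_functioning=8_506, hours_total=8_760)
print(f"Aid Availability Rate: {rate:.1f}% (target {TARGET_RATE}%)")
print("Below target" if rate < TARGET_RATE else "Meets target")
```

With these hypothetical inputs the rate works out to roughly 97.1 percent, matching the fiscal year 2018 figure reported above and falling below the target.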
During this time period, the overall costs to repair and replace ATON increased from about $12 million in fiscal year 2014 to about $20 million in fiscal year 2018. According to Coast Guard data, the majority of the costs for fixed ATON were spent on repairs, whereas the majority of the costs for floating ATON were spent on replacements. The Coast Guard faces challenges in managing its fixed and floating ATON and has developed plans and initiatives to address them, but it has limited assurance that the plans and initiatives will be effectively implemented. According to Coast Guard officials, the challenges include decreased availability of vessels to service ATON; reduced ability to provide routine ATON servicing in a timely manner due to severe weather, among other factors; and limited capacity at ATON major repair and refurbishment facilities. The Coast Guard has developed plans to guide the ATON program, and these plans have led to the development and implementation of various initiatives at the headquarters and field unit levels to address these challenges. However, GAO found that the initiatives do not contain certain elements that help ensure effective implementation—such as desired outcomes and schedule milestones and completion dates—as recommended by leading program management practices. According to Coast Guard officials, they are still developing guidance and procedures for ATON-related initiatives that are to be implemented by the districts. By updating these initiatives to include certain elements, such as the specific outcomes desired and time frames for completing them, the Coast Guard would have better assurance that its initiatives to address ATON management challenges will be effectively implemented.

What GAO Recommends

GAO recommends that the Coast Guard update its ATON initiatives by including the specific outcomes to be achieved and associated time frames. The Department of Homeland Security concurred with the recommendation.
GAO-19-237
Background

U.S. and foreign air carriers have cooperated in a variety of ways to expand their international reach and service offerings. Legal requirements in the United States and other countries prevent mergers between U.S.-owned airlines and foreign-owned airlines and also place restrictions on carriers providing end-to-end service between locations within other countries as well as between third countries. Air carriers may cooperate with each other to provide a wider range of services, more seamlessly, despite these restrictions. Simple forms of cooperation include, for example, “interlining,” a voluntary commercial agreement to carry passengers across two or more carriers on the same itinerary, and “codesharing,” an agreement whereby carriers place their marketing code on a flight operated by another carrier. This practice allows consumers to book a single ticket for an itinerary involving two separate airlines, with one airline selling tickets under its own code for travel on the other carrier’s flight. These cooperative activities allow carriers to access each other’s networks with varying degrees of cooperation. As part of their cooperative efforts, some carriers have formed global alliances. An alliance is an agreement between two or more airlines to link each of the airlines’ route networks and coordinate on specified activities, such as marketing and sales; coordination of airport operations (e.g., sharing gates or baggage facilities); and frequent flyer program accrual and redemptions. Alliances represent more involved coordination than interline or codeshare relationships.
This expanded cooperation, according to DOT, allows participating carriers to expand the geographic reach of their respective networks in ways that they would not be able to achieve on their own, because of the aforementioned legal restrictions and due to the economic and operational difficulties a single carrier would face implementing such an expansion in foreign markets. As of January 2019, there were three global airline alliances, each with a major U.S. member airline and multiple foreign partners: Oneworld (American Airlines); SkyTeam (Delta Air Lines); and the Star Alliance (United Airlines). These three airline alliances have 61 airline members: 13 for Oneworld, 20 for SkyTeam, and 28 for the Star Alliance. Many of the carriers within each of these alliances, as well as other carriers, have pursued antitrust immunity from DOT to cooperate more closely on key economic elements of their businesses that U.S. antitrust laws might otherwise prohibit. The specific activities are delineated in cooperative agreements, and carriers have the option to implement such agreements without antitrust immunity from DOT. Carriers are more likely to pursue immunity when the proposed cooperation—and risk of antitrust violations—involves increasingly integrated business functions, according to DOT. However, once carriers that are party to such an agreement are immunized, they can cooperate more comprehensively than through interlining and codesharing arrangements (see fig. 1). For example, these agreements may stipulate that carriers share revenues across their flights, regardless of which carrier operates the flight, and jointly coordinate on schedules, prices, and sales. Since 1993, when DOT immunized the first cooperative agreement, between Northwest Airlines and KLM Royal Dutch Airlines, DOT has adjudicated 38 cases that involved one or more U.S. carriers and foreign carriers.
Currently, United, Delta, and American—and their major foreign airline partners—are each members of multiple immunized cooperative agreements with their foreign airline partners. As a result, immunized carriers now provide air service across the globe. For example, in 2017, immunized carriers across these three alliances provided approximately 75 percent of the available seats on trans-Atlantic flights between the United States and Europe, and also provided trans-Pacific service to Asia and Australia, as well as service to South America. DOT’s process for reviewing each application for antitrust immunity includes two analytic steps. First, DOT must decide whether to approve a proposed cooperative agreement. In this step, by statute, DOT is directed to approve cooperative agreements deemed “not adverse” to the public interest. DOT conducts a competitive analysis to make this determination. Second, DOT decides whether to grant antitrust immunity to the agreement’s partners for activities undertaken pursuant to the approved agreement. DOT’s statutory authority provides for such a grant of immunity only to the extent necessary for the parties of the agreement to go forward with the transaction and only if the immunity is “required by the public interest,” that is, if it would create consumer, commercial, or other public benefits that would not otherwise occur. These steps are discussed in detail in the following section. The statute does not detail specific competitive metrics or public benefits that DOT must consider in its evaluation but rather provides DOT leeway in making such determinations. The Department of Justice, which is responsible for reviewing and approving domestic mergers, may provide DOT with input during deliberations. DOT may also consult with relevant authorities in the foreign partner’s country.
In granting antitrust immunity, DOT may require carriers to comply with specific conditions and, for grants of antitrust immunity approved since 2009, reporting requirements. DOT’s process to consider requests for immunity follows procedural steps delineated in the Administrative Procedure Act (APA). The APA provides for public notice and comment. At the beginning of the proceeding, carriers applying for immunity place information about the proposed cooperative agreement in a public docket. DOT staff then review this material, may request additional information to address any questions raised by their review, and will solicit comments from the public. The APA, in contrast to Department of Justice merger review procedures, specifies steps that afford public involvement and requires agencies to respond to the public comments. In DOT’s proceedings, the Department typically issues “show cause” orders that articulate the tentative approval or disapproval of the application. After publishing this show-cause order, DOT solicits additional public comments for review prior to issuing a final decision. See figure 2 for a summary of this process. DOT’s statutory authority indicates that DOT may conduct “periodic reviews,” but the statute does not define the nature or frequency of these reviews. All of DOT’s orders granting antitrust immunity state that DOT may amend or revoke a grant of immunity at any time. Further, after DOT issues a final order that approves a request for antitrust immunity, the public docket remains open and provides a forum for ongoing public comments to which DOT is obligated to respond.

Potential Effects on Consumers Are Included in DOT’s Assessment of Applications for Antitrust Immunity

DOT analyzes competitive and public benefit effects, taking into consideration the potential effects on consumers, when deciding whether to approve cooperative agreements and grant carriers antitrust immunity, based on our review of DOT’s processes.
In competitive and public benefit analyses, DOT uses the professional experience and expertise of staff to identify and assess relevant market factors, the terms of proposed cooperative agreements, supporting documents, and other information in light of the facts and circumstances specific to each case. DOT’s competitive analysis focuses on the likely effect of the cooperative agreement on competition in key airline markets, while the public benefits analysis focuses on the likelihood of carrier integration yielding consumer benefits. As discussed earlier, DOT’s process includes opportunities for stakeholders’ participation. Stakeholders we interviewed considered the overall review process transparent, though some had criticisms of the underlying economic evidence DOT uses to predict if, and how, consumer benefits might arise.

DOT’s Competitive Analysis of Proposed Cooperative Agreements Examines Potential Consumer Effects

The potential effects of proposed cooperative agreements on competition, and thus consumers, are central to DOT’s analysis. Specifically, DOT looks to see how the agreement may affect competition across routes affected by the alliance agreement. To make this assessment, according to DOT documentation that we reviewed and officials whom we interviewed, DOT focuses on three key elements of the proposed agreement. Specifically, DOT identifies (1) the geographic scope of the proposed alliance and which markets the agreement would affect; (2) the number of competitors in each market, their market shares, and the level of market concentration; and (3) the feasibility and likelihood of market entry by new competitors into markets that might be adversely affected by the agreement, as well as the ability of existing carriers to compete in such markets (see table 1). DOT’s assessment is based on an array of information provided by applicants and third parties.
This information may include competitive analyses or other studies conducted by consulting economists for the applicants, and business plans and data, among other things. DOT may also independently use departmental databases to conduct its own analysis, including data DOT collects from foreign carriers pursuant to data-reporting requirements in existing grants of antitrust immunity. DOT looks at competitive issues at the region-to-region (e.g., United States to Europe), country-to-country (e.g., United States to France), and city-to-city levels (e.g., the New York-to-Paris city-pair market), or airport-to-airport pairs (e.g., Chicago O’Hare-to-London Heathrow). The analysis focuses largely on city- or airport-pairs because the sale of air transportation between cities/airports is the product being sold by airlines and purchased by the consumer, according to DOT officials. Consequently, DOT looks most closely at those city-pair markets where the number of competitors is expected to decline, such as from 3 to 2 or from 2 to 1, when the applicants are counted as a single competitor. According to DOT officials, this approach to competitive analysis is consistent with legal and economic practice and with the application of antitrust laws and principles used by other competition authorities, such as the Department of Justice. Officials then recommend determinations as to whether such a reduction in competitors in these markets is likely to be harmful to competition and, in turn, to consumers. According to DOT officials, the department has no predetermined threshold for defining substantive competitive harm because it would not be appropriate to pre-define what constitutes a “substantial reduction in competition” that would necessitate disapproval of an application. Instead, the Department looks at the characteristics of discrete markets where there is a reduction.
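The city-pair competitor counting described above can be illustrated with a minimal sketch. The carriers, shares, and city pair below are hypothetical, and DOT's actual methodology is more involved; the sketch simply treats the applicants as a single competitor and flags a reduction, using the 5-percent share threshold for competitive service discussed later in this report:

```python
# Illustrative sketch (hypothetical data): count effective competitors in a
# city-pair market before and after treating alliance applicants as one,
# using a 5-percent market share threshold for competitive service.

SHARE_THRESHOLD = 5.0  # percent market share needed to count as a competitor

def effective_competitors(shares, merged=None):
    """shares: carrier -> percent share; merged: carriers counted as one."""
    shares = dict(shares)
    if merged:
        combined = sum(shares.pop(c, 0.0) for c in merged)
        shares["+".join(merged)] = combined
    return [c for c, s in shares.items() if s >= SHARE_THRESHOLD]

# Hypothetical shares in a single city-pair market
market = {"Carrier A": 40.0, "Carrier B": 35.0, "Carrier C": 21.0, "Carrier D": 4.0}
before = effective_competitors(market)
after = effective_competitors(market, merged=["Carrier A", "Carrier B"])
print(f"Competitors before: {len(before)}, after: {len(after)}")
if len(after) < len(before):
    print("Reduction in competitors: market warrants closer review")
```

In this hypothetical market the count drops from 3 to 2 when the applicants are combined, the kind of reduction DOT examines most closely.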
In addition to looking for potential competitive harms in the city-pair analysis, DOT’s competitive analysis also assesses if the agreement could enhance competition in some markets. In particular, DOT may find that certain markets will have an increase of an effective competitor due to the agreement. Specifically, based on applicants’ filings, DOT may expect the cooperating carriers to enter new routes that neither had previously served. For example, DOT approved a grant of immunity in 2010 based on expectations that the applicants would have increased opportunities for new or expanded transpacific routes and service and enhanced connecting options, among other benefits. Additionally, if two carriers each served a market with a market share under 5 percent—the threshold DOT uses for deeming a carrier as providing competitive service on the route—the agreement may push that market share above the 5-percent threshold and effectively result in a new competitor on the route. Also, according to DOT, the carriers’ agreement could result in connecting flights across two carriers to become effectively “online” (as opposed to “interline”) for some city-pair markets due to the agreement. This could potentially offer consumers competing options among airlines that provide direct flights on a given route. We reviewed DOT documentation in which its analyses had projected these improved competitive outcomes across thousands of city-pair markets based on an application for a cooperative agreement. Finally, according to DOT orders, carrier agreements can promote competition in various markets, if the agreements strengthen inter-alliance network competition. For example, DOT approved and immunized the cooperative agreement between the major partners of the Oneworld alliance, in part, based on the finding that a third immunized global network could better discipline the fares and services offered by the Star and SkyTeam alliances. 
Specifically, in approving the immunity application, DOT noted consumer benefits, stating that “enhanced inter-alliance competition is beneficial for consumers across many markets, in particular the hundreds of transatlantic markets in which the applicants become more competitive as a direct result of the alliance. Travelers in those markets gain new competitive options.” Though DOT may find prospective competitive harm from the agreement, such as a reduction in the number of competitors in certain markets, DOT does not necessarily reject the application if a DOT-stipulated remedy can potentially mitigate those harms, according to department officials. DOT has used different potential remedies over the years, including carving out specified city-pairs from a grant of immunity and requiring carriers to divest slots at specific airports (see table 2). DOT officials indicated that carve-outs are less favored now than in the past because carve-outs on specific routes can, in DOT’s view, diminish broader public benefits of the alliance by limiting the degree to which carriers can merge their operations. DOT currently has 11 active carve-outs in three alliances, with the last carve-out issued in 2009. More recently, DOT officials indicated that mitigations based on slot divestitures have the potential to better target competitive harms on specific routes by enabling new entrants at cities with slot-constrained airports. DOT required slot-based remedies in two grants of immunity, one in 2010 and one in 2016. In the 2010 immunity grant, DOT required applicants to relinquish slots at London’s Heathrow airport and specified that two slots must be for Boston-Heathrow services and two for services between any U.S. location and Heathrow. DOT expected these remedies, once implemented, to enable other carriers to start new services to compete with the newly immunized alliance, thereby ensuring adequate competition remains in the affected market.
Whether and what mitigation strategies are pursued can be a contested aspect of the proceeding, in which DOT, the applicants, and third-parties debate the competitive implications of the agreement and the mitigations based on the facts and circumstances of each situation. In a 2016 case involving Delta and Aeromexico, DOT included two new or rarely used conditions in the grant of antitrust immunity. Specifically, to address competitive concerns specific to this case, DOT made its approval conditional upon the removal of exclusivity clauses in the joint venture agreement that precluded specified types of cooperation with other carriers. Though the carriers argued that such clauses were necessary to encourage long-term investment in their cooperative products and services, DOT took into account the perspectives from stakeholders’ docketed comments, concluding that such clauses could give the carriers an undue ability or incentive to foreclose actual or potential competition. Additionally, DOT placed a 5-year sunset provision on its grant of antitrust immunity to Delta and Aeromexico to allow DOT a defined opportunity to revisit whether specific slot constraints identified at the Mexico City airport had been resolved. Prior grants of immunity regularly included requirements for carriers to resubmit their cooperative agreements to DOT after 5 years as part of DOT’s subsequent monitoring (discussed below), but the immunity was not time limited. DOT officials explained the inclusion of the sunset provision was to address concerns specific to this case, rather than a new departmental policy. Once the competitive analysis and any decisions on mitigations are complete, DOT determines whether, on balance, the proposed agreement would likely have an overall positive, neutral, or negative competitive effect and decides whether to approve the agreement. 
In all cases where DOT has granted antitrust immunity, DOT found the proposed cooperative agreements, on balance or with any specified remedy in place, to be either neutral or pro-competitive. However, DOT has denied approval of a proposed agreement, citing that the carriers’ combined market share on routes where they both operated service would be so dominant that they could, for example, raise prices to the detriment of consumers.

DOT’s Evaluation of Potential Public Benefits Depends on Whether Proposed Cooperation Is Sufficient to Yield Pro-Consumer Effects

DOT conducts public benefits analyses to determine if there are benefits of proposed cooperative agreements for consumers. Based on our review of applications, carriers typically point to varied benefits, such as the potential for lower fares on certain routes, improved connectivity, and reciprocal frequent flyer benefits for consumers. In considering the public benefits claims made by applicants as well as any potential benefits of the proposed agreement identified by DOT, the department assesses whether (1) the public benefits identified are significant and likely to be realized in a timely fashion and (2) a grant of immunity is necessary for the carriers to go forward with the agreement such that benefits will be achieved. DOT officials emphasized that this assessment focuses on the carriers’ anticipated level of integration. The officials said higher levels of cooperation in a proposed agreement, given the nature of the airline industry and depending on the economic incentives employed, can lead to lower fares, especially for connecting itineraries. Though DOT officials acknowledged that the flow of consumer benefits due to high levels of carrier cooperation is not absolute or certain, they said DOT’s analysis has consistently supported the notion that connecting passengers who traverse carriers on a given itinerary pay less as cooperation between alliance carriers increases.
DOT has applied this policy in each of the proceedings involving grants of antitrust immunity to the three major air alliances—SkyTeam (2008), Star Alliance (2008), and Oneworld (2010)—as well as subsequent cases. For example, DOT approved immunity within the Star Alliance based on its expectation that fares for connecting itineraries on Star’s transatlantic routes would decrease, benefiting the majority of its transatlantic passengers. DOT further noted that this connecting service would “discipline fares on non-stop routes,” as well. The practical consequence of this policy, according to DOT officials, is that DOT expects applicants to present detailed cooperation agreements, which show integrative efficiencies and processes, at the time the requests for antitrust immunity are made. In other words, DOT expects that antitrust immunity, when provided, will provide consumers with an array of benefits—lower connecting fares, new route offerings, among others—that follow from these business efficiencies. DOT’s public benefits analysis considers the specific provisions of each proposed agreement to assess how the applicants plan to coordinate a wide range of business functions. These can include network and capacity planning, scheduling, pricing, sales, revenue management, and customer service, among other considerations. DOT officials told us that they examine the carriers’ revenue-sharing plans, corporate strategic documents, and other relevant documentation. For example, DOT may look to see if carriers plan to:

- Share revenue in a manner that provides incentives for carriers to coordinate the management and selling of their combined networks to make more seats and more frequencies available on routes linking their respective networks, substantially increasing connectivity and time-of-day schedule options and improving customer service by treating their partner’s customers just as they would their own.

- Align their different ticket fare and availability classes and procedures such that their revenue management systems make seats available on domestic flights for passengers connecting from the foreign partner’s flights at the same levels and on the same terms as if customers were connecting online from their own international flights.

- Coordinate marketing and incentivize sales staff to promote the carriers’ combined, rather than individual, networks, thereby creating more options for consumers.

- Align products for a consistent, seamless passenger experience (e.g., baggage fees, upgrade policies, frequent flyer program rules).

According to DOT, the agency further reviews governance and revenue-sharing provisions to ensure that sufficient economic incentives exist to substantially increase the number of passengers flowing through the combined networks and to significantly increase capacity (particularly on hub-to-hub routes and home country hub-to-beyond foreign hub routes). Further, DOT has sought detailed information from applicants on their plans to increase capacity beyond what they would do in the counterfactual scenario in which DOT did not grant immunity. These officials said that DOT places particular emphasis on the quantity, likelihood, and viability of additional capacity when determining whether the application will produce substantial benefits that might not occur if applicants choose not to go forward with the agreement in the absence of a grant of immunity. DOT also considers filings from other parties that support or cast doubt on the applicants’ claims. For example, in 2005, DOT denied an application from six carriers seeking immunity for the SkyTeam Alliance. According to DOT officials, based on the case record and competitive circumstances at the time, DOT found that immunizing the proposed agreement would not provide sufficient public benefits.
This finding comported with arguments from objecting parties that immunity was not required to produce benefits because there was a high likelihood that SkyTeam members would continue integrating their management and operations to maintain and maximize the profitability of their existing relationships. As with the competitive analysis, DOT officials use their professional experience and expertise, as well as the case record of each application, to determine the likelihood of benefits and the necessity of antitrust immunity for carriers to implement their proposed plan quickly. As a general practice, DOT does not attempt to replicate the benefits analyses that carriers may provide as part of their application, according to DOT officials. DOT officials explained that they use their knowledge of the industry to verify and validate the applicants' benefit claims by qualitatively assessing the reasonableness of the market and broad economic assumptions underlying these claims. Based on this assessment, DOT may condition a grant of immunity on carriers' first demonstrating a readiness and ability to implement the agreement. For example, in one case, the department did not initially grant antitrust immunity to the partners of a cooperative agreement because DOT determined that incompatibilities in the carriers' information technology systems would prevent the partnership from yielding consumer benefits. Consequently, DOT officials said they advised the applicants to resolve these incompatibilities or risk DOT finding the benefits of the proposed cooperative agreement implausible and, in turn, antitrust immunity unwarranted. Similarly, DOT has conditioned several grants of antitrust immunity on the carriers' expeditious implementation of the proposed cooperative agreement.
Based on our analysis of DOT's antitrust immunity proceedings, DOT has ultimately approved most of the requests for antitrust immunity that it has received, with some approvals stipulating competitive remedies. Specifically, DOT has adjudicated 38 applications involving a U.S. and foreign carrier(s) since 1993, granting antitrust immunity 31 times, according to our analysis. Twenty-three of these grants remain in effect across 13 different carrier agreements. See appendix I for information on adjudicated immunity proceedings involving U.S. and foreign carriers. In two proceedings, DOT denied antitrust immunity based on findings from its public benefits analysis. Specifically, in one proceeding, DOT found that the overall level of public benefit was small because the proposed alliance focused on a single route and was not likely to create new routes or a significant number of new travel options for consumers. In the other proceeding, DOT noted that code sharing or other less-involved forms of collaboration could produce benefits similar to the new and expanded service additions suggested by the carriers. Consequently, DOT denied these applicants' requests for antitrust immunity.

Stakeholders We Interviewed Generally Agreed That DOT's Process Was Transparent, but Disagreed on Extent That Immunity Is Beneficial to Consumers

Most stakeholders we interviewed, in particular representatives from major carriers, considered DOT's final decisions and application review process to be largely transparent, but lengthy. DOT officials and some stakeholders we interviewed underscored that there are opportunities for interested parties, including competing airlines, to examine all submitted application materials—including confidential and proprietary information—and to provide substantive comments. DOT officials emphasized the importance of a complete record of information on the official docket as the basis for their decisions.
DOT is required to make a final decision within 6 months from the date of an application but may issue a notice to suspend the procedural schedule in order to establish a complete record. Some carriers we interviewed said that DOT's review and efforts to establish a complete record can cause a proceeding to be lengthy. For example, the most recently completed proceeding to date took over 18 months from when the application was filed until DOT issued a decision. This proceeding involved a number of filings that pointed to likely harm to present and future competition from independent carriers in specific markets due to the potential for exclusionary behavior by the applicant carriers. Our documentation review affirms DOT's and stakeholders' view that the available records of proceedings include DOT's analyses and findings. With the exception of confidential or proprietary information, all applications, notices, DOT orders, and other documentation related to an application can typically be found on the public docket. Our review of all the proceedings found that each DOT order providing a grant of immunity included discussion of DOT's findings from its competitive and public benefits analyses, as well as discussion of why and how DOT arrived at stipulated remedies, if any. For example, as previously discussed, in the 2010 Oneworld order, DOT described the potential competitive harm at specific airports that the department identified in its analysis and its rationale for requiring a divestiture of slots at those airports as a remedy for those potential harms. Though we found consensus among stakeholders that DOT's process is transparent, there is disagreement among the stakeholders we interviewed about the potential benefits of immunity for consumers.
Specifically, two third-party stakeholders and representatives of all non-immunized carriers we interviewed suggested that carriers do not need antitrust immunity to cooperate in ways that benefit consumers, such as through codeshare and interlining agreements. Some of these stakeholders noted that immunized carriers, through their cooperative agreement, could have access to better market data than non-immunized carriers or could leverage their increased network size to gain unfair competitive advantages. Representatives for all three U.S. carriers with approved immunized agreements indicated these immunities were, and continue to be, essential to their ability to provide high-quality service to their customers. Moreover, these carriers believed that changes to DOT's process should be focused on expediting the process so that public benefits achievable only through grants of antitrust immunity could be realized more quickly. DOT officials indicated they are aware of the controversial nature of grants of antitrust immunity and noted that it takes time for DOT to gather and assess the evidence in each proceeding. These officials indicated that the department considers different views when reviewing applications, monitors academic and other literature on the topic, and applies these ideas as the officials deem appropriate in their decision-making.

DOT Monitors Immunized Cooperative Agreements in Various Ways but Does Not Report on the Effects of Granted Immunities

DOT Undertakes Multiple Activities to Monitor the Implementation and Effects of Immunized Cooperative Agreements

DOT conducts a number of activities to oversee and monitor individual immunized cooperative agreements and to understand how broad trends in international air competition affect immunized agreements.
For example, DOT officials responsible for the program explained that they analyze a variety of international and domestic airline-competition issues including, but not limited to, airline alliances and, accordingly, keep track of market developments, such as new carriers entering markets and changes in market shares of established carriers. By monitoring these broad trends, DOT is able to better understand industry dynamics, according to officials we interviewed. For specific grants of immunity, DOT officials emphasized that they may tailor some monitoring activities to the nature of the agreement and the specific requirements set forth in DOT's grant of immunity. For example, DOT officials explained they track compliance with the required slot divestitures in one grant of immunity through a designated trustee or, for immunities that require carriers to maintain capacity on certain routes, through DOT officials' own review of existing flight schedule databases. DOT officials noted that the department's specific monitoring activities are undertaken to track the implementation of cooperative agreements and to ensure that carriers comply with the terms of immunity grants (see table 3). In recent years, DOT's monitoring activities have focused on the status of cooperation under immunized agreements and whether that cooperation is leading to merger-like efficiencies. To that end, according to DOT officials, all seven grants of immunity approved since 2009 require carriers to submit confidential annual reports to DOT. These reports cover topics including the public benefits of the agreement and commercial developments between the partners.
Each year, DOT develops a template for these reports that delineates what information must be included on operational aspects of the implemented agreement (e.g., integration of routes and service planning) and the extent to which partnered carriers have aligned their customer service policies to provide customers with a consistent experience across partners, among other topics. These reports, and DOT's associated reviews, are the core of DOT's current monitoring efforts, according to DOT officials, and, according to representatives of the carriers submitting these reports, provide DOT with extensive information on the implementation status of the immunized agreement. Our examination of the most recent of these reports, for 2017, affirms they include considerable information on the implementation of the agreement and status of the alliance. DOT's monitoring activities also include some review of empirical information on the effects of individual immunities. Specifically, as discussed above, carriers seeking immunity routinely identify anticipated consumer benefits, such as lower fares and greater frequency of service, and DOT has predicated grants of immunity on these expected benefits. According to DOT officials, they monitor available schedule, pricing, and other data to check whether observed outcomes are consistent with expectations, and if not, whether other factors, such as fuel prices or other market changes, provide a qualitative explanation of observed trends. The 2017 annual reports that carriers submitted to DOT also included information on these trends, based on our review of these documents. Likewise, according to DOT officials, DOT takes steps to track the status of remedies, such as whether airport slots were, in fact, divested and market entry occurred as expected. DOT's specific steps to do so vary depending on the nature of the remedy and the availability of relevant information.
Furthermore, DOT officials commented that third parties, such as other air carriers, have incentives to alert DOT to concerns about violations of exclusivity prohibitions, which helps DOT verify and enforce this condition of some immunity grants. DOT's monitoring activities do not typically include independent econometric analysis to examine the effects of the immunities it has granted, according to DOT officials, but the department tracks economic literature on these effects and has recently commissioned its own study. As we have noted, DOT looks for substantial integration among carriers requesting immunity as an indication that pricing efficiencies will be attained and benefit consumers. For a connecting airline route where one carrier serves one leg of the route and a different carrier serves the other leg, it is broadly recognized by economists that joint price-setting by the carriers will generally result in a lower airline fare. However, in cases where two airlines are competing on the same route—as could be the case on nonstop routes between the U.S. and another country—carrier coordination could reduce the extent of effective competition and lead to higher fares. Additionally, lesser forms of coordination that do not rely on a grant of immunity may also address the "double pricing" inefficiencies on connecting routes. Academic literature that uses statistical modeling to examine the effect of antitrust immunity has come to differing conclusions on the effect of immunity on fares for airline passengers. For example, one study found that connecting routes served by carriers with immunized cooperative agreements had lower prices compared to connecting routes served by carriers with other forms of cooperative agreements that were not immunized, and this study also found that immunities did not lead to higher fares on nonstop routes.
However, another study found that antitrust immunity reduced competition and, thus, caused higher prices on nonstop routes; this study also found that pricing efficiencies on connecting routes did not require antitrust immunity. Recognizing the varying findings of the available literature, DOT commissioned a specialized study in 2016 to improve its understanding of the effect of immunities and airline joint ventures on consumer prices. According to DOT, the department provided guidance, data, and other input to support this work, but did not assist in the analysis or guide its conclusions. The report was provided to DOT in the summer of 2018, and according to DOT officials, as of December 2018, the department was reviewing the study's findings and considering how, if at all, it might apply the methodologies used in the study to DOT's own monitoring activities in the future. DOT officials also indicated they have not made any final determinations about what, if any, adjustments may be appropriate to existing grants of immunity or to DOT's process for considering future immunity applications based on the study's findings. Based on our review of antitrust immunity proceedings, DOT has rarely amended or modified, and has seldom revoked, the immunity of an approved cooperative agreement. However, DOT has changed some terms of approval when carriers have sought immunity for updated agreements that, for example, added other carriers to an existing agreement. DOT officials explained that initiating a change in an existing immunity grant is a time-consuming and technically difficult process because it would involve the same administrative steps as the initial approval process. Further, DOT officials indicated that carriers have been generally responsive to the requirements laid out in DOT's grants of immunity, and as a result, DOT has not needed to pursue many corrective actions.
Moreover, these officials explained that they are well aware of carriers' plans to pursue new immunized agreements, and as a result, DOT officials are able to await those proceedings to make incremental changes to the terms of DOT's original approval. For example, DOT's early grants of immunity did not include annual-reporting requirements, but as carriers updated their agreements and sought new immunities, DOT used these new proceedings as an opportunity to add this requirement.

DOT Does Not Report on Monitoring Activities or on Whether Immunities Have Produced Anticipated Benefits

There is generally little, if any, information from DOT available to external stakeholders and the public regarding DOT's monitoring efforts and its findings on the effects of granted antitrust immunities. DOT publishes one summary document on its website that lists every active and inactive immunized cooperative agreement. This document, which, according to officials, DOT updates periodically with each new grant of immunity, includes web links to the dockets of formal proceedings associated with each immunity application and grant. This document provides a single portal for anyone to access materials related to antitrust immunities that are spread across multiple dockets. Each docket remains open for public comment as long as DOT's grant of immunity remains active. For example, in 2017, stakeholders submitted public docket comments critical of the market effects of a cooperative agreement awarded antitrust immunity 15 years earlier. In this case, DOT provided a formal, public response, as required, on the issues raised. DOT does not report information on its own voluntary monitoring activities in public dockets or elsewhere. For example, DOT does not post information on whether immunized carriers have submitted required annual reports or, as periodically required, resubmitted their cooperative agreements to DOT.
Moreover, DOT does not release its assessments of these materials nor does DOT make any public statements on whether a grant of immunity yielded, in actuality, the types of carrier cooperation expected, whether DOT-imposed remedies were implemented and had the expected results, or whether the immunity generated the public benefits as expected when approved. As described previously, DOT has approved grants of immunity based on the expectation of various public benefits. These potential benefits include, for example, lower consumer prices for connecting flights, expanded route and schedule offerings, and increased market entry and competition. DOT provides no reports to the public or Congress related to whether these expectations were met. Internal controls help program managers achieve desired results and adapt to shifting environments, evolving demands, changing risks, and new priorities. As part of an internal control system, management should externally communicate quality information. Attributes of this principle call on federal program managers to communicate quality information externally so that external parties can help the government achieve its objectives and address related risks. Generally, according to this internal control standard, government reporting is intended for the executive branch’s decision makers and Congress as well as the general public. Management may select appropriate methods for external reporting. Accordingly, program managers should consider what methods are appropriate for such a broad audience, considering factors such as the nature of information and cost. In the context of grants of antitrust immunity, relevant parties include Congress, industry stakeholders, and the general public. Each of these groups may have distinct needs and abilities to access, understand, and act upon information about the effects of antitrust immunities in the marketplace. 
DOT officials cited several reasons for not reporting on their monitoring activities and related findings. DOT officials underscored that much of the information gathered in its voluntary monitoring efforts—annual reports, in particular—is proprietary and, therefore, not information DOT could publicly disclose. Representatives from immunized carriers we interviewed also stressed that public disclosure of the business plans and alliance status assessments provided to DOT would be damaging to their business. DOT officials also expressed concern that commentary from the department about the effects of immunities could be construed as departmental promotion of a specific alliance, or "prejudgment" of an issue that could come before the department in a future proceeding. DOT officials also said competition authorities, such as the Department of Justice, do not typically address the results of a case (e.g., post-merger analyses) and are only involved with the process and guidelines associated with reviewing and adjudicating a case. While there are valid concerns about the publication of proprietary information and statutory prohibitions on doing so, there are available avenues for DOT to report broadly on the findings of its monitoring activities and assessments of the consumer effects of antitrust immunities. Further, many of the expected benefits of grants of immunity—such as changes in prices, schedules, and markets served—can be evaluated without relying on proprietary information. For example, the number of competitors serving city-pair markets and carriers' market shares can be calculated—as DOT does during the approval process—using publicly available data. Price changes under the immunity can also be evaluated using publicly available information.
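To illustrate the kind of non-proprietary analysis described above, the sketch below computes the number of competitors, market shares, and a Herfindahl-Hirschman Index (HHI) for a single city-pair market. The carriers and passenger counts are hypothetical, not drawn from any DOT dataset, and the calculation is a minimal example rather than DOT's actual methodology:

```python
# Illustrative sketch only: the carriers and passenger counts below are
# hypothetical and are not drawn from any DOT dataset.

# Hypothetical annual passenger counts for one city-pair market,
# keyed by marketing carrier.
passengers = {
    "Carrier A": 410_000,
    "Carrier B": 380_000,
    "Carrier C": 150_000,
    "Carrier D": 60_000,
}

total = sum(passengers.values())

# Each carrier's market share, in percentage points.
shares = {carrier: 100 * count / total for carrier, count in passengers.items()}

# Number of competitors serving the market.
competitors = len(passengers)

# Herfindahl-Hirschman Index: the sum of squared market shares,
# a standard measure of market concentration.
hhi = sum(s ** 2 for s in shares.values())

print(f"Competitors: {competitors}")                  # 4
print(f"Largest share: {max(shares.values()):.1f}%")  # 41.0%
print(f"HHI: {hhi:.0f}")                              # 3386
```

In practice, passenger counts of this kind could be drawn from DOT's publicly available origin-and-destination ticket survey data, so an analysis along these lines would not depend on any carrier-proprietary submission.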
Likewise, an assessment of the market outcomes of competitive remedies—such as whether slots were divested and competitors provided new service as expected—does not require business-sensitive information about the internal workings of an immunized alliance, but rather data on the public actions of carriers in the marketplace. These data are publicly available through schedule data and information in DOT datasets. Government reporting can also protect proprietary information from improper disclosure, either by issuing restricted reports to Congress or by stating findings at a very general level. For example, the Federal Trade Commission has balanced the protection of proprietary information from public disclosure while also reporting on the commission's findings of the effects of its commission-imposed competitive remedies. Specifically, the Federal Trade Commission published two merger remedies studies, eliminating the names of and financial information about the merging parties and the buyers of the divested assets in the publicly available versions. The Federal Trade Commission made both of its studies public. The lack of information available on the observed effects of immunities in the marketplace, including the effects of DOT-stipulated remedies, can make it difficult for external stakeholders to assess what consumer benefits have, or have not, been realized. According to consumer and antitrust organizations we interviewed, the lack of available information left them speculating that DOT did not conduct any monitoring of granted immunities after approval. Likewise, representatives from two of the three non-immunized carriers we interviewed noted the contrast between the transparency of DOT's approval process and the opacity of its monitoring process.
Additionally, two stakeholders we interviewed opined that airline alliances have harmed consumers by, for example, creating restrictive rules that make certain types of travel more difficult than in the past, among other anti-consumer effects. Some stakeholders mentioned they had no basis to review or comment on whether DOT's monitoring activities are sufficient. Another stakeholder mentioned that, in the absence of any reports or other information from DOT, they did not know if alliances have delivered the consumer benefits initially expected. DOT officials stressed that because the process for consideration of immunity is public, any outside party may petition the department for review of an existing immunized alliance and provide information on the docket—which remains open—if any party believes that an alliance is acting contrary to the public interest. However, two stakeholders we interviewed indicated that it was difficult to use the docket comments process to lodge observations or criticisms without, for example, disclosing their own competitively sensitive information, and absent information on the implementation of immunized alliances. Further, the information available on dockets does not provide congressional policymakers with readily available information on the findings of DOT's many ongoing monitoring activities. During the approval process, DOT publishes key aspects of its analytic findings in show-cause and final orders to the public docket. These documents provide insights into the basis for DOT's decisions. DOT could periodically provide information on the effects of immunities, based on its monitoring activities, on the docket or through other mechanisms, such as public reports or confidential reports to Congress. This information could provide greater transparency and be useful in considering changes in DOT's authority to grant antitrust immunity, an authority the Congress and others have considered at various points.
With more information about DOT's monitoring activities and findings, policymakers, stakeholders, and the public would have an improved understanding of the competitive effects of immunities.

Conclusions

As U.S. and foreign air carriers have pursued more integrated forms of cooperation through international air alliances, DOT has granted American Airlines, Delta Air Lines, and United Airlines antitrust immunity for agreements with their major foreign partners with the expectation that the immunities would yield public benefits. Cooperation between international air carriers can lead to certain benefits for consumers, and immunizing such cooperation from antitrust laws may yield additional benefits. DOT's review of requests for immunity and oversight of immunized agreements are important to ensuring robust competition and, thus, consumer benefits in the marketplace. DOT's ongoing monitoring pays significant attention to whether and how grants of immunity affect consumers. However, DOT generally has not reported on its monitoring activities and the market outcomes of immunities. As the authority responsible for granting antitrust immunity, DOT holds a unique responsibility for reporting on these effects. Per internal control standards, the department's responsibilities extend to communicating information to key stakeholders about the effects of immunities, based on DOT's monitoring activities. DOT must balance providing information to policymakers and the public with statutory requirements that protect proprietary information from disclosure. DOT rightly keeps information on the status of cooperation under immunized agreements confidential. However, the market outcomes of immunities are not proprietary, and DOT could publicly report on them.
Such reports feasibly could include DOT's views on whether the prospective benefits projected at the time of the immunities' approval have been realized and whether the department's remedies have been implemented by immunized carriers and have had the effects DOT expected. As with DOT's current practice of periodically updating the summary document on immunities, DOT could issue such reports at a time interval it determines appropriate. Doing so would improve transparency and provide the public with improved information on the effects of antitrust immunities on consumers.

Recommendation for Executive Action

The Director of DOT's Office of Aviation Analysis should provide periodic external reporting, at a time interval DOT determines appropriate, to the public and policymakers on the effects of antitrust immunity—based on the range of monitoring activities undertaken by DOT—including whether grants of immunity have achieved anticipated benefits and the status of remedies, such as airport slot divestitures, imposed as part of DOT's approval. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to DOT and the Department of Justice for review and comment. We received written comments from DOT, which are reproduced in appendix II and summarized below. In an email, the Department of Justice told us it had no comments on the draft report. DOT and the Department of Justice also separately provided technical comments, which we incorporated as appropriate. In its written comments, DOT partially concurred with the recommendation. More specifically, DOT stated it will provide additional public information about the status of its monitoring activities and remedies, but it did not agree to report publicly on its findings about whether grants of immunity have achieved anticipated benefits.
As discussed below, after evaluating the concerns that DOT raised, we continue to believe that periodically reporting on the effects of antitrust immunities would improve transparency and accountability. In its written response, DOT stated that releasing any materials beyond what it already releases could have a chilling effect, not just on competition, by revealing proprietary information and insight into the real-time commercial strategies of a particular alliance, but also on the carriers' willingness to share the detailed and sensitive information with DOT that is necessary to conduct oversight. We disagree with DOT's assertion that reporting on the effects of immunities would have a chilling effect on competition and on the willingness of airlines to share information with DOT. Our report explains that DOT is prohibited from releasing proprietary information to the public, and we expressly called on DOT to balance protecting this information with making appropriate information available to policymakers and the public. Moreover, contrary to DOT's implication, we are not recommending DOT release the information DOT reviews during the annual reporting process, such as alliances' revenue management and competitive strategies. Instead, the recommendation calls for DOT to report on the market effects of immunity relative to the anticipated benefits cited in DOT's approvals of antitrust immunity and the status of remedies. As we noted in the draft report, these include trends in consumer fares, schedule offerings, and the like that DOT could report on without relying on proprietary information. DOT also stated that it must balance the importance of transparency with its statutory obligations to adjudicate each request for antitrust immunity fairly.
Further, it stated that making such findings independently from the decision-making process in dockets with pending matters raises issues with prejudgment and ex parte communications, and is administratively unworkable. Doing so for cases that are not pending also raises issues of prejudice and prejudgment of “issues that are likely to be raised in future cases involving amendment of the alliance agreements (e.g., when membership changes).” We agree that DOT’s role as an impartial adjudicator is critical. We do not agree with DOT’s assertion that making public its assessment of the effects of immunities that have been granted would jeopardize its impartiality, because DOT could report this information and still consider each case based on its particular facts and circumstances. Further, the recommendation provides DOT with flexibility on how, when, and exactly what to report on that should allow DOT to avoid any prohibited ex parte communication. DOT described existing activities it believes maintain transparency for the public and ensure an ability for interested parties to seek review on the record of previously granted authorities. These activities include DOT’s public dissemination of passenger ticket and schedule data and the publication of DOT’s own orders that summarize departmental assessments of the state of competition as well as its immunity decisions. We note that our draft report described these activities in detail and recognized the overall transparency of DOT’s application review process. Nonetheless, we maintain that these activities do not provide regular or reliable information on the actual effects of antitrust immunities, based on DOT’s monitoring activities, and that DOT could do more to increase transparency through external reporting on these matters. For example, DOT’s provision of data to the public does not diminish the value of DOT providing its own independent reporting on whether expected consumer benefits, in fact, have materialized. 
Likewise, DOT’s published orders on specific immunities come at time intervals largely determined by the applicants and, naturally, when reviewing these applications, DOT’s competitive analysis focuses only on those markets relevant to the application at hand. More intentional reporting on the effects of immunity from DOT could address these shortcomings of existing activities. In other comments that were not included in DOT’s letter, DOT questioned the applicability of internal control standards to its role in monitoring grants of antitrust immunity. The principle of internal control we applied calls on management to externally communicate quality information that helps the agency achieve its objectives and manage risks. As we stated in the report, such communication can help program managers achieve desired results and adapt to shifting environments, which is relevant to DOT’s responsibility in this area. Ultimately, the recommendation, in full, aims to improve transparency regarding the effects of antitrust immunity. Providing external stakeholders with additional information on DOT’s monitoring activities, as DOT agrees to do, should enhance confidence that DOT is undertaking oversight activities. Providing information on whether grants of immunity have achieved anticipated benefits will further improve transparency and provide the public and Congress with useful information to inform policymaking in the future. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, the Attorney General, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Listing of Grants of Antitrust Immunity (grants grouped by global alliance, including Star Alliance and others)
Appendix II: Comments from the Department of Transportation
Appendix III: GAO Contact and Acknowledgments
Staff Acknowledgments: In addition to the individual named above, Heather MacLeod (Assistant Director); John Stambaugh (Analyst in Charge); Friendly Vang-Johnson; Jim Geibel; Amy Suntoke; Delwen Jones; Amy Abramowitz; and David Hooper made key contributions to this report.
Why GAO Did This Study Each year, millions of passengers travel internationally by plane. Many of these passengers are served by U.S. and foreign air carriers that have formed alliances to coordinate and integrate their networks. With antitrust immunity provided by DOT, airline alliances pursue a wide range of cooperative activities as outlined in joint venture agreements between the airlines. While this cooperation is meant to provide consumers with better services, it could also affect the extent of airline competition. GAO was asked to review consumer issues related to immunized international air alliances. This report (1) describes how DOT's review of antitrust immunity applications considers the potential effects on consumers and (2) evaluates how DOT monitors approved grants of antitrust immunity. GAO analyzed DOT's antitrust immunity proceedings and interviewed officials from DOT and the Department of Justice, as well as a nongeneralizable selection of 13 stakeholders, including consumer organizations and domestic air carriers with and without antitrust immunity. What GAO Found Potential effects on consumers are included in the analyses the Department of Transportation (DOT) conducts when reviewing international air carriers' requests for antitrust immunity. If granted, this immunity allows the airlines to engage in certain cooperative activities, such as coordinating prices and schedules, without risk of violating U.S. antitrust laws (see figure). DOT's analyses examine: The potential competitive effect of the proposed cooperative agreement in terms of its effects on relevant markets, on changes in the number of competitors and market shares, and on market entry. The potential for the close integration of carriers to create public benefits, such as lower consumer prices or expanded service offerings.
Such analyses involve DOT staff's reviewing an array of data, documents, and reports filed in a public docket by carriers and interested parties and, ultimately, making a decision based on their assessment of the application. DOT has premised its decisions to grant immunity on the expectation that consumer benefits flow from high levels of integration of critical business functions between carriers. To date, DOT has granted antitrust immunity 31 times, with 23 grants currently in effect, which cover agreements made among carriers in each of the three major international air alliances. DOT has rejected three applications due to concerns about potential anticompetitive harm or insufficient public benefits for consumers. Stakeholders GAO interviewed generally agreed that DOT's decisions were transparent, but some disagreed on the potential benefits of immunity for consumers. DOT takes multiple steps to monitor alliances and understand the effects of immunity. Since 2009, DOT has required all transatlantic and transpacific partnerships to submit annual reports on the status of their immunized agreement. Additionally, DOT recently commissioned an empirical evaluation of immunities' effects and is currently reviewing the findings. However, DOT does not externally report information on the effects of granted immunities to Congress, industry stakeholders, and the public. As a result, these external entities are unable to determine what, if any, steps DOT is taking to ensure that grants of antitrust immunity remain in the public interest. Further, without additional transparency and information on DOT's findings on the effects of immunities, external entities do not know if immunized alliances have delivered the expected consumer benefits that DOT used as a basis to approve the carriers' request for antitrust immunity. 
What GAO Recommends GAO recommends that DOT externally report to policymakers and the public on the effects of antitrust immunity, based on DOT's monitoring activities. DOT agreed to provide public information on its monitoring, but not to report on the effects of antitrust immunity. GAO continues to believe its recommendation, in full, is valid as discussed further in the report.
gao_GAO-20-471T
Background DOD’s policy is to ensure that eligible personnel and their families have access to affordable, quality housing facilities and services consistent with grade and dependent status, and that the housing generally reflects contemporary community living standards. From the inception of MHPI, the military departments were provided with various authorities to obtain private-sector financing and management to repair, renovate, construct, and operate military housing in the United States and its territories. Through these authorities, the military departments have entered into a series of agreements with private partners to provide housing to servicemembers and their families. The military departments have flexibility in how they structure their privatized housing projects, but typically the military departments lease land to private developers for 50-year terms and convey existing housing located on the leased land to the developer for the duration of the lease. The developer then becomes responsible for renovating and constructing new housing and for the daily management of these housing units. At the end of fiscal year 2017, 14 private partners were responsible for 79 privatized military family housing projects—34 for the Army, 32 for the Air Force, and 13 for the Navy and the Marine Corps—in the United States, each of which includes housing at one or more military installations. Each privatized housing project is a separate and distinct entity governed by a series of legal agreements that are specific to that project. However, there are some common elements in how projects invest and use funds. Every project takes in revenue, which consists mostly of rent payments. Projects then pay for operating expenses, including administrative costs, day-to-day maintenance, and utilities, among other things. After that, projects generally allocate funds for taxes and insurance, followed by debt payments.
In the typical funding structure for a privatized housing project, once debt payments are made, funds are allocated to accounts that fund scheduled maintenance, such as repair and replacement of items like roofs, heating and cooling systems, and infrastructure. After that, funds are allocated to a series of management incentive fees, such as the property management fee. Finally, the project divides these remaining funds according to a fixed percentage between accounts that (1) fund major renovations and rebuilds and (2) are provided to the developer. The Deputy Assistant Secretary of Defense for Facilities Management, under the authority, direction, and control of the Assistant Secretary of Defense for Sustainment, is responsible for all matters related to MHPI and is the program manager for all DOD housing, whether DOD-owned, DOD-leased, or privatized. In this capacity, the Deputy Assistant Secretary is to provide both guidance and general procedures related to military housing privatization, as well as required annual reports to Congress on the status of privatized military housing projects. However, it is the responsibility of the military departments to execute and manage privatized housing projects, including conducting financial management and monitoring their portfolio of projects. Each military department has issued guidance that outlines its responsibilities for privatized housing, such as which offices are responsible for overseeing privatized housing projects. DOD Conducts Some Oversight of the Condition of Privatized Housing, but Efforts Are Limited in Key Areas In our draft report, currently with DOD for review and comment, we found that each military department conducts a range of oversight activities—some more extensive than others—for its privatized housing projects. For example, among other things, military departments review sample work order requests and inspect housing during the change-of-occupancy process.
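The funding waterfall described above can be expressed as a simple ordered allocation. The sketch below is illustrative only: the tier names, dollar amounts, and the renovation/developer split are hypothetical assumptions, since the actual order, amounts, and fixed percentages are set by each project's business agreement.

```python
# Hedged sketch of a typical privatized-housing funding waterfall.
# All figures and the 60/40 residual split are hypothetical, not drawn
# from any actual project's business agreement.

def allocate_project_funds(revenue, operating_expenses, taxes_and_insurance,
                           debt_payments, maintenance_reserve, incentive_fees,
                           renovation_share=0.6):
    """Walk revenue through the waterfall in order, recording how much
    lands in each tier; later tiers go unfunded once cash runs out."""
    remaining = revenue
    allocations = {}
    for tier, amount in [
        ("operating_expenses", operating_expenses),
        ("taxes_and_insurance", taxes_and_insurance),
        ("debt_payments", debt_payments),
        ("maintenance_reserve", maintenance_reserve),
        ("incentive_fees", incentive_fees),
    ]:
        paid = min(remaining, amount)
        allocations[tier] = paid
        remaining -= paid
    # Final tier: divide residual funds by a fixed percentage between
    # the major renovation/rebuild account and the developer.
    allocations["renovation_account"] = remaining * renovation_share
    allocations["developer"] = remaining * (1 - renovation_share)
    return allocations

funds = allocate_project_funds(
    revenue=10_000_000, operating_expenses=4_000_000,
    taxes_and_insurance=500_000, debt_payments=3_000_000,
    maintenance_reserve=1_000_000, incentive_fees=500_000)
```

Under these hypothetical figures, $1 million remains after the incentive fees and is split between the renovation account and the developer, mirroring the fixed-percentage division described above.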
DOD guidance states that because privatization creates a long-term governmental interest in privatized housing, it is essential that projects be attentively monitored. Through its guidance, DOD delegates oversight responsibility of the individual privatized housing projects to each of the military departments. In our draft report, we noted that OSD and the military departments’ oversight efforts have been limited in the following key areas. Specifically, we found that (1) the scope of oversight of the physical condition of privatized housing has been limited; (2) performance metrics focused on quality of maintenance and resident satisfaction do not accurately reflect private partner performance related to the condition of privatized housing; (3) there is a lack of reliable or consistent data on the condition of privatized housing; and (4) past DOD reports to Congress on resident satisfaction are unreliable due to the inconsistent handling and calculation of the data and therefore may be misleading. Military Departments Conduct Some Oversight of the Physical Condition of Privatized Housing, but the Scope of Efforts Is Limited DOD delegates oversight responsibilities of the individual privatized housing projects to each of the military departments, and each military department has subsequently issued guidance outlining oversight roles and responsibilities. Military department oversight activities generally fall into two categories—(1) daily oversight of management and operations and (2) periodic reviews of compliance with each project’s business agreements. Daily oversight of management and operations. Daily oversight of a project’s management and operations is to be conducted by each installation’s military housing office.
Military housing officials told us that activities to monitor the physical condition of housing units generally include reviewing sample work order requests, following up with a sample of residents to check on their experience with recently completed work, and inspecting housing units during the change-of-occupancy process. As we noted in our draft report, the implementation and scope of these activities vary and can be limited. For example, during our site visits conducted from June through August 2019, we observed that the rate of inspections of homes following change-of-occupancy maintenance varied at the installations we visited. Military housing office officials at one Air Force installation told us that they inspect 100 percent of homes that have completed change-of-occupancy maintenance, while officials from a different Air Force installation stated they inspect 10 to 20 percent of these homes. Military department officials told us that in spring 2019, each department conducted a “100-percent” review of privatized housing by directing installation commanders to contact all residents of privatized housing and offering a visual inspection of their privatized housing unit. In addition, in March 2019 the Army issued an order directing military housing office officials to inspect 100 percent of homes where change-of-occupancy maintenance has been completed. Officials from Army installations we visited noted that this was an increase from previous practices, and for one installation was a change in practice from only conducting inspections during the move-out process, which occurs prior to change-of-occupancy maintenance. Similarly, in November 2019, Air Force officials told us they were moving to a 100 percent inspection policy. Periodic reviews of compliance with each project’s business agreements.
Periodic reviews of compliance with a project’s business agreements are a joint effort between the local military housing office, the private partners, military department installation commands, and other echelons of command. These reviews can include neighborhood tours to view project amenities such as community centers, playgrounds, and pools, all of which are owned, maintained, and operated by the private partner companies, as well as exteriors of housing units. However, our draft report showed these annual reviews have been narrow in the scope of their assessment of the physical condition of the housing units, as interior walk-throughs were, at times, focused on just a few homes at each installation. According to military department officials, each department has completed initiatives and is undertaking initiatives to revise guidance and standardize daily oversight activities in an effort to provide consistent oversight across projects and installations, and to increase the focus on the physical condition of housing. In addition, the military departments have initiatives to increase staffing levels, improve training for military housing office officials, and ensure that military department housing officials have independent access to work order data, to strengthen their oversight activities. However, each military department is working to implement service-specific initiatives with only limited guidance from OSD on the level of oversight expected of the services as it relates to the condition of the housing. Specifically, OSD guidance is focused on the oversight of the implementation of projects, the construction of new housing units, and project financial monitoring. The guidance stipulates that after privatized housing projects are awarded, monitoring should include descriptions of deal structure and strategies for project monitoring. 
In contrast, OSD guidance for military-owned housing provides clearly defined objectives to the military departments for oversight, including the physical condition of the homes. Unless OSD updates its guidance on the oversight of privatized housing with objectives for overseeing the physical condition of housing units, it cannot be assured that the military departments’ oversight activities will be sustained over time or be sufficiently consistent across projects, raising the risk that private partners may not provide adequate quality housing. DOD Uses Several Metrics to Monitor Private Partner Performance, but the Indicators Underlying Those Metrics Do Not Provide Meaningful Information on the Condition of Privatized Housing The military departments each use a range of project-specific performance metrics to monitor private partner performance, but as we note in our draft report, the metrics designed to focus on resident satisfaction and on the quality of the maintenance conducted on housing units do not provide meaningful information or reflect the actual condition of the housing units. Most, but not all, of the private partners are eligible to receive performance incentive fees based on generally meeting the performance metrics established in each individual project’s business agreement. Private partner performance is commonly measured through four key metrics—resident satisfaction, maintenance management, project safety, and financial management. To determine how well the private partners are performing under the metrics, military housing office officials told us they rely on a range of specific indicators established in the project business agreements. However, the indicators themselves do not provide meaningful information on the private partner’s performance in maintaining quality housing units. For example, we identified the following in our draft report: Maintenance management. 
One indicator of performance of maintenance management that is regularly included in project business agreements measures how often the property manager’s response time to work orders meets required timeframes established in the project’s business agreements. While this indicator measures the timeliness of the private partner’s response, it does not measure or take into account the quality of the work that was conducted or whether the resident’s issue was fully addressed. As such, a property manager may fully meet the metric for maintenance management even if a given repair has not been adequately completed. Residents in 13 of our 15 focus groups noted that they typically have had to submit multiple work order requests before an individual maintenance issue has been fully addressed. Some projects include indicators that aim to more directly measure quality, such as the number of work orders placed during the first 5 business days of residency, which may indicate the extent to which all of the change-of-occupancy maintenance was completed. Resident satisfaction. One example of an indicator of resident satisfaction is whether a project has met target occupancy rates established in the project’s business agreements. An OSD official and private partner representatives told us they use occupancy as an indicator of satisfaction based on the assumption that residents would move if they were dissatisfied with their home’s condition. However, based on our focus groups, this may not be a reliable assumption. Although most residents are not required to live in military housing, residents in each of our 15 focus groups indicated a variety of reasons for choosing to live in privatized housing, many of which did not have to do with their satisfaction with the quality or condition of their homes. 
For example, residents in our focus groups cited other factors influencing their decision to live in privatized housing, such as living in close proximity to military medical or educational services for children or other family members that are part of the military’s Exceptional Family Member Program, a lack of safe and affordable housing in the surrounding community, and access to quality schools. OSD and military department officials have recognized that the current indicators for measuring performance do not consistently focus on or prioritize the private partners’ performance with maintaining housing units and ensuring resident satisfaction. For example, Army officials told us they are no longer using occupancy rates as an indicator of resident satisfaction and have taken steps to standardize performance indicators across all Army projects, while still allowing for flexibility at the installation level to modify the weight of indicators to provide incentives reflective of the specific needs of the installation. Limitations to the current indicators may hinder the military departments’ ability to accurately determine private partner performance. However, OSD and military department officials told us they have not yet reevaluated the specific indicators used to determine whether a private partner has met a specific metric because doing so will require negotiation with each of the private partners for each project. Nonetheless, without reviewing the specific indicators used to award performance incentives, OSD and the military departments do not have assurance that the information the military departments are using to award these incentives reflects the actual condition of the housing. 
DOD and Private Partners Collect Maintenance Data on Privatized Housing, but These Data Are Not Captured Reliably or Consistently for Use in Ongoing Monitoring of Housing Units The housing projects’ business agreements typically include a requirement for the private partner to maintain a records management system to record, among other things, maintenance work requested and conducted on each housing unit. According to private partner officials, each company uses commercial property management software platforms for activities such as initiating maintenance work orders and dispatching maintenance technicians. Some private partner representatives stated that while data from the work order tracking systems are primarily used to prioritize and triage maintenance work, the data were never intended to monitor the overall condition of privatized housing units. While data from these work order tracking systems may be useful for point-in-time assessments of work order volume at a given installation, military department officials told us that efforts are underway to monitor work order data to increase the military departments’ oversight and the accountability of the private partners for providing quality housing. However, as we noted in our draft report, we found that these data are not captured reliably or consistently for use in the ongoing monitoring of the condition of privatized housing units. We received and reviewed data from each of the 14 private partners’ work order tracking systems covering each of the 79 privatized family housing projects. 
Based on our review of these data and discussions with private partner representatives for our draft report, we found two primary factors that would limit the reliability or consistency of using these data for ongoing monitoring of the condition of privatized housing units over time—(1) inconsistent use of terminology in work order records and (2) differing practices for opening and closing work orders: Inconsistent use of terminology. Based on our review of the data provided by the private partners and discussions with private partner officials, we noted cases where work orders were inconsistently entered into the work order tracking systems with respect to two primary factors—(1) how the request is described by the resident or interpreted by the official entering the data, which can differ for each work order, and (2) the existing range of pre-established service category options in the private partner’s work order tracking system, which differ among the partners. Differing practices for opening and closing work orders. At some installations we visited, private partners noted changes in practices for opening and closing work orders, limiting the usefulness of the data in monitoring the status of work orders over time and thus the condition of privatized housing. In addition, we identified other anomalies in work order data from each of the 14 partners. For example, we identified instances of, among other things, duplicate work orders, work orders with completion dates prior to the dates that a resident had submitted the work order, and work orders still listed as in-progress for more than 18 months. According to military department officials, efforts to review data from the private partners’ work order tracking systems have increased, and military department officials told us they have found similar limitations. 
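The anomaly types described above (duplicate work orders, completion dates preceding submission dates, and orders in progress for more than 18 months) lend themselves to simple automated checks. The sketch below is a hedged illustration only: the record fields, category names, and sample records are assumptions, since each partner's commercial tracking system uses its own schema.

```python
# Illustrative anomaly checks over hypothetical work order records.
# Field names ("unit", "opened", "closed", "status") are assumptions.
from datetime import date

work_orders = [
    {"id": 1, "unit": "A-101", "category": "HVAC", "opened": date(2019, 6, 1),
     "closed": date(2019, 6, 3), "status": "closed"},
    {"id": 2, "unit": "A-101", "category": "HVAC", "opened": date(2019, 6, 1),
     "closed": date(2019, 6, 3), "status": "closed"},      # duplicate of id 1
    {"id": 3, "unit": "B-202", "category": "Plumbing", "opened": date(2019, 6, 5),
     "closed": date(2019, 6, 2), "status": "closed"},      # closed before opened
    {"id": 4, "unit": "C-303", "category": "Roof", "opened": date(2017, 1, 10),
     "closed": None, "status": "in_progress"},             # stale in-progress
]

def flag_anomalies(orders, as_of, stale_after_days=548):  # ~18 months
    """Return (id, reason) pairs for records matching the three
    anomaly patterns described in the report."""
    seen, flags = set(), []
    for wo in orders:
        key = (wo["unit"], wo["category"], wo["opened"])
        if key in seen:
            flags.append((wo["id"], "possible duplicate"))
        seen.add(key)
        if wo["closed"] and wo["closed"] < wo["opened"]:
            flags.append((wo["id"], "completion date precedes submission"))
        if wo["status"] == "in_progress" and \
           (as_of - wo["opened"]).days > stale_after_days:
            flags.append((wo["id"], "in progress more than 18 months"))
    return flags

flags = flag_anomalies(work_orders, as_of=date(2019, 8, 1))
```

Checks of this kind would still depend on the minimum data requirements and consistent terminology that, as discussed next, had not yet been established.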
However, neither OSD nor the military departments have identified minimum data requirements, established consistent terminology or practices for data collection, or developed processes for the military departments to validate the work order data collected by the private partners. Without direction from OSD to establish minimum data requirements and consistent terminology or practices for data collection, as well as a requirement for the military departments to validate data, the military departments’ ability to use data from the private partners’ work order tracking systems to monitor the condition of privatized homes over time will remain limited and may vary across projects. DOD Provides Reports to Congress on Resident Satisfaction with Privatized Housing, but Data in These Reports Are Unreliable, Leading to Misleading Results DOD is statutorily required to provide reports to Congress that include, among other things, information about military housing privatization projects’ financial health and performance, and any backlog of maintenance and repairs. These reports have included information on resident satisfaction with privatized housing based on the results of the annual military department satisfaction surveys. As we state in our draft report, we determined that information on resident satisfaction in these reports to Congress on privatized housing has been unreliable and misleading due to (1) variances in the data the military departments collect and provide to OSD and (2) OSD’s calculation and presentation of the data. In May 2019, OSD issued its report for fiscal year 2017, which stated that overall resident satisfaction for calendar year 2017 was 87 percent. For OSD’s fiscal year 2017 report, the military departments provided data on resident satisfaction based on information from the annual resident satisfaction surveys.
Specifically, OSD’s instructions to the military departments required the military departments to report satisfaction based on resident responses to the question that asks: “Would you recommend privatized housing,” with results indicating how many tenants responded “yes,” “no,” or “don’t know.” However, the military departments’ approaches for collecting data in their annual resident satisfaction surveys vary, which limits their ability to assess whether residents would recommend privatized housing. Instead of asking whether residents would recommend privatized housing, the military departments’ annual resident satisfaction survey asks residents the following: “How much do you agree or disagree with the following statement, ‘I would recommend this community to others.’” A resident’s satisfaction with his or her community and inclination to recommend it to others may not be reflective of satisfaction with either the privatized housing unit or privatized housing in general. Residents are then provided the following response categories on a scale of five to zero: (5) strongly agree, (4) agree, (3) neither agree nor disagree, (2) disagree, (1) strongly disagree, and (0) not applicable, no opinion, don’t know, or no answer. Through our analysis, we have identified variances in the methods that each of the military departments use to translate the residents’ responses into the “yes,” “no,” or “don’t know” categories. The variances in how the military departments calculate “yes,” “no,” or “don’t know” resulted in inconsistencies in how resident satisfaction is ultimately reported to Congress. 
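The effect of these mapping differences can be shown with a short calculation. The sketch below is illustrative only: the response counts are hypothetical, and the treatment of categories 5/4 as "yes" and 2/1 as "no" is an assumption; the difference modeled is counting category 3 ("neither agree nor disagree") either as "don't know" or as "yes."

```python
# Illustrative: same survey responses, different reported satisfaction,
# depending solely on how neutral (category 3) responses are mapped.

def satisfaction_rate(responses, treat_neutral_as):
    """Map 5..0 scale responses to yes/no/don't know and return the
    percent of all responses reported as 'yes'."""
    mapping = {5: "yes", 4: "yes", 3: treat_neutral_as,
               2: "no", 1: "no", 0: "don't know"}
    answers = [mapping[r] for r in responses]
    return 100 * answers.count("yes") / len(answers)

# 100 hypothetical residents: 30 strongly agree, 20 agree, 25 neutral,
# 15 disagree, 10 strongly disagree.
responses = [5] * 30 + [4] * 20 + [3] * 25 + [2] * 15 + [1] * 10

rate_neutral_as_dont_know = satisfaction_rate(responses, "don't know")
rate_neutral_as_yes = satisfaction_rate(responses, "yes")
```

With these hypothetical counts, the same responses yield a reported satisfaction rate 25 percentage points higher when neutral responses are counted as "yes," illustrating how the departments' differing translations can produce inconsistent figures for Congress.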
For example, for the fiscal year 2017 report, Navy and Army officials told us they counted responses reported in category 3 (neither agree nor disagree) as “don’t know.” For the same time period, however, Air Force officials told us they counted responses in category 3 (neither agree nor disagree) as “yes.” If the Air Force had not counted category 3 as “yes,” reported resident satisfaction rates would have been lower. For example, for one Air Force installation, if officials had not counted responses in category 3 as “yes,” the resident satisfaction rate for newly constructed units would have been more than 20 percent lower than what was reported. In our draft report, we also identified instances of errors and inaccuracies in how OSD calculates these data and reports on resident satisfaction to Congress. Specifically, we found missing data points and incorrect formulas, among other errors, in OSD’s calculation of the data submitted by the military departments for OSD’s fiscal year 2017 report to Congress. For example: The formula used by OSD to calculate overall resident satisfaction for the fiscal year 2017 report did not include data for several projects, including for four Army projects that, as of September 30, 2017, accounted for over 18 percent of the Army’s total housing inventory. For one Air Force project, OSD reported identical resident satisfaction data for the fiscal year 2015, 2016, and 2017 reports, despite the fact that Air Force officials had noted in their submissions to OSD that the resident satisfaction data were from the annual resident satisfaction survey conducted in December 2013. In our draft report, we also found that presentation of data in OSD’s report to Congress may be misleading because OSD did not explain the methodology it used to calculate the overall resident satisfaction percentage or include caveats to explain limitations to the data presented. 
Specifically, OSD did not include information on overall response rates to the annual satisfaction survey for each military department, nor did it include response rates by project. Low response rates can create the potential for bias in survey results. For example, in the report for fiscal year 2017, OSD reported that 25 percent of residents living in renovated housing units for one privatized housing project were satisfied with their housing, but we found that only four residents had provided responses to this question. Thus, only one resident reported being satisfied. In addition, we found that OSD did not provide an explanation in the report for why five projects were listed as “not applicable.” According to OSD officials, this error was a quality control issue that they plan to address; however, no quality control plans are in development at this time. The National Defense Authorization Act for Fiscal Year 2020 (fiscal year 2020 NDAA) includes a provision requiring each military installation to use the same satisfaction survey for tenants of military housing—including privatized military housing—the results of which are not to be shared with private partners until reviewed by DOD. Until OSD makes changes to the data collection and calculation efforts that make up the department’s report to Congress and provides explanations of the data in the reports, OSD will not be able to provide Congress with an accurate picture of resident satisfaction with privatized housing. Military Housing Offices Have Not Effectively Communicated Their Role as a Resource for Servicemembers Experiencing Challenges with Privatized Housing Military housing office officials, located at each installation, are available to provide resources to servicemembers experiencing challenges with their privatized housing, among other services.
However, as we stated in our draft report, we found that these offices have not always clearly and systematically communicated this role to residents of privatized housing. The military housing office is to provide new residents with information on their local housing options, to include referral services for housing options. According to some military housing office officials, the military housing office then works with the private partner to identify the eligibility and type of home the servicemember qualifies for, if the resident chooses to live in privatized housing. According to some residents we spoke with in one of our focus groups, beyond this initial interaction, military housing office officials generally do not interact with residents on a regular basis. Additionally, residents who participated in our focus groups noted they were sometimes confused about the military housing offices’ roles and responsibilities with regard to the maintenance of their home; there was a perception that the military housing office was not working independently of the partner in the residents’ best interest; or they did not know the military housing office existed. The military department oversight agencies have also found that the military departments have not clearly and systematically communicated their roles to residents, and resident confusion and a lack of awareness regarding the role of the military housing offices is an issue. In April 2019, the Air Force Inspector General reported that less than half of the residents interviewed used their military housing office to resolve complaints, and at some installations officials visited, many residents did not know the military housing office had an oversight role. Similarly, in May 2019, the Army Inspector General reported to the Secretary of the Army that at 82 percent of Army installations with privatized housing, residents did not know how to escalate issues to either the private partner or the Army housing office.
Additionally, the Army Inspector General reported that installation command teams and staff cited multiple circumstances where military housing offices and tenant advocacy roles and responsibilities were unclear. Further, some military housing office officials with whom we spoke during our site visits acknowledged the gap in resident awareness regarding the existence and purpose of the military housing office. Some military housing officials also noted that some residents are unaware of the difference between the military housing office and the private partner office, due in part to their physical co-location and unclear building signage. Each military department has issued information that establishes that its housing offices can assist in the resident dispute resolution process. Specifically, if servicemembers are experiencing a dispute with a private partner, military department guidance establishes varying roles for their respective military housing office officials. For example, Army policy states that each installation should have an official tasked with supporting servicemembers regarding resident issues that cannot be resolved by the private property manager. This individual is also responsible for resolving every resident complaint, and the military housing office, if required, can request mediation by the garrison commander. OSD has recognized that the military departments’ communication with residents about their role as a resource for them has been limited. In February 2019, the Assistant Secretary of Defense for Sustainment testified before Congress that a way forward in addressing resident concerns would require focus in three key areas: communication, engagement, and responsiveness.
Some military housing office officials told us they have taken steps to increase resident awareness, such as increasing the advertising of the military housing office’s role and contact information, conducting town hall meetings, and rebranding their military housing offices to differentiate them from the private partners. For example, a Marine Corps housing office official stated that the housing office established a document, which is distributed to residents by the private partner, informing residents of housing office contact information and the service’s three-step dispute resolution process, but efforts have not been standardized across all projects. Moving forward, having plans in place to clearly and systematically communicate the difference between the military housing office and the private partners—including the military departments’ roles, responsibilities, and military housing office locations and contact information—will better position the military departments to achieve the intended objectives of their initiatives aimed at improving residents’ experience.

DOD and Private Partners Are Implementing Initiatives to Improve Privatized Housing, but May Face Challenges

OSD, the military departments, and the private partners have identified and begun collaborating on a series of initiatives aimed at improving residents’ experiences with privatized housing, but as we state in our draft report currently with DOD for review and comment, these efforts face challenges. In addition, in the fiscal year 2020 NDAA, Congress established several requirements regarding privatized military housing reform. Several of the statutory requirements provide specific provisions that DOD will need to incorporate into its development and implementation of existing MHPI initiatives, as well as additional requirements aimed at improving oversight of privatized housing. In our draft report, we discuss several of these key initiatives, including the following.
Development of a resident bill of rights. DOD has been working to develop a resident bill of rights intended to provide clarity to residents on their rights and responsibilities while living in privatized military housing. The fiscal year 2020 NDAA includes specific requirements to be included in the bill of rights, for example, ensuring residents have the right to have their basic allowance for housing payments segregated and held in escrow, with approval of a designated commander, and not used by the property owner, property manager, or landlord pending completion of the dispute resolution process. In January 2020, DOD officials told us that they were in the process of updating their existing resident bill of rights to include these provisions. In February 2020, the Secretary of Defense signed the resident bill of rights, noting that the rights would be available to residents on May 1, 2020.

Implementation of a common (enterprise) dispute adjudication process that will apply to all projects. The military departments and private partners have been working to develop a common dispute resolution process that would apply to all privatized housing projects. The fiscal year 2020 NDAA includes requirements reinforcing this initiative, specifically stating that the military department Secretary concerned shall implement a standardized formal dispute resolution process to ensure the prompt and fair resolution of disputes between landlords providing housing units and tenants residing in housing units concerning maintenance and repairs, damage claims, rental payments, move-out charges, and such other issues relating to housing units as the Secretary determines appropriate. Additionally, the statute requires that each military department Secretary designate the installation or regional commander in charge of oversight of housing as the deciding authority under the dispute resolution process.
Reviewing MHPI resident satisfaction data collection process and the process by which DOD measures and reports on resident satisfaction data. According to OSD officials, the department is reviewing the process by which it measures and reports resident satisfaction data, and has plans to review the survey questions used to measure resident satisfaction. In line with these planned efforts, the fiscal year 2020 NDAA further requires that DOD’s reports to Congress include additional information, such as the results of resident surveys and other factors related to the condition of privatized housing.

Standardizing performance incentive fee ranges. In October 2019, OSD issued new guidance standardizing the performance incentive fee ranges across the military departments. The fiscal year 2020 NDAA requires that DOD publicly report information regarding the use of performance incentive fees. The statute also requires that DOD take into consideration any decision a commander renders in favor of the tenant in the formal dispute resolution process in determining whether to pay or withhold all or part of any incentive fees for which a private partner may otherwise be eligible under the contract.

In addition to requirements impacting current DOD initiatives, the fiscal year 2020 NDAA included requirements for increased oversight of the physical condition of privatized housing. For example, the statute requires the Secretary of Defense to designate a Chief Housing Officer to oversee housing units, including the creation and standardization of policies and processes regarding housing units. The statute also requires the Secretary of Defense to establish a uniform code of basic standards for privatized military housing, as well as plans to conduct inspections and assessments of the condition of privatized homes. However, both DOD and private partner representatives have cited several challenges that could affect their ability to implement initiatives aimed at improving MHPI.
Specifically:

Timeliness of implementation due to the need to collaborate with and obtain input and agreement from the large number of stakeholders involved in privatized housing. According to DOD officials and private partner representatives, many of the initiatives designed to improve privatized housing not only require agreement between DOD and the private housing partners, but also discussion with and, in some cases, approval by the project bond holders. Because DOD does not have the ability to unilaterally make changes to existing business agreements, this need for stakeholder agreement limits DOD’s control over the implementation timeline of any initiative that requires changes to a project’s business agreement.

The need for more military department staff with targeted expertise. The military departments reduced their involvement in daily privatized military housing operations as part of the overall privatization effort, to include reducing staffing levels at the installations. Military housing office officials at over half of the installations we visited stated that reduced staffing levels impacted their ability to carry out oversight duties, such as work order data analysis and housing inspections. Each of the military departments has plans to increase the military housing office staffing at each installation to allow for enhanced oversight. In particular, according to military department officials, these positions will focus on quality control and quality assurance of the maintenance of privatized homes.

The potential for unintended negative financial impacts on the projects that could outweigh the intended benefits of the initiatives. OSD officials and private partner representatives have expressed concern that some initiatives could result in unintended financial consequences for the housing projects.
For example, increased frequency of change-of-occupancy inspections could result in homes remaining vacant longer than planned and therefore not collecting rent. This could unintentionally impact a project’s cash flow. Some of the private partners noted that the financial impact of unfunded requirements to projects that are already experiencing financial distress could result in even fewer funds available to reinvest in improvements to the current and future physical condition of the homes. Without assessing risks to the financial viability of the MHPI projects associated with the implementation of these initiatives aimed at improving privatized housing, DOD’s efforts to improve the privatized housing program over the long term could be compromised. In summary, as we state in our draft report, we found that while DOD and the private partners have taken steps to address concerns raised about their ability to adequately maintain and oversee the condition of these housing units and provide quality housing for servicemembers, the extent to which the efforts will be sustained and result in improvements remains unclear. Our draft report includes several recommendations to OSD to strengthen oversight of MHPI, such as updating oversight guidance and assessing the risks to the financial viability of housing projects. Our draft report also includes recommendations to the military departments to enhance monitoring of privatized housing projects, such as improving processes used for data collection; reviewing private partner performance; collecting and reporting resident satisfaction data; and communicating with residents. Chairwoman Wasserman Schultz, Ranking Member Carter, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff members have any questions about this testimony, please contact Elizabeth A. Field, Director, Defense Capabilities and Management, at (202) 512-2775 or FieldE1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Kristy Williams (Assistant Director), Tida Reveley (Analyst in Charge), Austin Barvin, Ronnie Bergman, William Carpluk, and Jordan Tibbetts. In addition, key support was provided by Vincent Buquicchio, Juliee Conde-Medina, Mae Jones, Kelly Rubin, Monica Savoy, John Van Schaik, Madeline Welter, and Kelsey Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Congress enacted the Military Housing Privatization Initiative in 1996 to improve the quality of housing for servicemembers. DOD is responsible for general oversight of privatized housing projects. Private-sector developers are responsible for the ownership, construction, renovation, maintenance, and repair of about 99 percent of military housing in the United States. Recent reports of hazards, such as mold and pest infestation, have raised questions about DOD's oversight. This statement summarizes GAO's draft report on privatized housing, which is currently at DOD for review and comment. Specifically, the statement discusses, among other objectives, OSD and the military departments' (1) oversight of privatized military housing and (2) development and implementation of initiatives to improve privatized housing. For its draft report, GAO reviewed policies and guidance; visited a non-generalizable sample of 10 installations representing each military department, among other factors; analyzed work order data; and interviewed DOD officials and private partner representatives.

What GAO Found

The Office of the Secretary of Defense (OSD) and the military departments conduct a range of oversight activities, but some of these activities have been more extensive than others. Specifically, GAO's draft report notes: The military departments conduct some oversight of the physical condition of housing, but some efforts have been limited in scope. Military departments have authority to conduct oversight of the condition of privatized housing; that oversight generally consists of reviewing a sample of work order requests, visual inspections of housing during change of occupancy, and other point-in-time assessments. However, GAO found that these efforts are limited in scope.
For example, annual interior walk-throughs are limited to just a few homes at some installations, which may not comprehensively reflect the condition of the housing units at those installations. Military departments use performance metrics to monitor private partners, but metrics do not provide meaningful information on the condition of housing. OSD has recently issued guidance to ensure consistency in the framework used to measure project performance. However, the specific indicators used to determine if the metrics are being met do not accurately reflect private partner performance related to the condition of the home. For example, a common indicator is how quickly the private partner responded to a work order, not whether the issue was actually addressed. The military departments and private partners collect maintenance data on homes, but these data are not captured reliably or consistently. The Department of Defense (DOD) is expanding its use of work order data to monitor and track the condition of privatized housing. However, based on GAO's analysis of data provided by all 14 private partners, these data cannot reliably be used for ongoing monitoring of privatized housing because of data anomalies and inconsistent business practices in how these data are collected. DOD provides reports to Congress on the status of privatized housing, but some data in these reports are unreliable, leading to misleading results. DOD provides periodic reports to Congress on the status of privatized housing, but reported results on resident satisfaction are unreliable due to variances in the data provided to OSD by the military departments and in how OSD has calculated and reported these data. OSD and the military departments have made progress in developing and implementing a series of initiatives aimed at improving privatized housing. In addition, Congress established several requirements addressing privatization housing reform. 
However, DOD officials and private partner representatives have identified challenges that could affect implementation of these various initiatives. These include concerns that implementation could have unintended negative impacts on the financial viability of the privatized housing projects.

What GAO Recommends

GAO's draft report includes several recommendations, including that DOD take steps to improve housing condition oversight, performance metrics, maintenance data, and resident satisfaction reporting as well as to assess the risk of initiatives on project finances.
Background

IRS began posting information on the internet in the 1990s. In the early 2000s, IRS launched its first two interactive services which allowed taxpayers to (1) check on the status of a refund, and (2) set up a payment plan to pay taxes they may owe over time. Since then, irs.gov has expanded to include other online services, such as personal informational accounts and tax transcript request services. In addition to online services, the irs.gov website contains information on various topics, including forms and publications offered on static web pages. While these static web pages do not provide taxpayers with personalized support or information, taxpayers seeking more targeted information may consult online calculators irs.gov offers (e.g., a tax withholding estimator). Two distinctions between these calculators and the online services our report examines are that taxpayers do not have to establish their identity before using a calculator and the calculators can be used to explore hypothetical tax planning situations. IRS reports that its website, which includes online services, static web pages, and calculators, received more than 600 million visits in fiscal year 2018.

IRS Offices Responsible for Online Services for Individual Taxpayers

With respect to taxpayer services, OLS is tasked with leading IRS’s business transformation efforts related to online services and improving the online experience for taxpayers. To improve online services for individual taxpayers, OLS primarily works with IRS’s relevant business operating divisions—Wage and Investment and Small Business/Self-Employed—which assist individual taxpayers in fulfilling their tax obligations. On the operations support side, Information Technology is responsible for delivering services and solutions related to technology and one of its responsibilities is to support IRS’s online services.
IRS’s Research, Applied Analytics, and Statistics division conducts research related to taxpayer burden, which is defined as the time and money taxpayers spend complying with their tax obligations.

Customer Service and User Experience Requirements

Both Congress and presidential administrations have set the expectation that agencies provide high-quality customer service. Starting in the 1990s, they required agencies to develop plans for improving their services and regularly report on the progress they are making. In recent years, Congress and the executive branch have emphasized the importance of improving online services. In December 2018, Congress passed and the President signed the 21st Century Integrated Digital Experience Act (IDEA Act) that includes requirements for agencies when they are creating or redesigning a website or digital service that is intended to be used by the public. Among these requirements are to design the website or digital service around user needs, with data-driven analysis influencing management and development decisions, using qualitative and quantitative data to determine user goals, needs, and behaviors. The IDEA Act also requires agencies to ensure that any paper form related to serving the public is made available in a digital format by December 2020. In July 2019, the Taxpayer First Act became law. It includes a requirement that the Secretary of the Treasury (or designee) submit a comprehensive customer service strategy to Congress by July 2020 including, among other things, a plan to provide assistance to taxpayers that is designed to meet reasonable taxpayer expectations. This plan is to include online services. The act also requires this customer service strategy to identify metrics and benchmarks for quantitatively measuring progress in implementing it.
Similarly, the administration has established a cross-agency priority goal called “improving customer experience with federal services” intended to improve the usability and reliability of the most important online services, which contains requirements related to IRS’s online services that will be discussed in more detail later in this report. In addition to these expectations for a high quality user experience, the GPRA Modernization Act of 2010 (GPRAMA) requires, among other provisions, strategic plans identifying Treasury’s and other cabinet departments’ and other executive agencies’ most important goals. It also requires annual performance plans that identify specific targets and reports to Congress and the public on results achieved. While GPRAMA is applicable to the department or agency level (e.g., Treasury), we have previously reported that these requirements should serve as leading practices at other organizational levels, such as component agencies, offices, programs, and projects, and are therefore applicable to IRS.

IRS Agreement with Industry to Provide Electronic Tax Preparation and Filing Services

IRS requires taxpayers whose income, filing status, and age fall within specified parameters to file a tax return. Taxpayers have five choices for filing a return: (1) hire a tax practitioner to file a return on their behalf with IRS, which the practitioner generally does electronically; (2) obtain tax preparation and filing services on the internet or download software, which allows for assisted preparation in addition to online filing; (3) file on paper for free; (4) use the Free File program; or (5) seek assistance from IRS’s Volunteer Income Tax Assistance or the Tax Counseling for the Elderly programs in which IRS provides funding to IRS-certified volunteers who meet in person with eligible taxpayers to help them prepare their return and the completed return is filed electronically.
To encourage taxpayers to file electronically, IRS advertises that it will deliver refunds more quickly to those who file electronically than those who file on paper. In 2002, IRS signed a memorandum of understanding (which we will refer to as an agreement) with a consortium of tax preparation companies now known as Free File, Inc. Initially, the participating companies agreed to provide free electronic tax preparation and filing services for eligible taxpayers. In return, IRS stated that it would not offer its own free, online tax return preparation and filing services. This agreement has been periodically renewed, most recently in October 2018, when IRS and Free File, Inc. extended the terms of the agreement to October 2021. This requirement that IRS not offer its own online filing services has remained the same. The income limit for taxpayers to participate has evolved over time. In 2005, IRS and the consortium of tax preparation companies amended the agreement to provide for coverage for 70 percent of taxpayers based on the taxpayers’ adjusted gross income beginning in filing season 2006. They further agreed that while the percentage of taxpayers covered would remain the same throughout the agreement, the income limit for taxpayers would be adjusted each filing season. For the 2020 filing season—which IRS expects will begin in January 2020—taxpayers will be required to have an income below $69,000 and meet other eligibility requirements to use the participating companies’ preparation and filing services. For married taxpayers, the $69,000 threshold applies to their combined income if the two individuals file a joint return. Taxpayers whose income exceeds $69,000 are ineligible to use the participating companies’ software for free, but are allowed to use Free Fillable Forms (FFF), a service provided by one of the Free File, Inc. participating companies each year, with a link on irs.gov. 
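The income-eligibility rule described above can be sketched in a few lines. This is purely illustrative: the $69,000 adjusted-gross-income limit for the 2020 filing season and the combined-income rule for joint filers come from the agreement as described, but the function name and structure are ours, not an actual IRS or Free File, Inc. implementation.

```python
# Illustrative sketch only. The $69,000 limit (2020 filing season) and the
# combined-income rule for joint filers are from the Free File agreement as
# described; the function itself is hypothetical.

FREE_FILE_INCOME_LIMIT_2020 = 69_000

def free_file_eligible_by_income(income, spouse_income=0, joint_return=False):
    """Return True if income falls below the Free File limit.

    For married taxpayers filing a joint return, the limit applies to the
    couple's combined income. Other eligibility requirements, which vary by
    participating company, are not modeled here.
    """
    total = income + spouse_income if joint_return else income
    return total < FREE_FILE_INCOME_LIMIT_2020

# A single filer earning $50,000 qualifies on income; a couple filing jointly
# with $40,000 and $35,000 ($75,000 combined) does not.
```

Taxpayers over the limit are not left without an electronic option; as the next paragraph describes, they may still use Free Fillable Forms.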
FFFs are designed to be the online equivalent of paper forms, on which users can type information into a fillable field corresponding to each line item on the paper form. IRS.gov explains that to use FFFs, users “must know how to do your taxes yourself” and states that “only basic guidance” is provided. After a taxpayer completes the FFFs, the Free File, Inc. participating company that provides this service electronically transmits the return to IRS.

IRS Primarily Offers One-Way Information Services Online and Measures of Taxpayer Experience Are Limited

IRS’s Online Services Are Concentrated in Information and Payment Services

IRS provides 10 online services for individual taxpayers (see table 1). We organized these online services into four categories based on common interactions between individual taxpayers and revenue agencies. IRS officials told us and we verified that all services are accessible from personal computers and nondesktop devices, such as smart phones and tablets. IRS also offers an app for mobile devices which taxpayers can use as a portal for accessing “Where’s My Refund” and for making payments. Usage across all online services in fiscal year 2018 was overwhelmingly concentrated within “Where’s My Refund?” (see figure 1). In 2018, taxpayers completed more than 300 million queries on “Where’s My Refund?”, IRS’s most used online service, attempting to determine when their anticipated refunds would arrive. To put this figure in perspective, IRS processed approximately 150 million individual tax returns and approximately 120 million individual income tax refunds in fiscal year 2018, meaning that some taxpayers are making multiple online inquiries about their refund. IRS.gov directs taxpayers to use this online service to follow up on their refund and suggests that taxpayers only call IRS in certain circumstances. “View Your Account Information” in fiscal year 2018 was IRS’s fifth most utilized online service, but it has experienced recent growth.
Usage more than tripled between fiscal years 2017 and 2018. IRS has continued to add capabilities and make other improvements to “View Your Account Information” since it first launched in 2016, such as links to IRS’s online payment and transcript services. Further, officials told us in September 2019 that they plan to add information about the status of an installment agreement and additional payment history. As of October 2019, the capabilities of “View Your Account Information” focus on providing information to taxpayers about how much money they may owe IRS and payments made. For example, taxpayers who paid their full tax bill for prior tax years may log into “View Your Account Information” and see they have a $0 balance. Similarly, taxpayers who receive only refunds will log on and see a $0 balance, but not information about their refund. Officials noted that even a $0 balance may be of value to taxpayers who want reassurance that they and IRS have a common understanding of their tax situation.

Revenue Agencies in Other Countries and States Have Demonstrated Online Filing and Communication Capabilities

We identified three revenue agencies in other countries that offer online services that IRS does not: the Australian Taxation Office (ATO), New Zealand’s Inland Revenue Department (IRD), and the United Kingdom’s (U.K.) Her Majesty’s Revenue and Customs (HMRC) (see figure 2). In each of the selected countries, we found that taxpayers are offered a single online account that integrates the many different online services each revenue agency offers. As noted above, “View Your Account Information” on irs.gov provides links to other online services and taxpayers do not have to log in again to use those services. However, the remaining online services are not connected to “View Your Account Information” and taxpayers must leave the “View Your Account Information” platform to access those services.
IRS officials told us that over the long term they would like to integrate additional services into “View Your Account Information,” but they explained that they have prioritized the development of new online services over connecting existing services to “View Your Account Information.” Further, IRS officials told us they also want taxpayers to be able to check their refund without having to establish an account. A second difference is that taxpayers in the three countries can complete their filing obligations on the revenue agency’s website. We examined the extent to which contextual differences between the U.S. income tax system and the three countries’ tax systems could enable or inhibit offering electronic filing on the revenue agency’s website. We found that the U.S. income tax system and the three other countries’ tax systems have similar definitions of income, employers withhold income taxes employees owe, and subsidies for certain social goals are channeled through the tax system. All three selected countries offer taxpayers the ability to communicate electronically with agency employees via the revenue agencies’ websites. For example, Australian taxpayers who are working on preparing their return in their account can communicate through an electronic chat with ATO employees about questions they may have, such as regarding deductions and the capital gains tax. In New Zealand, taxpayers can upload documents requested by IRD to their accounts, whereas American taxpayers generally must mail these documents. IRS’s pilots of electronic communication capabilities between taxpayers and its employees will be discussed later in this report. In addition to the three countries reviewed, we also selected three states that have integrated more services into a single online taxpayer account than IRS has done. Additionally, all three states provide taxpayers with two-way secure electronic communication (see figure 3). 
For example, Alabama and California taxpayers can log into their respective accounts for a secure electronic chat. California and New York taxpayers can share documents. We found that two of the states—Alabama and California—offer taxpayers the capability to file their tax return on the revenue agencies’ websites. However, officials in both states told us that few taxpayers have used this option. Alabama officials believed this was because many taxpayers prefer to use the same method to file their state tax return as their federal tax return, and as we have previously discussed, IRS does not currently offer this service. New York officials said they previously offered this service, but decided to stop offering it because they did not believe the benefits were sufficient to justify continuing it. Our review of the revenue agency websites for the states that have income taxes (43 states and the District of Columbia) found that 21 of the state revenue agencies, including Alabama and California, allow taxpayers to file their state tax return on the revenue agencies’ website (see figure 4).

Little Is Known about the Extent to Which IRS’s Online Services Meet Taxpayer Needs

While IRS regularly surveys taxpayers who visit the static pages of irs.gov, the surveys do not provide information on the extent to which all of IRS’s online services meet taxpayer needs. All three of the foreign revenue agencies we reviewed collected survey information that allowed them to more clearly explain the extent to which they believe their online services are meeting taxpayer needs. In addition, IRS has long-running research seeking to estimate the time and money taxpayers spend complying with their tax obligations, but the implications of expanding online services for taxpayer burden have not yet been assessed.
Assessing Taxpayer Experience with irs.gov and “View Your Account Information”

IRS seeks feedback from a randomly selected sample of users of the static pages of its website, and we reviewed an example of feedback IRS collected between February 28, 2019, and March 31, 2019. This feedback mechanism is not designed to measure taxpayers’ experiences with individual online services or the extent to which services meet their needs. The invitation to participate appears on the static pages of irs.gov, not when a taxpayer is logged into an online service. However, this does not necessarily exclude taxpayers using online services from providing feedback. IRS officials explained that the survey allows IRS to capture feedback across irs.gov static pages and related applications, even though it does not provide feedback on any single online service. As a hypothetical example, a taxpayer might start the process of applying for a student loan by logging into IRS’s “Data Retrieval Tool,” then, after completing that task, peruse IRS’s publications on tax benefits for higher education on the static pages of irs.gov, and then be invited to participate. IRS officials stated that one intent of the survey is to measure a taxpayer’s entire experience, including instances when a taxpayer is visiting for multiple reasons. If users go to a static page, they may be invited to participate; however, if they use the online services without visiting a static page, they will not have the opportunity to provide feedback. IRS officials told us that this feedback may provide insight into taxpayers who report coming to irs.gov to do a task, such as obtaining tax records, but then do not successfully complete the task. IRS told us that this information could alert IRS officials to challenges taxpayers may face in locating online services, but officials agreed that this method does not provide specific feedback on individual online services.
IRS officials told us that they would like to combine this survey with surveys focused on specific online services, but face resource constraints. In addition to the survey of users from the static web pages, IRS collects feedback from taxpayers who access “View Your Account Information.” For example, IRS selected a random sample of “View Your Account Information” users between January 2019 and March 2019 who successfully logged into their accounts. OLS officials explained that IRS updated this survey to ask four questions that Office of Management and Budget (OMB) guidance directs agencies to use in assessing their customers’ experiences with the agency’s “highest-impact customer journeys.” IRS asked users:

1. If the online tax account tool met their needs.
2. About their overall satisfaction with irs.gov.
3. Whether this experience increased their confidence in IRS.
4. Whether they could find what they needed easily and quickly.

IRS’s summary of the results of the “View Your Account Information” taxpayer experience survey identifies potential limitations. IRS states that users who have experienced challenges logging into protected services, such as “View Your Account Information,” provide negative feedback on the survey administered to users of static pages. However, the “View Your Account Information” experience survey is not designed to capture such negative feedback because a taxpayer must log into his or her account to be selected to participate in this survey. IRS officials agreed that our analysis is accurate, but had a different view on the implications. In their view, the purpose of the “View Your Account Information” taxpayer experience survey is to assess taxpayers’ experiences using this particular service. They believe that challenges legitimate taxpayers may experience in logging into “View Your Account Information” have broader implications and affect taxpayers’ experiences using other online services.
As noted above, IRS officials said that the survey of users of static pages captures negative feedback from users who have had difficulty accessing online services. However, successfully passing the security checks is the first step in the journey legitimate taxpayers must take to use “View Your Account Information.” The result is a potential knowledge gap in the extent to which “View Your Account Information” is providing taxpayers with a satisfactory experience and very little knowledge of the extent to which the other online services are meeting taxpayers’ needs. OMB has directed agencies to ask two additional questions to gauge user experience with agency services: (1) did it take a reasonable amount of time, and (2) does the customer believe he or she was treated fairly. IRS asks the first question of a sample of users of static pages, but neither question is currently asked of a sample of “View Your Account Information” users. IRS told us that the first of these questions was covered by its question about whether taxpayers could find what they needed easily and quickly, although OMB guidance considers these as separate questions. IRS does not believe the second question is relevant because its online services are automated. The OMB guidance authorizes agencies to request exemptions and modifications to the requirements. We confirmed with OMB staff that IRS had discussed its approach with OMB and that staff concurred with it. While OMB staff said they recognize that variation presently exists across agencies in the required survey questions, they told us that they would like to continue working with agencies to bring greater consistency to surveys so that comparable data will be collected by fiscal year 2021. Both IRS officials and OMB staff told us that IRS is participating in an interagency working group focused on consistent implementation of this guidance.
Public Reporting on Taxpayer Experience

In addition to surveying customers, OMB’s Circular A-11 section 280 establishes government-wide guiding principles for all executive branch agencies, which contain the following requirement: “Agency annual performance plans should include indicators for outcomes related to customer experience.” Our review of IRS’s congressional budget justification and performance plan and report for fiscal year 2020 found no performance measures or indicators summarizing the only taxpayer experience information that IRS collects on one of its online services—the “View Your Account Information” survey discussed above. Regarding the “View Your Account Information” survey, IRS officials told us they are not allowed to publicly share the results because of the process they used to obtain approval to administer the survey pursuant to the Paperwork Reduction Act. The act contains requirements that agencies justify the necessity of collecting information from the public and publish notices informing the public of the planned information collection. In addition, the Office of Information and Regulatory Affairs (OIRA) within OMB must review and approve planned information collections. For authorization to administer the “View Your Account Information” survey, IRS officials used approval the Department of the Interior had obtained from OIRA for multiple agencies to administer customer satisfaction surveys for government websites. However, our review of the notice the Department of the Interior published found that while the contractor administering the survey must obtain permission from the agency before releasing the information, there is no prohibition on the agency choosing to release the information.
Further, to facilitate government-wide comparisons of the customer experiences different agencies are providing, the General Services Administration published a notice in the Federal Register in September 2019 stating that Treasury and other agencies will be publishing relevant data on Performance.gov. This notice does not prohibit agencies from publishing the same data in other publications (e.g., the agency’s performance plan and report). Until it collects more specific feedback on the other online services, it will not be possible for IRS to summarize and report information about the taxpayer experience with online services and the extent to which those services are meeting taxpayer needs. Without information about how effectively IRS’s online services are meeting taxpayer needs, it is difficult for decision makers to appreciate the potential value of these services and help ensure IRS has the necessary resources to maintain and improve them. The three foreign revenue agencies reviewed all report to their parliaments on the extent to which they believe their online services are meeting taxpayer needs and use that information to help target areas for improvement:

Australia: The Australian Taxation Office’s (ATO) annual reports for 2015-2016, 2016-2017, and 2017-2018 tracked “community satisfaction with ATO performance,” which combined more specific measures tracking satisfaction with different service channels—online, telephone, and mail—and satisfaction levels among different groups of taxpayers, such as individuals and small businesses. ATO publishes the more detailed satisfaction levels on its website, with the most recent report presenting 2018 results. In its 2017-2018 report, ATO reported that the information collected showed that declining satisfaction with online services was negatively affecting its overall performance and stated that the office plans to use this feedback to improve online services.
In its 2018-2019 report, ATO introduced a new measure—“community confidence in the ATO”—which it says is based on surveys of clients who have recently interacted with ATO and surveys of the general community.

New Zealand: The Inland Revenue Department’s (IRD) annual reports for 2018 and 2019 present detailed results of its customer satisfaction and perceptions survey, including the overall percentage of customers satisfied with online services, as well as how satisfied subgroups of individual taxpayers are with online services, such as those receiving a tax credit for working families.

United Kingdom: Her Majesty’s Revenue and Customs’ (HMRC) annual report presents a quantitative measure of customer satisfaction with online services. HMRC has published a more detailed explanation of how the agency measures customer satisfaction with online services.

Assessing the Implications of Online Services for Taxpayer Burden

IRS states that the development of new online services should reduce taxpayer burden—referring to the time and money taxpayers spend to comply with their tax obligations—and one of IRS’s strategic goals states that IRS will reduce taxpayer burden. To help IRS officials and policy makers measure the progress they are making in achieving their goal of reducing taxpayer burden, IRS’s Research, Applied Analytics, and Statistics (RAAS) office has periodically surveyed taxpayers since 1984 about the time and money they spend to complete their tax obligations and uses the responses, along with information from those taxpayers’ tax returns, to estimate the total compliance burden for individual taxpayers. A RAAS official told us that IRS has not conducted any burden research specifically related to online services. IRS’s Data Book for fiscal year 2018 describes the magnitude of the taxpayer assistance provided through online services. More than 300 million electronic transactions took place through online services for individual taxpayers.
IRS is missing an opportunity because taxpayers are already making extensive use of IRS’s online services for such tasks as setting up payment plans and obtaining records. Online services will likely continue to assist taxpayers in fulfilling their tax obligations. Two of the case study countries’ revenue agencies—in Australia and New Zealand—have conducted research on taxpayer burden. For example, New Zealand’s IRD’s annual report for 2019 stated that it has made progress in making taxes easier and simpler for its customers. To track its progress, IRD added online services to its taxpayer burden research in 2016 and updated this study in 2018. IRD compared the 2016 and 2018 survey results to burden research conducted in 2013, prior to expanding online services. Through this research, IRD found that 20 percent of surveyed taxpayers who run small businesses reported that expanded online services and an improved IRD website reduced their overall compliance burden. IRD’s report notes that additional online services will be launched in April 2019 and a follow-up survey is planned for 2020 to compare the reported burden with the earlier surveys and, thereby, track the progress it is making.

IRS’s Strategy for Expanding Online Services Is Not Fully Consistent with Key Requirements and Leading Practices

IRS’s Long-Term Plan Does Not Consider Taxpayer Input for Identifying and Prioritizing New Online Services

A series of long-term planning documents establishes priorities to guide IRS decision-making and identify new online services, but does not contain evidence that taxpayer input was used to help identify the highest priority services. In April 2019, IRS published the IRS Integrated Modernization Business Plan (modernization plan). One of the plan’s goals is to modernize the taxpayer experience.
To do this, IRS proposes to develop new services, including delivering taxpayer notices electronically, modernizing online installment agreements, and establishing omni-channel communication capabilities, provided that IRS continues to receive the requested resources from Congress. IRS does not currently incorporate taxpayer input into its prioritization process; instead, it prioritizes services primarily based on their potential to benefit IRS’s operations or on how quickly they can be developed. An OMB memorandum directs agencies to understand what their customers want by engaging in research to understand their goals, needs, and behaviors before beginning to develop new services. Going forward, the IDEA Act requires that new digital services be “designed around user needs with data-driven analysis influencing management and development decisions.” This requirement took effect in June 2019. IRS documents describing how new services were prioritized show that IRS did not incorporate taxpayer research or input into the score it assigns to each proposed service. Instead, IRS officials estimated the potential taxpayer value of a new service. For example, supporting documentation for one proposed project from the modernization plan to allow taxpayers to receive notices electronically states that IRS expects taxpayers to receive less paper mail and have easier online access to recent or historical notices if the project is developed. IRS expects that electronic delivery of notices will increase the timeliness of its service, which would improve the taxpayer experience. After a new online service is selected and approved, IRS does obtain input from taxpayers during the development phase, for example, to improve usability of the service and fine-tune technical capabilities. OLS officials provided us with documentation of user experience research they conducted on how to improve specific design elements of existing online services.
For example, IRS reworded a button within “View Your Account Information” to access the online payment agreement service from “Need more time to pay?” to “Go to payment plans” to improve clarity. As a result of the change, the rate of taxpayers accessing the online payment agreement service from their online account has doubled, according to data IRS provided. While IRS’s modernization plan outlines new online services it plans to develop, the plan also states that IRS expects additional services to be added over time as technology advances and customer expectations evolve. The Taxpayer First Act requires IRS to expand an online service—currently offered to taxpayers in nine states and the District of Columbia—to provide taxpayers with Identity Protection Personal Identification Numbers and an online platform to prepare and file a Form 1099 to report independent contractor earnings or other miscellaneous income. In May 2019, IRS officials told us that they will continue to reprioritize new service development based on available resources. Without input from taxpayers on which new services to prioritize, IRS risks developing online services that are of lower priority to taxpayers or that taxpayers do not use. By contrast, the United Kingdom’s HMRC has conducted taxpayer research to understand user needs and taxpayer preferences. For example, HMRC conducted taxpayer research in 2016 to inform decisions about which services to include in development of the “Personal Tax Account,” which, as noted above, provides taxpayers with integrated access to various online services. HMRC’s research included workshops and interviews with taxpayers who use online services as well as an online survey of 4,000 taxpayers.
The online survey asked taxpayers to rank the top five services they would like HMRC to develop from a list of 14 potential services and asked taxpayers to explain their rationale. HMRC then assessed preferences among taxpayers and concluded that secure electronic messaging was one of the services most highly sought, according to the report. As a result, HMRC incorporated taxpayer input into its new service prioritization process and began developing services that it knew taxpayers desired and were more likely to use.

IRS’s Plans for Expanding Online Services Do Not Set Specific Targets for Improving Taxpayers’ Experiences or for Decreasing Taxpayer Burden

IRS’s modernization plan states that IRS intends to measure the success of its efforts to improve taxpayers’ experience consistent with the administration’s government-wide goal to improve customer experience with federal services. OMB’s guidance to agencies on this topic states that they should measure customer perceptions of the ease, efficiency, and equity in the process of obtaining the service. The modernization plan also states that IRS will measure taxpayer burden hours, which would capture changes in the amount of time taxpayers spend doing their taxes as a result of the modernization of information services and the development of new online services. The GPRA Modernization Act (GPRAMA) requires agencies’ annual performance goals to be expressed in an objective, quantifiable, and measurable form, and this principle is relevant to IRS’s modernization plan. In prior work identifying leading practices related to this requirement, we explained that expressing goals in a quantifiable form provides an objective way to assess the agency’s performance.
The Taxpayer First Act, enacted in July 2019, similarly requires the Secretary of the Treasury (or designee) to identify metrics and benchmarks for IRS for quantitatively measuring progress in implementing a customer service strategy that the act requires IRS to develop. That strategy must be submitted to Congress within 1 year of enactment. While Treasury and IRS are not required under the act to submit the strategy containing the metrics and benchmarks for quantitatively measuring progress until July 2020, we found that IRS’s modernization plan is not well positioned to help Treasury and IRS implement this new requirement. IRS’s modernization plan states that IRS intends to measure progress towards its customer experience goal through promoting ease and simplicity in taxpayer interactions. To measure that, IRS stated that it plans to increase its “American Customer Satisfaction Index” score, although IRS did not set a numerical target for improvement. The survey for this index score is administered by researchers outside the government and focuses on taxpayers’ experiences filing their tax return, which is not a service offered on irs.gov, making it of little use in assessing taxpayer satisfaction with IRS’s online services. IRS’s modernization plan does set numerical targets for output measures, such as the percentage of notices available in an electronic format for taxpayers, but these targets are not aligned with any of IRS’s taxpayer experience feedback mechanisms, including the survey discussed above administered to users of IRS’s “View Your Account Information.” For example, IRS asks a sample of “View Your Account Information” users whether the online service met their needs, but IRS’s modernization plan does not set a target or desired level of performance for this question or for any other survey question. Our finding that IRS lacks targets for improving taxpayer experience is consistent with our prior work.
In April 2013 we reported that previous IRS planning efforts to expand online services had not set a clear target for improving taxpayer experience, and we recommended that IRS establish a numerical or other measurable goal to improve taxpayer satisfaction and a time frame for achieving it. While IRS neither agreed nor disagreed with this recommendation, in 2016 IRS said it would consider the development of numerical or other measurable goals related to taxpayer experience. Our current review shows that IRS has not developed such measures. We continue to believe this recommendation is valid and that the issue will continue to grow in importance along with the use of IRS’s online services. While IRS believes that the planned online services will promote “ease and simplicity,” no target is set in the modernization plan for reductions in taxpayer burden hours. As noted above, IRS has not started to examine the implications of expanding online services on taxpayer burden. Without targets for reducing taxpayer burden, IRS cannot determine the success of new online services in helping drive progress towards this goal. All three of the foreign revenue agencies we reviewed set numerical targets for performance measures related to improving online services and used these goals to target areas for further improvement. New Zealand’s IRD stated in 2015 that, to assess progress toward its goal of reducing taxpayer burden, its business transformation program should improve the percentage of customers “who find it easy to comply” to between 90 and 95 percent by 2023/2024. IRD’s annual report for 2019 states that the business transformation program remains on track and is delivering benefits, and this section also provides an update on the percentage of customers “who find it easy to comply.” While the annual report does not refer to the target for 2023/2024, the percentage reported in the 2019 annual report is lower than the target identified for 2023/2024.
The 2019 report explains that taxpayers are “getting used to our new systems and processes” and describes additional improvements IRD is planning. In addition, the United Kingdom’s HMRC set a target for 80 percent of taxpayers to report satisfaction with online services for 2018. HMRC published performance towards its goal in its 2018-2019 annual report, finding that 80.4 percent of taxpayers reported satisfaction with online services.

IRS Pilot Programs Identify Potential Risks for Future Digital Communication Capabilities

IRS’s modernization plan states that taxpayers will be able to sign up to receive notices electronically by fiscal year 2021 and to have text or video chats with IRS employees by fiscal year 2024. IRS currently sends taxpayers notices by mail for purposes such as identity verification, a balance due, or a need for additional information about a tax return. IRS plans to allow taxpayers to access certain notices electronically via a taxpayer’s online account. In July 2019, IRS Information Technology (IT) officials told us that they have established a team to start developing the capability to make notices available to taxpayers electronically, which is IRS’s first step towards developing full-scale digital communication capabilities. IT officials told us in October 2019 that they plan to conduct customer testing to pilot the service before it launches and gather customer feedback after launching the service. While IRS has just begun development of full-scale digital communication capabilities, IRS has experience providing a subset of taxpayers secure messaging capabilities through two pilot programs that we reviewed.
Specifically, under the coordination of OLS, IRS began testing digital communication services in December 2016 to allow for secure and personalized correspondence between taxpayers and IRS employees through three pilot programs, as described below:

An active pilot within the Small Business/Self Employed (SB/SE) business unit is testing digital messaging for examinations, which have traditionally been done by mailing questions and documents back and forth between the examiner and taxpayer. SB/SE began this pilot in fiscal year 2017 for a subset of taxpayers selected for examination for returns related to itemized deductions, the child care deduction, and education tax credits. Interested taxpayers must successfully complete security checks to verify their identity and then can exchange messages electronically with IRS employees and share requested documents through a platform accessed through irs.gov.

A Taxpayer Advocate Service (TAS) pilot that tested the ability for taxpayers to send documents in electronic form began in fiscal year 2017 and ended in fiscal year 2019. TAS designed the pilot for the purpose of helping two sets of taxpayers: (1) those who were facing the prospect of IRS seizing their property to pay a tax debt, and (2) those who were facing an audit of their claim of the Earned Income Tax Credit and had sought TAS’ assistance.

IRS officials told us that an authenticated chat pilot to assist taxpayers in completing an Online Payment Agreement was introduced in June 2019. Results for this pilot were unavailable as of October 2019.

We evaluated the SB/SE and TAS pilots against leading practices. We found that the two digital communication pilots mostly addressed leading practices our prior work identified for designing a well-developed and documented pilot program. These leading practices are to: (1) establish objectives; (2) develop an assessment plan; (3) assess scalability; (4) evaluate results; and (5) ensure stakeholder communication.
These practices enhance the quality, credibility, and usefulness of evaluations and help ensure that time and resources are used effectively. Although we found both pilots to be generally aligned with the leading practices to develop an assessment plan, evaluate results, and ensure stakeholder communication, neither pilot fully established objectives or assessed scalability. These leading practices are also relevant for testing of future capabilities of the electronic messaging platform that IRS plans to develop.

Establish Objectives

Our leading practices state that objectives for pilot evaluations should be well defined, appropriate, clear, and measurable. We found differences in stated objectives between OLS and the participating offices. OLS set a target for the SB/SE pilot to reduce total case time from greater than 200 days to fewer than 100 days, and for the TAS pilot to improve the relief rate to taxpayers by 5 percent, which OLS officials explained were ambitious goals. However, SB/SE officials told us that they believed the magnitude of OLS’s goal for reduction in case time to be unrealistic. The National Taxpayer Advocate told us that she had narrower objectives for the pilot, including testing the viability of sending documents electronically and assessing taxpayer willingness to participate. Having officials from relevant offices with different understandings of the quantitative target they are trying to achieve is not fully consistent with the leading practice and makes it more difficult for officials to evaluate the performance of the pilots.

Develop Assessment Plan

We previously reported that key features of an assessment methodology include a strategy for comparing the pilot’s implementation and results with other efforts; a clear plan that details the type and source of the data necessary to evaluate the pilot; and methods for data collection, including the timing and frequency.
Our review found that the implementing offices for both pilots developed plans to conduct periodic assessments of progress toward the objectives. For example, the assessment plan for SB/SE’s pilot included measurements of average case time and participation levels. The TAS assessment plan also included measurements of average case time and participation levels, as well as the reasons taxpayers provided for not enrolling in the online pilot.

Assess Scalability

The purpose of a pilot is generally to inform a decision on whether and how to implement a new approach in a broader context. Identifying criteria or standards for identifying lessons about the pilot will help inform an agency’s decisions about scalability and when to integrate pilot activities into overall efforts. A common challenge that both of IRS’s communication pilots experienced was that only a small proportion of eligible taxpayers participated. Among the taxpayers selected for an SB/SE exam to whom the pilot was offered, approximately 11 percent participated in the digital communication pilot and sent their exam responses and supporting documentation through a secure, electronic messaging platform, while approximately 51 percent sent their exam responses to IRS via mail. In contrast, 1 percent of invited taxpayers participated in the TAS pilot, according to the TAS report, which found that most taxpayers opted to communicate with TAS through more traditional methods such as telephone, mail, or fax. Officials involved in each pilot reported that additional taxpayers expressed interest in participating, but experienced challenges in getting through the identity verification requirements for enrollment. Of the potential participants in the SB/SE pilot, only 44 percent of those who began the secure enrollment process successfully enrolled.
Pilot participants told TAS that they found the secure enrollment system for the TAS pilot too complicated to use and preferred instead to fax documents to avoid the burdensome sign-up process. The National Taxpayer Advocate concluded that the participation rate was so low that it did not make sense to continue the pilot.

Evaluate Results

In conjunction with a clearly articulated assessment methodology, a detailed data-analysis plan identifies who will analyze the data as well as when and how data will be analyzed to assess the pilot’s performance and draw conclusions about how to improve procedures moving forward. SB/SE’s pilot report found that use of the Taxpayer Digital Communications (TDC) electronic platform did reduce the number of days to complete an examination compared to paper. However, IRS examiners spent more hours on average on exams conducted through TDC than on paper exams because taxpayers sent more attachments in their electronic messages, on average, than in the mail. TAS found that those taxpayers who enrolled in the pilot were able to successfully communicate with TAS. TAS officials expressed optimism that enrollment in secure communication could help reduce case processing time among taxpayers seeking assistance in avoiding an IRS seizure of their property to pay a tax debt.

Ensure Stakeholder Communication

A leading practice is that agencies identify who the relevant stakeholders are and communicate frequently to obtain feedback on the successes and challenges of the pilot. IRS identified taxpayers and participating business units as the relevant stakeholders for each pilot. We found that both the SB/SE and TAS pilots obtained feedback from stakeholders, including employees and taxpayers who participated in a pilot as well as taxpayers who chose not to participate. In January 2018, SB/SE developed a web-based process that sends taxpayers a voluntary survey upon the closing of an exam, when communication with the taxpayer ceases.
In addition, SB/SE called taxpayers who did not participate in the pilot program to discuss why they chose not to participate. Of the 262 taxpayers who were successfully contacted, reasons for not signing up included that they did not remember seeing the invitation to enroll, they could not pass the secure enrollment process, or they thought it was a scam. TAS held focus groups with employees participating in its pilot and found that many employees raised concerns that the digital communication platform was not user friendly, which discouraged uptake. The Office of Appeals conducted a pilot between fiscal years 2017 and 2018 using video conferencing software as a way for Appeals Officers who volunteered to participate to conduct video conferences with taxpayers in lieu of a telephone conference. The Office of Appeals concluded that the pilot demonstrated the viability of this technology and, as of October 1, 2018, allowed all Appeals officers willing to use it to offer videoconferencing to the taxpayers they work with. We did not assess the pilot against our leading practices for conducting pilots because we had recently completed a review of IRS’s Office of Appeals, including its video conferencing capabilities, and the Treasury Inspector General for Tax Administration (TIGTA) recently published a review of the pilot. TIGTA identified a risk related to the scalability of videoconferencing. As of September 30, 2018, Appeals officials told us that less than 4 percent of invited taxpayers chose to participate in its pilot. They said that some of the taxpayers who declined to use videoconferencing thought it easier to have a phone call than to go through the steps involved in setting up a videoconference.
Unlike the digital communication pilots described earlier, participants in the Appeals pilot were not required to verify their identity through Secure Access, a multifactor authentication process in which taxpayers provide personal and financial information and IRS then verifies that the taxpayer has a mobile phone in his or her name by texting a code to the phone or mailing an activation code. Instead, Appeals officers verified taxpayer identities at the beginning of the videoconference. Despite the difference in security requirements, the participation rate for the Appeals pilot was also low. IRS’s modernization plan’s discussion of future video chats between IRS employees and taxpayers makes no mention of the challenges Appeals has experienced in getting taxpayers to use the videoconferences it already offers selected taxpayers. The same concerns about clear objectives and scalability that we found in the TAS and SB/SE digital communication pilots and the Appeals videoconferencing pilot also have implications for the full-scale services that IRS plans to develop. The modernization plan states that one of IRS’s objectives in developing an online notification service is to reduce mailing costs by sending fewer notices via mail. However, IRS officials told us in September 2019 that they plan to continue to mail all notices once an online notice service is developed. This suggests that IRS will move forward with development of an online notification service without clear objectives such as cost savings. In December 2019, IRS officials noted that while this may be true of initial deployment, future iterations could potentially allow users to change their delivery preferences. The services outlined in IRS’s modernization plan include delivery of tax credit qualification notices electronically to taxpayers, which IRS officials explained would be limited to low-income taxpayers who IRS believes to be eligible for, but not claiming, the Earned Income Tax Credit (EITC).
TAS’s pilot of secure messaging with a similar set of taxpayers—taxpayers subject to an audit of their EITC claim—showed that many low-income taxpayers did not have access to the technology needed to properly enroll. IT officials told us that they plan to conduct customer testing before and after the introduction of the new electronic notice service. Because IRS is just beginning development of its new digital communication platform, it has not yet provided evidence that it plans to consider concerns and limitations identified in prior digital communication pilots. Without developing a pilot to test its new services and incorporate the lessons learned from prior pilots, IRS risks developing a full-scale service targeted to taxpayers with a low potential for uptake.

IRS Faces a New Challenge with Private Industry as It Plans to Expand Online Services

IRS Continues to Face Security and Human Capital Challenges

Our discussions with IRS officials confirmed that they continue to address security and human capital challenges, which we have evaluated in recent reports. IRS’s modernization plan states that IRS faces increasingly sophisticated and frequent efforts by cybercriminals to steal taxpayer data. For example, in 2015, IRS temporarily suspended online transcript services after fraudsters used personal information obtained from sources outside IRS to pose as legitimate taxpayers and access tax return information from up to 724,000 accounts. IRS relaunched this service in 2016 with the requirement that taxpayers go through Secure Access. IRS also uses Secure Access for “View Your Account Information.” The remaining online services require different levels of authentication. For example, taxpayers must provide their Social Security numbers or individual taxpayer identification numbers, filing status, and exact refund amounts to access “Where’s My Refund?” The information provided is limited to tracking IRS’s receipt of a return, approving the refund, and sending the refund.
Users of this service cannot, for example, redirect the refund from the destination specified on the tax return or access the detailed personal information contained in transcripts. If IRS makes the authentication process too stringent, it may adversely affect legitimate taxpayers, but an authentication process that is too lax presents security risks. In June 2018, we recommended 11 actions IRS should take to improve taxpayer authentication, including developing a plan to fully implement new federal guidelines for online authentication; IRS agreed with our recommendations. In June 2019, IRS officials stated that they have identified an approach for improving the security of online authentication consistent with new federal guidelines. However, additional work remains to fully address our recommendations. Further, the Taxpayer First Act, enacted in July 2019, requires IRS, by January 2020, to verify the identity of any individual opening an “e-Services account” before the individual can use the e-Service tools. IRS also continues to face human capital challenges. The former Acting Director of OLS told us in March 2019 that her office has faced several challenges in recruiting and hiring: (1) competition with technology companies for employees with the necessary skills; (2) challenges in crafting position descriptions to inform job seekers of openings; and (3) delays in IRS’s Human Capital Office processing of applications. These challenges are similar to IRS-wide challenges we recently identified, including skill gaps in mission critical occupations and limited capacity by the Human Capital Office to hire employees. In March 2019, we recommended IRS take six actions, including improving its workforce planning and addressing delays in the hiring process. IRS agreed with our recommendations.
In September 2019, the Deputy Commissioner for Operations Support reported that IRS is working to address our recommendations, including a plan to reduce the hiring backlog, increase hiring capacity, and improve monitoring and reporting capabilities. In November 2019, we determined IRS had addressed two of our recommendations by developing a strategy to address current and future hiring requirements and issuing guidance to business units’ executives on streamlining the hiring approval process. As noted above, the Deputy Commissioner for Operations Support reported that IRS is working to address our remaining recommendations.

Long-Standing Agreement with Private Industry Complicates IRS’s Ability to Expand Online Services, Including Filing Amended Tax Returns Electronically

Taxpayers cannot file their tax returns on irs.gov, and IRS officials told us they have no plans to develop such a capability. As noted above, the absence of electronic filing services on irs.gov is a notable difference between the services IRS provides and those provided in the three countries and two of the three states we reviewed. We found that IRS’s Free File agreement benefits a small proportion of taxpayers, but that the full benefits and costs of this agreement are uncertain. IRS has renewed the nearly 20-year-old agreement eight times since its inception in 2002 without sufficient consideration of how this agreement relates to its growing portfolio of online services, such as the development of the capability for taxpayers to file amended returns electronically.

Potential Benefits of the Free File Agreement

IRS officials do not regard the absence of electronic filing capabilities on irs.gov as a shortcoming. Rather, they believe that the Free File agreement has served both taxpayers and IRS well. Officials noted that eligible taxpayers can receive free access to electronic tax preparation and filing services provided by the companies that make up the Free File, Inc. consortium.
Additional benefits accrue to IRS, according to officials, by encouraging electronic filing, which reduces the costs of processing returns. Further, IRS officials noted that having industry assist taxpayers with electronic filing allows them to focus on providing other online services, such as the informational and payment services described above. Officials representing Free File, Inc. expressed similar views on what they consider to be the benefits of the agreement. However, IRS data show that less than 2 percent of all individual tax returns were filed using Free File in fiscal year 2018 (see figure 5). IRS’s annual data books started tracking the number of returns filed using Free File in fiscal year 2009. As the data show, excluding paper returns, approximately 2 to 3 percent of all electronically filed returns were filed through Free File for the 10 years with available data. IRS’s data include taxpayers who were eligible for and used free commercial software or websites and those who used Free Fillable Forms, discussed earlier (see table 2). As the data show, the vast majority of taxpayers filing electronically either hired a practitioner to do so on their behalf or obtained commercial tax preparation and filing services outside of the Free File program. The low usage of Free File is one of the topics that has been reviewed in more detail in reports by the National Taxpayer Advocate and IRS’s Advisory Council. Usage was also cited in separate letters that the Chairman and Ranking Member of the Senate Committee on Finance and the Chairman and Ranking Member of the House of Representatives’ Committee on Ways and Means sent to the IRS Commissioner in May 2019 requesting a review of Free File. In June 2019, IRS hired the MITRE Corporation to review, among other objectives, the Advisory Council’s findings and recommendations.
In an October 2019 report submitted to IRS, the MITRE Corporation examined: (1) the extent to which eligible taxpayers were using Free File; (2) the participating companies’ compliance with the agreement between Free File, Inc. and IRS; and (3) researchers’ observations of taxpayers’ experiences using the software provided by companies participating in Free File. The report found that the program generally appeals to taxpayers who prefer a “do-it-yourself” method of tax preparation and filing and that taxpayers’ preferences should be taken into account when interpreting IRS’s usage data. It found that participating companies had generally complied with the terms of the agreement but that taxpayers experienced challenges in navigating the Free File program. The report made a number of recommendations on these and other topics to IRS.

Potential Costs of the Free File Agreement

Under the terms of the Free File agreement, IRS does not pay Free File, Inc. companies for the services provided. Rather, participating companies benefit from continuing this agreement because they stand to potentially lose business should IRS develop its own online filing capabilities. Although the agreement does not have a direct monetary cost to IRS, our review found there are indirect costs. Specifically, the agreement states that “the federal government has pledged to not enter the tax preparation software and e-filing services marketplace.” IRS’s decision not to develop and offer electronic filing on its website contrasts with the capabilities offered by some other countries and U.S. states. The Free File agreement in its current form could potentially constrain the development of new online services, such as allowing taxpayers to file amended returns on irs.gov. Online services have the potential to decrease both taxpayer burden and costs for revenue agencies in the long term.
IRS’s efforts to assist taxpayers with amending previously filed tax returns illustrate the potential costs of renewing the Free File agreement without consideration of IRS’s long-term plans for online services. Irrespective of the method used to file the original return, taxpayers must file an amended return (Form 1040X) on paper. Officials in IRS’s Wage and Investment (W&I) business operating division provided a business case they drafted proposing to give taxpayers the option of filing an amended return electronically. The business case makes clear that IRS officials believe the current paper-based process is inconvenient for taxpayers because the vast majority of taxpayers file the original return electronically. Processing these paper forms is also costly and challenging for IRS, which received more than 3 million amended returns in processing year 2017. The business case says IRS has been assessing a potential online service in this area for more than 10 years but has not moved forward due to technical and resource challenges. One approach IRS is exploring is to allow private sector tax preparation and filing companies and practitioners to file an amended return on behalf of a client, which would be similar to the current arrangement for original returns established by the Free File agreement. The second approach IRS is exploring is allowing taxpayers to correct the return with a new online service on irs.gov. The business case states IRS currently prefers the first approach of having taxpayers work with industry or a tax practitioner because of a combination of cost and technical considerations. The business case states that IRS officials have had discussions with officials affiliated with Free File, Inc. According to IRS’s account of these discussions, industry is supportive of the first potential approach of having taxpayers electronically file amended returns through industry.
Further, IRS notes that some taxpayers who used software to prepare the original return may find it convenient to use the same software to prepare and file an amended return. However, the business case also states that “the costs are not insignificant” for IRS in working with industry, even though IRS plans to leverage existing systems as much as possible. A further complication is how IRS’s plans to work with industry on electronic filing of amended returns relate to the Free File agreement. IRS’s business case states that Free File, Inc. officials told them participating companies would be willing to provide electronic filing of amended returns for free, but, as noted above, less than 2 percent of original returns are filed through Free File. While individual tax preparation and filing companies could choose to offer electronic filing of amended returns for free or include that capability in paid packages they offer for filing an original return, the agreement in its current form would not guarantee free access to electronically filing amended returns for the vast majority of taxpayers who file an original return outside of Free File. If IRS were to return to its earlier idea of offering the capability to file an amended return on irs.gov, that approach also comes with potential risks for IRS regarding the Free File agreement. IRS officials noted that the agreement states that “this agreement does not limit IRS from providing phone-based, web-based, or electronic interaction between the IRS and a taxpayer (or a taxpayer’s representatives) regarding issues in a previously filed return after such a return has been accepted by IRS.” IRS officials told us they believe this language allows Form 1040X-type actions by IRS. However, as noted above, IRS made a commitment to “not enter the tax preparation software and e-filing services marketplace.”
Our analysis determined that the Form 1040X is nearly identical to the Form 1040, with the difference being that a taxpayer notes which lines he or she needs to correct. IRS’s instructions for the Form 1040X state, “When you file Form 1040X for a tax year, it becomes your new tax return for that year. It changes your original return to include new information.” Therefore, the capability for taxpayers to file a Form 1040X on irs.gov would put irs.gov closer to having an online filing capability for original returns. In written documents that officials from Free File, Inc. provided to us, they said they would need to see a specific proposal for electronic filing of amended returns before they could comment. However, they made clear that they believe any future development of IRS’s online services should continue to leave the task of preparing and electronically filing a tax return to industry. Officials provided a copy of a letter the Executive Director of their organization had sent the W&I Commissioner in March 2019 recommending that IRS consider enabling the electronic acceptance of amended returns through the system industry uses to electronically file original returns on behalf of taxpayers.

Change in Circumstances Since the Free File Partnership Was First Established

Circumstances, including IRS’s technical capabilities, have changed since the Free File agreement was first established in 2002. Today’s irs.gov provides “View My Account Information” and other online services that did not exist in 2002, and as discussed above, IRS is exploring allowing taxpayers to file amended returns electronically. Another potential new online service serves as a second example of a way that IRS might soon interact directly with taxpayers online without the use of private sector intermediaries in the tax preparation and filing industry. In the Taxpayer First Act, Congress directs IRS to develop a new online service for taxpayers to report miscellaneous payments.
Specifically, Congress directed IRS to develop, no later than January 1, 2023, an internet platform for persons to prepare and file the Form 1099, which is used to report payments made to taxpayers for such things as rent and services performed by someone other than an employee. The House Committee on Ways and Means’ report accompanying the act explains that the committee believes that having IRS provide this online service will improve compliance with the reporting requirements and reduce the administrative burden for taxpayers who run small businesses. OLS coordinates the development of new online services, and W&I oversees the Free File agreement. OLS officials referred our questions about the Free File agreement to W&I. W&I officials provided no documentation that they had coordinated the 2018 renewal of the Free File agreement with OLS. IRS also could not provide us any evidence that it has analyzed the full costs and benefits of the Free File agreement to IRS, the participating private sector companies, and the public. For example, the MITRE Corporation report discussed above states that the researchers “assume that industry will continue to be the entity that provides free tax return preparation and filing offerings to taxpayers.” IRS’s approach to renewing the Free File agreement is not consistent with leading practices we identified in our prior work stating that decision makers should periodically review government programs, tax provisions, and regulations to ensure they are achieving desired goals. Among the leading practices we identified is that the costs and benefits should be assessed. Without a more rigorous examination of the costs and benefits to all parties of future renewals of the Free File partnership, IRS runs the risk of not being fully aware of the effects of the agreement, including the effects of constraints on new services that IRS could provide to taxpayers.
Conclusions

The internet has reshaped how citizens interact with businesses and government agencies. IRS.gov has contributed to this by giving taxpayers access to detailed information about their taxes and allowing them to make arrangements to pay money they owe. Our comparison of IRS to other countries’ and states’ revenue agencies highlights areas for potential future development. IRS has told Congress and the public that developing electronic communication capabilities is an area of focus. Our review identified a number of challenges IRS will need to address as it moves forward, including measuring taxpayers’ experiences with the online services IRS already offers—including the extent to which those services meet taxpayer needs—and how these services may affect taxpayer burden. Summarizing and reporting that information would help decision makers appreciate the potential value of these services and help ensure IRS has the necessary resources to maintain and improve them. Likewise, including input from taxpayers when prioritizing new services would help IRS reduce the risk of developing online services that taxpayers do not use. While IRS has recently published a modernization plan which outlines its vision for expanding online services, the plan sets no targets for improving taxpayer experience or reducing taxpayer burden, which hinders decision-making. As we recommended in April 2013, we continue to believe that IRS should establish a numerical or other measurable goal to improve taxpayer satisfaction and a time frame for achieving it. IRS also faces a risk that plans for full-scale digital communication services will encounter enrollment challenges similar to those that IRS has experienced in prior digital communication pilots.
In 2002, IRS established a Free File agreement in which participating tax preparation companies agreed to provide free electronic tax preparation and filing services for low- and middle-income taxpayers, provided that IRS does not enter the tax preparation software and e-filing services marketplace. IRS’s Free File agreement benefits a small proportion of taxpayers, but the full benefits and costs of this agreement are uncertain. IRS is currently constrained in providing the online services that are part of its long-term plans for taxpayers, including allowing electronic filing of amended tax returns.

Recommendations for Executive Action

We are making the following seven recommendations to IRS:

The Commissioner of the IRS should ensure that information is collected on taxpayers’ experiences with all online services and the extent to which the services are meeting taxpayers’ needs. (Recommendation 1)

The Commissioner of the IRS should ensure that information collected on taxpayers’ experiences with online services is summarized in the document serving as IRS’s performance plan and report. (Recommendation 2)

The Commissioner of the IRS should direct the Director of OLS and the Chief Research and Analytics Officer to work together to analyze the potential effects of online services on taxpayer burden. (Recommendation 3)

The Commissioner of the IRS should ensure that taxpayer input is included as an element of IRS’s identification and prioritization process for new online services. (Recommendation 4)

The Commissioner of the IRS should work with relevant officials to set a target to reduce taxpayer burden through the development of new online services.
(Recommendation 5)

The Commissioner of the IRS should direct the Chief Information Officer and the Director of OLS to ensure that planned future capabilities of digital communication platforms are tested or piloted before deployment, with a particular focus on mitigating the risks that were identified in prior pilots of digital communication services, such as challenges in establishing common objectives and enrolling taxpayers. (Recommendation 6)

The Commissioner of the IRS should direct the Commissioner of W&I to work with the Director of OLS to ensure that future decisions regarding whether to renew the Free File agreement incorporate findings from a comprehensive examination of the benefits and costs of the agreement as it relates to long-term plans for IRS’s online services, including plans to file amended returns electronically. (Recommendation 7)

Agency Comments, Third-Party Views, and Our Evaluation

We provided a draft of this report to IRS for review and comment. In written comments provided by IRS’s Deputy Commissioner for Services and Enforcement (reproduced in appendix II and summarized below), IRS agreed with six of our seven recommendations. IRS also provided technical comments, which we incorporated as appropriate. IRS agreed with our recommendations to ensure that information is collected on taxpayers’ experiences with online services, ensure that such information is summarized in IRS’s performance plan and report, analyze the potential effects of online services on taxpayer burden, include taxpayer input in its identification and prioritization of new online services, ensure that planned future capabilities of digital communication platforms are tested or piloted, and ensure that future decisions regarding renewal of the Free File agreement incorporate findings from a comprehensive examination of the benefits and costs of the agreement as it relates to long-term plans for IRS’s online services.
IRS indicated general steps it plans to take to address these recommendations but did not provide time frames for doing so. IRS disagreed with our recommendation that it set a target to reduce taxpayer burden through the development of new online services. IRS stated that it will continue to look for opportunities to reduce burden through the development of new online services, but believes that a measurable target cannot be set. We recognize that it may take time for the relevant IRS offices to review changes in individual taxpayer burden estimates over multiple years and begin to collect the necessary data to set a measurable target for burden reduction. However, as established in our prior work, goals should be expressed in terms as specific as possible and in a form that allows the agency and external audiences to assess the progress being made. As noted in our report, IRS has a strategic goal for reducing taxpayer burden and its strategic plan identifies expanding online services as one of the strategies it will use to drive progress on its goal. Further, IRS agreed with our related recommendation that relevant offices analyze the potential effects of online services on taxpayer burden, which should provide a starting point for IRS in working to identify a burden reduction target. In its response, IRS also stated that its methodology for estimating taxpayer burden is not designed to capture the effect of specific program improvements on taxpayer burden. We agree and are not suggesting IRS resurvey taxpayers and re-estimate burden for every new online service it may introduce. Rather, our recommendation refers to total burden reduction from all the online services IRS offers individual taxpayers. As noted above, IRS’s strategic plan anticipates that taken together all these different online services should make it easier over time for taxpayers to fulfill their tax obligations. We continue to believe that this recommendation has merit.
We provided relevant sections of this report to OMB staff concerning information they provided us regarding the applicability of customer experience requirements to IRS. Staff confirmed we accurately summarized their statements. We provided relevant sections of the draft report to the revenue agencies and national audit offices in the three countries reviewed and to the revenue agencies in the three states reviewed. Two foreign revenue agencies, three national audit offices, and three state revenue agencies provided technical comments, which were incorporated as appropriate. One foreign revenue agency did not respond as of December 10, 2019. We also contacted 19 additional state revenue agencies which our draft report identified as offering electronic filing of a state income tax return on the revenue agency’s website and verified that 15 of them offer this service. The remaining four did not respond as of December 12, 2019. We provided relevant sections of the draft report to officials representing Free File, Inc.; specifically, sections describing the agreement between their organization and IRS and the views of Free File, Inc., officials towards IRS plans for allowing electronic filing of amended returns. An official representing the organization provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the relevant congressional committees, the Secretary of the Treasury, the Commissioner of the IRS, and other interested parties. In addition, this report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9110 or lucasjudyj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: Filing Requirements in the United States and Selected Countries and Relevant Reports by the Selected Countries’ National Audit Offices

Filing Obligations for Individual Taxpayers in the United States and Three Countries

Common Features of the United States’ and Selected Countries’ Income Tax Systems

The four countries in this review—the United States and Australia, New Zealand, and the United Kingdom—define income similarly and tax investment income. For wages, all four revenue agencies require employers to withhold taxes from their employees’ paychecks. Withholding also means that the four revenue agencies have processes for sending refunds to taxpayers should the government end up collecting more money than the taxpayer owes in taxes. Further, all four countries use tax expenditures to channel subsidies through their tax systems to further social goals such as supplementing the wages of lower income workers.

The United States

The requirement to file a tax return depends on a combination of factors: gross income, filing status such as whether a taxpayer is single or married, age, and whether a taxpayer is a dependent. For example, a single taxpayer who is under the age of 65 and has a gross income of at least $12,000 is required to file. Taxpayers who meet the specified criteria must file a return even if the government owes them a refund. A taxpayer married to another taxpayer can choose to file a joint return, which reports the two individuals’ combined income and deductions. Selecting this option may provide a higher standard deduction and access to other tax benefits for married couples.
Australia

The Australian Taxation Office’s (ATO) website states that most taxpayers need to lodge a tax return each year. Taxpayers can file a return electronically using their online account, submit a paper return, or hire a registered tax agent; taxpayers meeting specified eligibility criteria may instead seek assistance from ATO-trained volunteers who help them complete their tax returns online. Spouses file separate returns, although taxpayers are required to report details about their spouse’s tax situation, such as their taxable income. After a return is lodged, ATO issues a notice of assessment informing the taxpayer whether he or she is entitled to a refund or owes tax. A taxpayer can correct errors on the original return electronically, on paper, or through a registered tax agent.

New Zealand

An Inland Revenue Department (IRD) publication and the IRD website explain that the requirement to file depends on the sources of the taxpayer’s income. Taxpayers who received income other than salary, wages, interest, dividends, or taxable Maori authority distributions must file a return. For example, the filing requirement applies to taxpayers who had self-employed income, rental income, cash jobs, or income derived overseas exceeding 200 New Zealand dollars. Taxpayers required to file can log into their online account to electronically file a return or choose to file on paper or hire a tax agent to prepare their return. After receiving the return, IRD informs the taxpayer of any refund or tax they must pay. Taxpayers can correct errors on their return by logging into their online account or calling IRD. Regarding the remaining taxpayers, whose income is from sources IRD is aware of, such as wages, the department sends them an assessment informing them of whether they owe taxes or are owed a refund and directing them to report any additional income over 200 New Zealand dollars that IRD does not know about, such as cash jobs.
To support this, an IRD official explained that his department receives information from employers on income paid and taxes withheld every pay cycle. Further, the official reported that beginning in April 2020 IRD will receive at least monthly information from financial institutions on investment income, which should help IRD further refine its calculations of taxpayers’ tax positions. Regarding married taxpayers, each spouse is required to file his or her own return or receives their own assessment, although the tax return states that the amount a spouse or partner (or ex-spouse or ex-partner) received for a tax credit for working families may affect the other spouse’s tax situation.

The United Kingdom

The United Kingdom’s government website explains that Her Majesty’s Revenue and Customs (HMRC) uses a system—called pay as you earn—in which taxes are deducted automatically from wages and pensions. This means that taxpayers whose only income is from wages or pensions are generally not required to file a return. HMRC mails these taxpayers an annual tax summary (or taxpayers can view it online in their account) informing them of their taxable income and the amount collected. If HMRC determines that it owes the taxpayer money or the taxpayer owes the government money, HMRC sends a separate form explaining how it will pay or collect this money. Taxpayers with untaxed income (e.g., renting a property, tips and commissions, income from investments and dividends, and foreign income) may be required to file a return (also referred to as a self-assessment). Even if a taxpayer is not required to file a return, he or she may choose to file one to claim “income tax reliefs” for such activities as making pension or charitable contributions. Before filing a return, taxpayers who did not file a return in the previous tax year must register with HMRC, which can be done online.
HMRC will mail an identification number and activation code and set up an online account for the taxpayer to use to complete the self-assessment. Once taxpayers confirm they are registered, they can complete their tax return online using their personal account, use commercial software, hire an accountant or someone else to help them, or file on paper; taxpayers meeting specified eligibility criteria may also be able to get free professional advice. However, HMRC advises that there are certain tax situations, such as a taxpayer receiving income from a partnership, in which its website cannot be used, and taxpayers in these situations must use commercial software or file on paper. Regarding correcting errors on filed returns, taxpayers who submitted their return on HMRC’s website can make corrections there, while paper filers must mail a corrected form. Married taxpayers file separate returns, but the tax form has a marriage allowance section which allows a taxpayer to transfer a portion of his or her personal allowance to their spouse or civil partner under certain conditions.

Relevant Audit Reports Assessing Selected Countries’ Online Services for Taxpayers

Australia

The ATO commenced development of new online services for individual taxpayers in 2015 under the direction of its modernization plan, titled “Reinventing the ATO.” The Australian National Audit Office (ANAO) reviewed ATO’s modernization plan in a 2017 report and found that the plan provided “clear road maps outlining program intent, deliverables and timing” but identified challenges for ATO in conforming to those processes, specifically in completing cost estimates for the development of all new services. In addition, the ANAO found that the costs and benefits associated with the “Reinventing the ATO” program and most of its projects had not been tracked.
ANAO reported that ATO collects survey information from taxpayers about the ease of accessing services and information, doing business with the ATO, and measures of timeliness in processing complaints. ANAO noted, however, that ATO’s online services have experienced periods of outages and that ATO has not monitored the impact of these outages on satisfaction with its services. In 2017, ANAO reported that ATO had successfully implemented its recommendation to develop an overarching cross-channel strategy that detailed how the ATO plans to transition to an improved online service environment, while also continuing to provide and improve the performance of other service channels.

New Zealand

In 2011, IRD began a long-term business transformation program, which plans to modernize tax administration in New Zealand and offer new online services to taxpayers. New Zealand’s Office of the Auditor General (OAG) reviewed IRD’s governance of the business transformation program in 2015 and found IRD to be providing clear direction and supporting clear and effective decisions, but recommended that IRD continue to manage risks, including by identifying clear benefit estimates for decision makers. OAG also recommended that IRD manage risks by improving its outreach to stakeholders and taxpayers in advance of the release of new services. As a result, IRD stated in its 2018 program update that it intends to be more proactive in engaging with individual taxpayers. Regarding IRD’s procurement of goods and services for the business transformation program, OAG found instances in which IRD did not consistently comply with relevant rules and policies and made recommendations for improvement. OAG’s report noted, however, that IRD restructured its procurement function and brought in procurement specialists with appropriate skills and resources; OAG intends to follow up on the progress IRD is making in addressing its recommendations.
An OAG official reported in November 2019 that his office is currently conducting a performance audit of the measurement of benefits from the IRD business transformation program. OAG anticipates submitting the report to the House of Representatives in the first half of calendar year 2020.

The United Kingdom

In 2014, HMRC outlined a strategy (which it refers to as a transformation program) to improve its online services for individual taxpayers; the strategy included goals of promoting voluntary tax compliance, designing services to meet customer needs, and improving ease and convenience for taxpayers. The United Kingdom’s National Audit Office (NAO) has reviewed HMRC’s customer service performance, including online services. A 2016 NAO review found that HMRC had reduced the cost of its personal tax operations between 2010-2011 and 2014-2015 in part by moving customers from traditional service channels to less expensive service channels, including online services. Initially, HMRC maintained or improved its customer service performance, but HMRC ended up releasing too many customer service staff, and wait times for telephone service started to increase in 2015-2016. While HMRC’s performance improved after it recruited additional staff, NAO concluded that the sustainability of HMRC’s cost reductions will depend on the success of new online services in reducing demand for telephone and mail service. Another review by NAO in 2017 credited HMRC for exceeding the target it set for customer satisfaction with digital services, which include both existing services and new services. Moving forward, NAO recommended that HMRC continue to reevaluate its priorities for its transformation program at least annually, including by measuring the impact on customers, to ensure that new services are delivering the anticipated benefits. In addition, NAO recommended that HMRC be clearer about the way it tracks the costs and benefits of its transformation program.
In 2019, NAO found that HMRC had reprioritized its plans due to other demands related to the agency’s preparations for the United Kingdom’s planned exit from the European Union, which has resulted in the deferment of development of new online services.

Appendix II: Comments from the Internal Revenue Service

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Jessica Lucas-Judy, (202) 512-9110 or lucasjudyj@gao.gov.

Staff Acknowledgments

In addition to the individual named above, Tara Carter (Assistant Director), Michael O’Neill (Analyst in Charge), Michael Bechetti, Jacqueline Chapin, Rianna Jansen, Edward Nannenhorn, Andrew Olson, Julia Robertson, Kayla Robinson, Cynthia Saunders, Stewart W. Small, Andrew J. Stephens, Robyn Trotter, and Christopher Woika made key contributions to this report.
Why GAO Did This Study

IRS recognizes that taxpayers want more choices in how they interact with IRS, including through online services. GAO was asked to review IRS's online services—those which allow IRS and individual taxpayers to exchange personalized information electronically. This report (1) examines what is known about how IRS's current online services are meeting taxpayers' needs, and provides information about selected foreign and state revenue agencies' online services; (2) evaluates the extent to which IRS's strategy for identifying and prioritizing the development of new online services is consistent with relevant requirements and leading practices; and (3) examines how IRS is addressing key challenges in providing online services. GAO assessed IRS's online services against relevant requirements, agency goals, and leading practices; interviewed IRS officials; and identified additional services and practices from six foreign and state revenue agencies selected for offering multiple online services for exchanging personalized information with taxpayers.

What GAO Found

The Internal Revenue Service's (IRS) online services for individual taxpayers primarily provide taxpayers one-way communication of key information derived from their tax return, such as when an anticipated refund should arrive, or allow taxpayers to pay money owed or make payment arrangements. IRS has done little research or reporting on the extent to which its online services are satisfying taxpayers' needs. Also, IRS has not set a target for using online services to help reduce taxpayer burden. Selected foreign and state revenue agencies have developed online filing and communication capabilities, such as filing a tax return on the agency's website and offering electronic chats between revenue agency employees and taxpayers (see figure).
IRS has long-term planning documents that detail the online services it intends to develop, including services to communicate digitally with taxpayers, to achieve its goal of modernizing the taxpayer experience. However, GAO found that IRS has not sufficiently considered taxpayer input in the prioritization process for these new services and instead prioritizes services primarily based on the potential benefit to IRS operations or how quickly a service might be developed. Without considering taxpayer input on user needs and preferences, IRS risks developing services that taxpayers do not use. A group of private sector tax preparation companies known as Free File, Inc., has a long-standing agreement with IRS in which the companies provide free electronic tax preparation and filing services to eligible taxpayers in exchange for IRS not offering its own filing capability. However, few taxpayers use these services, and GAO found that IRS has given inadequate consideration to the full benefits and costs of the Free File agreement to all parties. Not considering these costs and benefits has implications for the future evolution of IRS's online services, including helping taxpayers electronically file amended returns.

What GAO Recommends

GAO is making seven recommendations to IRS, including measuring and reporting on the effect of online services on satisfaction and taxpayer burden and setting a target for reducing burden, considering taxpayer input when prioritizing new online services, and ensuring that any renewal of the Free File agreement reflects benefits and costs. IRS agreed with six recommendations, but disagreed on setting a target to reduce burden. GAO continues to believe IRS should set such a target.
gao_GAO-19-688
Background

Organizational Payees

Types of Organizations

Different types of organizations can serve as representative payees, from residential facilities where beneficiaries live, to organizations that only manage individuals’ Social Security retirement, disability, or other benefits. SSA’s organizational payees include social service agencies, mental institutions (federal, state or local, non-profit, private), non-mental institutions (federal, state or local, non-profit, private), financial organizations, and entities represented by public officials (such as public guardians, officers of the court, and other similar positions). For certain oversight purposes, such as periodic reviews, SSA categorizes organizational payees into several groups, including: (1) fee-for-service organizations, which charge beneficiaries a monthly fee for expenses incurred in providing services; (2) organizations that serve 50 or more beneficiaries and do not charge a fee for their services, referred to in this report as “high-volume”; (3) organizations that serve fewer than 50 beneficiaries and do not charge a fee for their services, referred to in this report as “low-volume”; and (4) state mental institutions participating in the State Onsite review program. SSA data from fiscal year 2018 indicate that the vast majority (86 percent) of organizational payees are low-volume payees, which serve 34 percent of beneficiaries (see fig. 1). Organizational payees decide how to spend beneficiaries’ funds, but must do so for the beneficiary’s use and benefit in a manner the payee determines to be in the beneficiary’s best interest. SSA considers it acceptable if the funds are used for the beneficiary’s current maintenance, which includes the costs of food, shelter, clothing, and medical care. After the representative payee has used the funds consistent with these guidelines, any remaining amounts must be conserved or invested on behalf of the beneficiary.
Organizational payees are responsible for keeping records of SSA payments made to them on behalf of each beneficiary and the expenditures for each beneficiary. All organizational payees participate in onsite reviews and—except for state mental institutions participating in State Onsite reviews—are also required to file an annual accounting form to show how benefit payments were used and any amounts that were saved. Organizational payees also are required to notify SSA of certain changes or situations, including: changes that may affect the beneficiary’s eligibility for benefits or the benefit amounts, such as when the beneficiary, or the beneficiary’s spouse, dies; when the beneficiary moves or is unable to be contacted or located, starts or stops working, or no longer needs a payee; when organizational payees learn that one or more of their employees has stolen a beneficiary’s funds or determine they can no longer serve as the payee for any reason; and when payees establish an account that mingles funds from multiple beneficiaries in one account—referred to as a collective account—because these accounts must be approved before use. The process for administering the representative payee program is guided largely by requirements in statute and SSA regulations, which SSA communicates through its Program Operations Manual System (POMS). Recent changes to the program—including the new onsite review process—reflect requirements established by the Strengthening Protections for Social Security Beneficiaries Act of 2018.
For example, the Commissioner of SSA must now: (1) reassess representative payee selection and replacement policies, (2) award annual grants (totaling at least $25 million nationwide) to each state’s protection and advocacy system to conduct onsite reviews of representative payees, and (3) award annual grants to a national association that can provide state protection and advocacy systems with training, technical assistance, and administrative oversight. In addition, the Act requires SSA to present the results of reviews—including information on representative payees’ misuse of benefits—in an annual report to Congress.

SSA administers and manages the representative payee program through three dedicated data systems:

The Electronic Representative Payee System (eRPS) is the system used to process payee applications; record poor payee performance; process changes (such as new addresses); and document misuse allegations, significant information about the payee, and why applications were approved or denied.

The Electronic Representative Payee Accounting (eRPA) system is used to capture and review annual accounting forms that all organizational payees, except state mental institutions participating in SSA’s State Onsite review program, are required to submit for each beneficiary they represent. Field office staff also use this system to track progress in resolving problems identified during reviews of the form, such as representative payees failing to submit complete information.

The newly created Representative Payee Monitoring Tool is used to track and oversee the updated onsite review process.

SSA operates the representative payee program primarily through its network of field offices. Field offices review and approve organizations’ applications to become representative payees, serve as the point of contact when organizations report changes to beneficiary or organization information, and play a role in monitoring and overseeing representative payees.
SSA policy describes the required process for designating a representative payee for a beneficiary whom SSA staff have determined to be incapable of managing his or her benefits. First, organizations apply to serve as a payee for specific individuals. Second, SSA staff review applications to assess if the organization is qualified to serve as a payee and is the most suitable payee for the individual beneficiary. Additional qualifications are assessed when organizations apply to collect fees for their payee services. For example, SSA requires that all fee-for-service applicants have already served at least five beneficiaries for a full calendar month or more, and that non-governmental agencies be licensed and bonded. Once approved, organizational payees are subject to ongoing SSA oversight. SSA reviews annual accounting forms from organizational payees on each of the beneficiaries they represent. The accounting forms are used to monitor how the payee spent or saved benefits on behalf of the beneficiary; identify situations where payment to a payee may no longer be appropriate; or determine if the payee is no longer suitable. In addition to reviewing accounting forms annually, every 3 years SSA must review collective accounts established by organizational payees. Whether or not additional oversight in the form of an onsite review is provided, and the frequency of that oversight, generally depends on the organizations’ characteristics (see table 1). Certain types of organizational payees—such as high-volume, fee-for-service, and some state mental institutions—receive onsite reviews every 3 or 4 years.
Low-volume organizational payees do not receive periodic reviews; rather, SSA selects some of these payees for onsite reviews based on their likelihood of misusing beneficiaries’ funds, and may target additional organizational payees because of an event that raises a question about the payee’s performance or suitability or because a protection and advocacy grantee thinks that a review is warranted. SSA’s purpose in conducting onsite reviews is to: (1) ensure organizational payees perform their duties satisfactorily, (2) deter misuse, (3) keep lines of communication open between the organizational payee and the servicing field office, (4) reinforce to the organizational payee their duties and responsibilities, and (5) proactively address the needs of organizational payees.

Gaps in the Organizational Payee Approval Process Introduce Risks

We identified several gaps in SSA’s process for approving organizational payees, including insufficient detail in SSA’s policies, insufficient documentation of suitability decisions, and the absence of background or credit checks on most organizational payees—gaps that may increase the risk of approving an unsuitable payee. We also identified challenges that field offices may face when approving replacement representative payees, such as a lack of local organizational payees and difficulty locating some beneficiaries.

Some SSA Policies for Approving Organizational Payees Lack Detail, and SSA Lacks a Process to Ensure that Supplemental Guidance Is Compliant with Policy

When an organization applies to serve as a payee, SSA’s policy stipulates that field office staff evaluate whether the organization is suitable. All payee applicants—individuals and organizations—are subject to the same general suitability factors, and organizations are evaluated on an additional set of suitability factors.
Organizations are generally evaluated on the same suitability factors, whether they are a first-time applicant or applying to serve an additional beneficiary. Additional requirements apply to organizations applying to collect fees (see appendix II). Two factors used to determine the suitability of organizations that are applying to be representative payees are straightforward: (1) whether the payee agrees to receive benefits via direct deposit and (2) whether the payee uses protected accounts for beneficiary funds. However, other suitability factors are more complex, such as whether the applicant: demonstrates sound financial management policies (i.e., has a history of being current in its own financial obligations); demonstrates effective internal communication (i.e., good communication between case management and financial management components); and has adequate recordkeeping systems to ensure that the client’s needs are met and benefits are properly administered. We found that SSA’s policies on how to evaluate more complex issues do not provide sufficient detail to ensure staff can fully assess an organization’s suitability. Staff at one of the eight field offices we visited told us the policies can leave room for interpretation, and staff at three field offices use additional guidance developed by field and regional offices that elaborates on how to assess some of the more complex issues in SSA’s policies. For example, SSA’s policy on what constitutes sound financial management states that an organization should have a history of being current in its own financial obligations. However, it generally does not provide direction on how to verify that an organization meets that requirement. Moreover, the policy lacks details on what staff should do to conduct a deeper assessment of an organization’s financial management practices if they think further assessment is warranted.
Similarly, SSA policy directs staff to consider whether an organization has effective internal communication, which it defines as good communication between the organization’s case management and financial management components. However, SSA’s policy does not specify what actions constitute effective communication, such as the frequency and method of communication, type of information to be shared, and time frames for transmitting information. According to federal internal control standards, agencies should establish policies to document responsibilities for a process’s objectives and related risks and communicate these policies to personnel so they can fulfill their assigned responsibilities. Although SSA officials were able to point us to sections of agency policy that went into more detail about some of these complex topics, these policies pertain only to the few organizations that are applying to collect fees. In the absence of specific guidance on how to consider factors when assessing the suitability of all organizational payee applicants, SSA staff may be approving some of them without a complete picture of their financial health and ability to be good stewards of vulnerable beneficiaries’ money. According to central office officials, regions are generally given leeway to create their own supplemental guidance documents based on SSA policy to assist with training—documents that may also serve as resources to help staff interpret SSA policy. Officials in field and area offices told us this supplemental guidance is generally made available to staff on internal websites maintained by area or regional offices. Staff in three field offices told us they use supplemental guidance to evaluate organizations. For example, staff in one field office told us they use a supplemental list of questions to interview organizational payee applicants. These supplemental interview questions address some suitability factors in greater detail than SSA policy. 
For example:

SSA policy directs staff to consider whether the organization “has adequate staff and resources to serve its clients.” The supplemental guidance from a regional office includes five questions on the number, type, relationships, and responsibilities of the staff; training and skills of staff dealing with finances; and documentation.

SSA policy directs staff to consider whether the organization “has adequate recordkeeping systems to ensure that the client’s needs are met and benefits are properly administered.” The supplemental guidance from a regional office includes nine questions on the systems, records, procedures, and safeguards related to recordkeeping.

Staff in another field office told us they created a desk guide on a range of topics related to individual and organizational payees that includes supplemental guidance documents and excerpts from SSA policy. The desk guide is a reference for all employees who work on payee issues and is also used to train new employees. However, SSA lacks a process to ensure that supplemental guidance is reviewed for compliance at the national level and that such guidance is updated by the regional office in a timely manner. Officials told us that because all regions are expected to follow SSA policy, central office staff only review supplemental guidance when the regions request it. Furthermore, SSA central office officials told us that although there is a protocol for communicating policy updates to regional, area, and field office staff, it is up to regions to refresh their own guidance. These officials did not know how long it takes regions to incorporate policy changes into regional guidance documents. As a result, field offices may be using supplemental guidance that has not been updated to reflect policy changes. For example, in a desk guide we reviewed, we identified a policy excerpt that was not the most recent version of that policy.
Federal internal control standards stipulate that management should periodically review policies and procedures for continued relevance and effectiveness. Without processes to ensure that supplemental guidance documents are reviewed for compliance or updated in a timely manner when policy changes, decisions to approve organizational payees may be made inconsistently across different regions and field offices. SSA officials told us in May 2019 that they are currently reevaluating the agency’s representative payee approval policies and procedures based on feedback gathered through a forum hosted in September 2018 by the Social Security Advisory Board and in response to a Federal Register notice published in December 2018. However, SSA did not provide additional information on the nature, scope, or time frames of this effort.

Field Offices Do Not Always Fully Document Their Decisions to Approve Organizational Payees

SSA policy requires field office staff to document their assessment of an applicant’s suitability as a payee and the rationale for deciding to approve or deny an application. In addition, before approving a payee in eRPS, the system SSA uses to manage representative payee information, field office staff are to enter notes in accordance with the eRPS user guide. Specifically, staff are directed to document their determination regarding the beneficiary’s capability to manage their own finances and the organization’s suitability as a payee for the beneficiary. In certain situations, SSA policy directs staff to enter an additional note to document the relationship between the beneficiary and the payee. However, we found that staff in field offices we visited did not always fully document their decisions before approving organizational payees for the first time. Specifically, of the 21 first-time application files we reviewed, 16 did not contain a note about the organization’s suitability.
Of the five files that did contain such notes, three provided limited detail. For example, two of the approved applications contained a note documenting that the beneficiary currently lived in the facility applying to serve as payee. However, notes in these two approved applications did not include any details regarding the prospective payee's suitability, such as information about the facility or organization itself. Moreover, in two cases where the payee was a creditor for the beneficiary, we found that SSA staff had not documented why they approved these payees even though they were creditors for the beneficiary. Applicants who are creditors for beneficiaries are generally prohibited from serving as payees. Although exceptions are allowed in certain situations—such as when the organization is a care facility licensed or certified by the state, poses no risk to the beneficiary, and has a financial relationship with the beneficiary that presents no substantial conflict of interest—staff are required to document why a creditor was selected as the payee. Although being a creditor could affect a payee's suitability, we found that field office staff had not recorded information about why they selected these two creditors as the beneficiaries' payees. We found that SSA staff might not fully document their decisions to approve organizational payees in part because eRPS, the system SSA uses to process payee applications, lacks safeguards for certain information entered into the system. As previously noted, staff use eRPS notes to document their assessment of the beneficiary's capability, the payee applicant's suitability, and, in some cases, the beneficiary-payee relationship. However, while eRPS prevents field office staff from approving a payee without first documenting their assessment of a beneficiary's capability in a note, this automated safeguard does not extend to the other note types.
According to federal internal control standards, agencies should clearly document significant events so that they are available for examination, and design their information systems to obtain and process information that responds to the agency's objectives and risks. Because eRPS allows SSA staff to approve a payee without fully documenting the decision, SSA staff may not be able to reference that information to inform future decisions about the organizational payee. Specifically, SSA staff will not be as well-prepared to make fully informed decisions about an organizational payee's continuing eligibility, or whether the organizational payee should be approved to manage benefits for additional vulnerable beneficiaries. This creates a risk that SSA staff may unwittingly approve an inappropriate organizational payee to serve other beneficiaries.

Without Screening Checks, SSA Lacks Additional Insight into the Suitability of Most Organizational Payee Applicants

SSA uses two types of external screening—background and credit checks—to identify potential concerns regarding the suitability of certain payee applicants. Whether such checks are required depends on the type of applicant, but most organizational payees do not receive either check.

Background checks for individual representative payee applicants: According to law and SSA policy, staff should conduct background checks on individual payee applicants to determine if they have a criminal history that would disqualify them from serving as a payee. As part of the background check, policy directs staff to use applicant interviews and tools embedded in eRPS to gather information about the individual payee applicant's criminal history, including prison time or unsatisfied felony warrants.
Unless the payee is exempted by SSA policy, SSA staff will request the payee's permission to conduct a background check and, if permission is granted, will then obtain a criminal report from eRPS. According to POMS GN 00502.113, "Interviewing the Payee Applicant," certain family members with custody of the beneficiary are exempt from the background check. For non-exempt individual representative payee applicants, field office staff must obtain the applicant's permission before conducting part of the background check; if the applicant does not give permission, their application to serve as payee will be denied. SSA policy provides additional detail about the specific steps staff are directed to take to obtain information on individual representative payee applicants' past criminal history.

Credit checks for some fee-for-service applicants: According to SSA policy, staff are directed to obtain and review a credit report from Dun & Bradstreet for all non-governmental organizational payees that are applying to collect fees for payee services. These credit reports include information on bankruptcies, pending or completed legal judgments, liens, payment history and risk, credit use, and how the applicant compares to other organizations in its industry. These reports (1) identify potential risk factors that create payee business losses due to fraud, failure, or severe delinquency, and (2) may provide an indication of any risk involved in the organization's current or future performance as a payee. However, SSA does not assess these risk factors for most organizational applicants because SSA policy generally does not require staff to conduct background checks for organizational payees, and SSA only conducts credit checks for organizational payees that apply to collect fees.
According to SSA data, as of July 2018, only 4 percent of organizational payees were authorized to collect fees and, therefore, may have undergone a credit check. Moreover, those credit checks that are conducted for organizations occur after their initial approval—when they are already serving beneficiaries—because organizations cannot apply to collect fees until they have regularly served as payee for at least five beneficiaries for 1 calendar month or more. SSA officials told us the agency does not conduct background checks on organizations, in part because the process is more complicated than for individuals. SSA recommends that organizational payees screen employees who deal with beneficiary funds—identifying this as a best practice—but officials told us this is not required. However, in addition to employees who handle beneficiary funds, the criminal history of an organization's principals (e.g., chief executive and operating officers, director, or president) may also help inform SSA's assessment of an organizational payee's suitability, as these individuals may exert great influence over the tone and structure of the organization. Without conducting credit or background checks, SSA risks unknowingly approving questionable organizational applicants, thereby increasing the risk that beneficiary funds may be misused. In May 2019, SSA officials informed us that, while the agency has been focused on implementing criminal background checks on non-exempt individual representative payees, it is also exploring whether to conduct background checks on organizational payees' employees or require organizational payee applicants to conduct background checks on their employees. In addition, they told us that SSA has also been considering whether to conduct credit checks on additional organizational payees, but has yet to make a decision on this matter. However, SSA did not provide information on the expected timeframes for this decision-making process.
Further, SSA lacks a comprehensive plan for evaluating if and how to expand background and credit checks to organizational payees.

SSA Also Faces Challenges Approving Replacement Organizational Payees

When an organizational payee closes or is terminated, SSA must ensure that all affected beneficiaries can continue to access their benefits, either by finding a replacement payee or—when a beneficiary is deemed capable of managing their own finances—paying the individual directly. SSA officials told us they strive to avoid temporarily suspending benefits. However, temporarily suspending benefits may be necessary to avoid sending beneficiary funds to a former payee that is no longer able or willing to manage them. SSA's policies delineate when temporarily suspending benefits may be necessary, such as when a beneficiary's whereabouts are unknown. According to SSA data, 427 organizational payees closed or were terminated by SSA in 2017, and SSA suspended benefits for more than 13,000 beneficiaries affected by payee closures and terminations in fiscal year 2017; those benefits were suspended for an average of 2.28 months. SSA policy describes the steps that SSA staff must take when dealing with the closure or termination of an organizational payee serving multiple beneficiaries, but SSA's level of involvement in finding replacement payees varies depending on the situation. Staff at one field office said that, for the only organizational payee that closed in the last several years, they were involved in finding replacement payees for affected beneficiaries before they terminated the organizational payee. However, staff from two field offices told us that SSA is not always involved in finding new payees. For example, staff at one of these field offices said that when the state closed a nursing home in their jurisdiction, it was state officials and not SSA who found new facilities for affected beneficiaries.
When these new facilities applied to serve as payee for their new residents, SSA processed the applications (see sidebar). Staff at another field office told us that before closing, some organizational payees identified prospective payees for affected beneficiaries. In those cases, payee staff submitted proposed payee changes to SSA, and SSA told these prospective payees they must file an application to become the approved payee. Officials in SSA's central office told us that staff determine if the applicant is the most suitable payee before approving them. According to SSA officials, in 2015, SSA enhanced its policy on what to do when beneficiaries are affected by an organizational payee's closure or termination. Specifically, national officials told us SSA added new procedures for appointing a new payee in cases of immediate payee termination and emphasized the narrow circumstances when it is appropriate to temporarily suspend benefits. Officials told us these changes were in response to a challenging experience terminating a large organizational payee in 2014 that served nearly 1,000 beneficiaries. Despite this change to agency policy on replacing organizational payees that are terminated or closed, SSA continues to face some challenges in approving replacement payees. Specifically, SSA staff we interviewed cited a number of challenges they had encountered, such as shortages of local organizational payees and difficulties obtaining information from terminated organizational payees. While these challenges may not apply to all field offices, they provide examples of circumstances that can complicate the process of reassigning beneficiaries.

Lack of local organizational payees. Officials in some field and regional offices said they lack sufficient organizational payees in their local area.
For example, staff in three field offices said many organizational payees in their area only serve certain types of beneficiaries, such as the elderly or individuals with developmental disabilities or specific medical conditions. Staff in two field offices told us they had unsuccessfully tried to recruit additional organizational payees in their jurisdiction. Similarly, a member of an SSA managers association noted that it has been several years since a new organizational payee was approved in her state.

Difficulty ensuring community presence for fee-for-service organizational payees. Officials from SSA's regional, area, and field offices told us that it can be challenging to meet the agency's requirement that non-governmental fee-for-service organizational payees be community based. For example, staff at an SSA area office told us that finding payees within the community is challenging in sparsely populated and remote areas, such as along Maine's border with Canada, where beneficiaries may not live near any approved organizational payees. In March 2019, SSA updated the policy on community presence for non-governmental fee-for-service organizational payees to better specify what is required for a payee to establish community presence, but it is not yet clear to what extent this update resolves field office concerns about remote areas.

Difficulty locating beneficiaries. Officials in some field and regional offices noted that they sometimes struggle to locate beneficiaries, which hinders reassignment. Homeless beneficiaries, in particular, can be difficult to find, according to staff in one regional office.

Difficulty obtaining information from terminated organizational payees. Officials in some SSA offices told us that they may lack information necessary to complete the transfer of an affected beneficiary to another payee.
For example, staff in a regional office said that terminated organizational payees may not always be forthcoming about unspent beneficiary funds. Staff in another field office told us that because a terminated organizational payee had not maintained adequate records of beneficiaries' guardians, SSA staff had to go to court to identify them before approving replacement payees.

SSA's Communication with Organizational Payees Varies and Payee Feedback Is Not Systematically Collected

SSA Has Various Opportunities to Communicate with Organizational Payees

SSA staff communicate with organizational payees at various points. According to SSA policy, field offices should communicate with organizational payees when they initially apply, and field office staff may communicate with payees as part of periodic oversight activities—such as through record change reporting requirements or following up on annual accounting forms. During the application process, SSA field office staff should explain the responsibilities and duties of a payee. For example, they should explain that payees must submit an annual accounting form and that payees must keep detailed records of how benefits are used in order to provide an accurate report to SSA when requested. Field office staff also should explain when payees must contact SSA, such as when a payee's address changes. Monitoring and oversight activities, such as reviews of annual accounting forms, also provide opportunities for SSA field staff to communicate with organizational payees. Similarly, SSA's ongoing reporting requirements—such as to update certain beneficiary or payee information—provide another opportunity to interact. According to field staff we interviewed, staff frequently communicate with organizational payees regarding changes to a beneficiary's address. Finally, according to SSA officials, SSA also communicates with organizational payees by providing information online and providing guidance documents when payees are approved.
While all field offices communicate with their organizational payees, how field offices communicate with payees can vary. Four of the eight field offices we visited had designated specific staff to work either with each organizational payee or with high-volume payees. In the other four field offices, payees talk to whichever staff member is available. SSA officials told us that the different workforce arrangements stem from varying workflows and staffing resources at individual offices. Similarly, we found variation across field offices regarding whether SSA staff reach out to organizational payees even if changes do not need to be made or problems addressed. For example, staff at four of eight field offices said that they have held training sessions for groups of organizational payees. Further, staff at three field offices told us that SSA provides training to specific organizational payees at their request, such as when an organization experiences staff turnover.

Selected Organizational Payees Expressed Frustrations with SSA Communications

Seven of the eight organizational payees we spoke with expressed frustration either with SSA's follow-through on communications or with its processes for receiving information from payees.

Application status updates. Three payees said that SSA staff did not tell them how long it would take to review their application. They also said SSA staff had not provided updates during the process, which took 2 to 3 months or longer to complete.

Follow-up calls. These three payees also said that they were not told how long it would take for SSA staff to return their calls, and two said that sometimes they never received a call back.

Wait times. Two payees said that it takes too long to provide information in person at SSA field offices. For example, after signing in at a kiosk, a payee may have to wait for hours until their number is called.
This payee said that they often bring beneficiaries to the SSA office and that long wait times can be very difficult for them, particularly those with mental illness. In some cases, beneficiaries have walked out or passed out while waiting in the SSA office, according to the payee. The payee also said that long wait times are sometimes compounded when field office staff require them to return to the queue for each successive case rather than handling all the payee's cases at once. However, because field offices are allowed to establish their own workflow processes, this issue may not apply to other field offices.

Faxing documents versus sending them electronically. Three payees said that having to fax documentation to SSA rather than send this information electronically creates additional work. SSA officials said that the agency has a plan to allow individual payees to securely transmit personally identifiable information electronically, but has not established a timeframe for allowing organizational payees to do so.

At the field offices we visited, managers had different expectations regarding time frames for responding to payee requests. Three managers we interviewed said that staff should respond to payees as soon as possible, three managers said that staff should respond within 24-48 hours, and two managers said staff should respond within 7-14 days. SSA officials told us that SSA has not set timeliness standards for field offices because doing so could affect other workloads in unanticipated ways and it is the agency's goal to provide service and support to all payees on an ongoing basis.

Organizational Payees May Provide Feedback to SSA but SSA Lacks Systematic or Formal Feedback Mechanisms for Collecting and Analyzing Feedback

SSA may receive feedback from organizational payees through various mechanisms.
Officials from SSA’s central office told us that organizational payees can provide feedback either by contacting their local field office or calling SSA’s national customer service number. Some field office staff also said that they provide informal opportunities for payees to offer feedback. For example, one field office manager told us that he spends time building relationships with organizational payees, solicits feedback by asking how things are going, and sometimes visits organizational payees when he is nearby. Another manager emphasized the importance of gathering and responding to organizational payee feedback. This manager said that she established quarterly calls with multiple payees to discuss issues and solicit feedback. Managers of two field offices told us that they provide standardized SSA customer comment cards in their waiting areas, although the cards do not ask respondents to identify whether they are organizational payees. However, SSA does not have a mechanism for payee feedback to be systematically collected, compiled, and analyzed across field offices to determine if programmatic changes are warranted. SSA officials said they do not have or plan to develop a formal mechanism for collecting and analyzing organizational payee feedback because the current process allows field offices to respond to all public contacts in a consistent and timely manner. However, federal internal control standards state that management should establish reporting lines that allow the agency to receive quality information from external stakeholders and specify that quality information, among other things, should be complete, current, and provided on a timely basis. Without a formal mechanism to systematically collect and analyze payees’ feedback and ideas for program improvement, SSA cannot be sure that it is receiving complete or current impressions from organizational payees on how efficient its processes are or how timely it responds to their needs. 
Being aware of and responding to payees' concerns might help the agency retain and attract organizations to serve as payees and ensure it is well-positioned to meet future challenges.

SSA Offices Use Several Methods to Oversee Organizational Payees, but the Methods Have Shortcomings

SSA uses several methods to oversee organizational payees, including conducting onsite reviews and reviewing annual accounting forms and collective accounts. However, each of these methods has shortcomings in its design and implementation, weakening SSA's ability to effectively oversee payees and prevent fraud. SSA officials said they plan to conduct an over-arching assessment of fraud risks to the representative payee program in 2019, but the robustness of such a plan is yet to be determined.

SSA is Transitioning to a New Onsite Review Process but its Process for Targeting Low-Volume Payees for Review is Poorly Documented

Transition to SSA's New Onsite Review Process

State protection and advocacy agencies ("state grantees"), the national association grantee (which is currently the National Disability Rights Network, or NDRN), and SSA regional offices play key roles in the new onsite review process for organizational payees. Given the extent to which onsite reviews uncover misuse and other problems, the onsite review is a crucial control for the representative payee program. Under the new process, state grantees generally interview selected payees, beneficiaries, and legal guardians or third parties; review financial records for selected beneficiaries over a 12-month period; transmit findings from their reviews to SSA; and, in some cases, follow up on deficiencies they identify. State grantees also suggest additional payees to review (beyond those targeted by SSA) if they think such a review is warranted.
According to SSA, the national association grantee’s responsibilities include: (1) training state grantees; (2) ensuring the quality of onsite reviews; (3) serving as the first point of contact for state grantee communication and questions; and (4) providing state grantees with technical assistance, administrative support, and data collection services. According to SSA, the regional offices are responsible for compiling information to facilitate grantees’ onsite reviews that is not automatically provided through SSA’s system and for clarifying procedural and technical information for the grantees. Regional offices also address and resolve all deficiencies the grantees do not resolve, according to SSA. Lastly, under the new system, state grantees, the national association grantee, and SSA input information from reviews and track progress towards completing their assigned reviews using SSA’s new Representative Payee Monitoring Tool, which is used to manage and control the new onsite review process. According to six NDRN representatives, transitioning to the new onsite review system involved challenges with grantees gaining access to equipment, working through bottlenecks at some regional offices, responding to unanticipated workloads, and receiving timely responses to feedback. Specifically, NDRN representatives said that while the process of clearing grantees to access beneficiaries’ personally identifiable information has been efficient, there have been delays providing grantees with access to SSA laptops, printers, and scanners. As a result of these equipment delays, grantees started to conduct reviews on paper and then input the information later, according to NDRN representatives, thus using less efficient, manual processes. NDRN representatives also said that the new onsite review process involves multiple handoffs between grantee and regional office staff, which has contributed to bottlenecks at some regional offices. 
Moreover, NDRN representatives noted that, in addition to the reviews SSA originally assigned to the grantees, regional offices have tasked them with conducting quick response checks. Because these reviews have generally involved assessing a large number of financial records and conducting many beneficiary interviews, and were not anticipated in SSA's initial plan, NDRN representatives believe they may affect the ability of some state grantees to complete the other reviews SSA had initially planned. Lastly, an NDRN representative said that the timeliness of SSA's responses to grantee feedback and concerns (communicated from state grantees via NDRN) has diminished in recent months. Specifically, the NDRN representative said that the computer program SSA staff developed to enable NDRN to submit questions to the agency was initially working well. However, recently, as the volume of NDRN's questions has increased, the system is not working as well, and NDRN has asked for clarification on some important issues, to which SSA has not yet responded. As of May 2019, SSA reported to us on progress state grantees had made towards reaching the total number of reviews SSA had planned for the fiscal year. Specifically, as of May 21, 2019, state grantees had conducted 112 of 852 planned high-volume reviews; 45 of 461 planned fee-for-service reviews; and 0 of 60 planned state mental institution reviews. Although SSA initially assigned 2,800 low-volume reviews in grant year 2019, SSA estimated in July 2019 that it will have initiated around 1,600 low-volume reviews by the end of the first grant year—about the same number as completed in fiscal year 2018 (1,691). SSA officials acknowledged these challenges and said they have been addressing them, and will continue to address them and to monitor progress. SSA officials cited significant improvements in issuing laptops since they began the process in September of 2018.
Regarding delays in distributing printers and scanners, SSA reported that it is in the final stages of procuring printers but that, as of May 2019, it had not identified an acceptable scanner model. SSA officials also said they are developing a policy to govern grantee use of printers. SSA acknowledged that workflow bottlenecks involving regional offices may exist, and said that it will continue to monitor all actions required to be taken by regional office staff. SSA staff also acknowledged having initiated more quick response checks than originally anticipated, and said they are researching options to alleviate the impact of these reviews on NDRN and state grantee resources. Finally, SSA staff said that they will continue to evaluate how SSA collects and responds to state grantees' feedback, and to hold weekly discussions with NDRN to identify ways to improve the new onsite review process. GAO is not making recommendations in this area because the onsite review process is new and SSA continues to implement it and work to address implementation challenges.

Targeting Onsite Reviews for Organizations That Are Low-Volume Payees

Onsite reviews are resource intensive because they involve examining organizational payee financial records and interviewing payee staff and beneficiaries; therefore, SSA uses a risk-based approach to select which organizational payees receive onsite reviews and how frequently such reviews occur. SSA reviews all fee-for-service payees, high-volume payees, and certain state mental institutions—which together account for around 67 percent of all beneficiaries and about 14 percent of all organizational payees—at regular intervals of every 3 or 4 years, depending on the type of organization. However, for the vast majority of organizations that are low-volume payees (29,082 of around 33,700), SSA selects a subset of payees to receive onsite reviews each year.
As shown in figure 2, more than half of the onsite reviews SSA conducted in fiscal year 2018 were for low-volume payees (1,619 of 2,774 reviews). However, because there are so many low-volume payees, only about 6 percent of these payees received an onsite review. In contrast, the lower number of high-volume onsite reviews conducted (767) covered about 25 percent of high-volume payees. Given that only a fraction of low-volume payees are selected for onsite review each year, it is critical that SSA effectively prioritize which payees should receive onsite reviews so SSA can effectively allocate resources. To this end, SSA uses a predictive statistical model it first implemented in 2012 to rank low-volume organizations based on their chance of misusing beneficiary funds and selects for onsite reviews those organizations identified as having the highest risk. SSA staff told us they determine how many reviews to conduct based on available resources. However, we were unable to fully assess SSA's decisions in developing its model, or the model's accuracy at predicting misuse compared to alternative targeting methods, because SSA did not fully document, or retain documentation of, important decisions it made when developing the model. For example, the available documentation does not explain in sufficient detail how SSA assembled data on the target population; how SSA sampled organizational payees for assessing characteristics; which variables SSA considered using to help predict misuse but ultimately decided not to include; how, if at all, it assessed or assured itself of the reliability of the data the model used; and how it decided to account for multiple beneficiaries with the same payee. An SSA official responsible for using the model said he was not sure whether documentation existed but was not retained, because the individuals who developed the model are no longer with the agency.
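The coverage rates in the paragraph above follow directly from the counts it cites; as an illustrative arithmetic check (the total number of high-volume payees is not stated in this section, so that rate is not recomputed here):

```python
# Illustrative check of the fiscal year 2018 onsite-review coverage rates
# cited above, using only the counts stated in this section.
low_volume_payees = 29_082   # low-volume organizational payees
low_volume_reviews = 1_619   # onsite reviews of low-volume payees
total_reviews = 2_774        # all onsite reviews conducted in FY 2018

low_volume_share_of_reviews = low_volume_reviews / total_reviews  # ~58%: "more than half"
low_volume_coverage = low_volume_reviews / low_volume_payees      # ~5.6%: "about 6 percent"

print(f"{low_volume_share_of_reviews:.0%} of reviews were of low-volume payees")
print(f"{low_volume_coverage:.1%} of low-volume payees were reviewed")
```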
Office of Management and Budget (OMB) standards for federal censuses and surveys—which contain accepted practices (but not requirements) for federal statistical efforts not officially covered by the standards—call for documentation that “includes those materials necessary to understand how to properly analyze data” and “replicate and evaluate” statistical estimates. Moreover, federal internal control standards state that effective documentation enables agencies to retain organizational knowledge, mitigate the risk of having knowledge limited to a few personnel, and communicate knowledge as needed to external parties, such as external auditors. Due to the absence of key documentation, neither SSA itself nor an external party is able to affirm whether, in comparison to other approaches, SSA’s predictive model is the optimal approach to identify low-volume payees and beneficiaries with the highest risk of misuse. SSA officials told us that they will revise the model at some point in the future—at which point they could improve the documentation—but that they do not have a formal plan to do so. SSA officials said they do not have imminent plans to update the model because the pool of identified misuse cases, which is driven by the number of onsite reviews conducted, is too small. Finally, SSA officials said they are hesitant to re-evaluate the organizational payee predictive model because they believe the current model is working effectively. However, seven years have passed since SSA first developed the model, and SSA cannot be assured that the current model remains as effective as when it was last formally validated and compared to alternative models or targeting methods. Accepted practices for developing predictive statistical models call for periodic re-estimation and re-validation, using data that are current and applicable to the conditions in which the model will be applied. 
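The periodic re-validation that these accepted practices call for can be illustrated with a minimal sketch: score a recent, held-out set of reviewed payees with the existing risk score and check whether it still separates misuse from non-misuse cases. All records, features, and weights below are invented for illustration; SSA's actual model specification is not documented.

```python
# Hypothetical illustration of re-validating a risk-targeting model on a
# recent holdout set. Features, weights, and data are invented; this is
# not SSA's model, whose specification was not retained.
import random

random.seed(7)  # make the simulated holdout reproducible

def risk_score(payee):
    # Stand-in linear score over two invented features.
    return 0.7 * payee["prior_violations"] + 0.3 * payee["beneficiaries_served"] / 50

def auc(scored):
    """Probability a randomly chosen misuse case outscores a non-misuse case."""
    pos = [s for s, label in scored if label]
    neg = [s for s, label in scored if not label]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Simulated "recent reviews" holdout: misuse cases tend to have more prior violations.
holdout = [
    {"prior_violations": random.randint(2, 6),
     "beneficiaries_served": random.randint(5, 50), "misuse": True}
    for _ in range(20)
] + [
    {"prior_violations": random.randint(0, 2),
     "beneficiaries_served": random.randint(5, 50), "misuse": False}
    for _ in range(80)
]

scored = [(risk_score(p), p["misuse"]) for p in holdout]
print(f"Holdout AUC: {auc(scored):.2f}")  # well above 0.5 if the score still discriminates
```

A re-validation of this kind requires only the outcomes of recent reviews, which is why the low number of identified misuse cases limits how often such an assessment can be run.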
Moreover, federal internal control standards call for agencies to conduct ongoing monitoring of the design and effectiveness of the internal control system, including evaluations of control design. SSA reported conducting ongoing assessments of the model's continued effectiveness, and provided us with aggregate performance data for 2012 to 2016. However, the inclusion of older data and the absence of more recent misuse data in these aggregated results provide limited assurance of the model's ongoing effectiveness. In addition, a recent report by SSA's Office of the Inspector General (OIG) suggests that it may be possible to assess the ongoing suitability of nursing home payees by using additional data, although we did not evaluate the validity of the study's conclusions. The SSA OIG's report expressly looked at how data from the Centers for Medicare & Medicaid Services (CMS) might be used to evaluate the suitability of nursing homes and found that these data would help SSA more effectively assess the ongoing suitability of existing nursing home payees. Specifically, the OIG used CMS data reflecting penalties and other signs of underperformance to identify poorly performing nursing homes that might also be poorly executing their duties as payees. SSA officials, however, told us that expanding the model to predict outcomes other than misuse would significantly dilute the model's ability to detect misuse, which they consider to be the most important goal of the representative payee review process.
Developing additional models to predict other types of poor payee performance besides misuse (such as poor recordkeeping or payees' failing to meet beneficiary needs, which were identified in the OIG study) could reduce SSA's reliance on a model for which the low number of misuse findings affects the efficacy of ongoing performance assessments and prevents timely updates. Since SSA has identified only 31 misuse cases using the predictive model since 2012, decades may pass before SSA has the approximately 5,300 misuse cases it wants in order to formally evaluate the model, and before SSA and others can be assured that low-volume payees are being optimally targeted for review. Without re-evaluating whether the current model remains predictive, and without periodically assessing whether it predicts high-risk payees better than an alternative model or targeting method, it is unknown whether SSA has maximized its ability to target low-volume payees.

SSA Reviews Annual Accounting Form Submissions and Collective Accounts, but these Reviews Have Shortcomings

Timeframes for Annual Accounting Form Follow Up

The annual accounting form is a key oversight tool because it touches most organizational payees, and reviewing the form helps SSA maintain current beneficiary and payee information and identify and resolve potential problems. For instance, according to SSA policy documents, the form can help SSA identify previously unreported changes to beneficiaries' addresses; identify unapproved collective accounts; determine if certain beneficiaries' savings are too high to qualify for benefits; or determine whether the organizational payee is authorized to charge a fee, if the payee reports charging one. When SSA's electronic processing of submitted forms indicates a potential problem, field offices sometimes follow up with the payee to resolve the issue. However, SSA has not established time frames within which field offices must initiate this follow-up.
For example, SSA guidance states that when organizational payees do not submit the form on time, field offices should contact the payee by phone to find out why the required form was not completed. However, the guidance does not establish time frames within which field offices should initiate the call. Similarly, SSA told us they do not have time frames within which staff should follow up to resolve potential problems flagged during electronic testing. In the absence of national guidance, area offices we interviewed varied in the extent to which they established time frames for the field offices in their purview to follow up with organizational payees that did not submit an annual accounting form or whose form was flagged for potential errors. One area office we talked with expected staff to follow up with payees within 30 days but did not track time frames, another area office had not established time frames, and officials from one field office told us that their area office considered follow-up over 120 days to be untimely. Given the absence of SSA guidance and the variation in area office practices for establishing time frames, field offices may not attend to this oversight mechanism in a timely manner. Officials at one field office we visited told us that they had a backlog of forms needing follow-up because the designated point person had left the agency. Officials from another field office attributed the backlog to multiple factors, including a staff person being out sick and their workload not being reassigned, and the office taking on a special project. While we heard from several field offices that the majority of follow-up on annual accounting forms is for clerical errors or mistakes, staff from one field office said that when staff must follow up with organizational payees to ensure they submit a simple accounting form, it raises concerns about whether those payees are fulfilling their other duties.
Federal internal control standards state that managers should use quality information to achieve the entity's objectives and that they should ensure information is complete and provided on a timely basis. In May 2019, SSA officials told us that they are now exploring approaches to implement a nationwide time frame to address these forms because a 2018 law—which reduced the volume of annual accounting forms SSA has to process—allows staff to focus on problematic forms more expeditiously. SSA officials said that they had not previously established a time frame because they expected organizational payees to have routine contact with field offices and expected field offices to re-evaluate the payee's suitability if the payee did not cooperate when conducting SSA business. In addition, SSA expects state grantees to follow up on accounting forms as part of their onsite reviews. At the same time, one of SSA's stated purposes for using the annual accounting form is to evaluate payee suitability on a regular basis rather than relying on ad hoc interactions between the payee and field office, or relatively infrequent periodic and targeted reviews. Until SSA establishes time frames within which staff must follow up on issues identified during annual accounting reviews, the agency cannot ensure that it is taking timely action to resolve potential problems and maximize this monitoring tool.

Content and Design of the Annual Accounting Form

Although the accounting form is a key oversight tool for SSA, shortcomings exist in the form's content and design. For example, SSA's annual accounting form does not ask or remind organizational payees about all collective account requirements, and as a result does not fully support SSA's oversight efforts. Collective accounts are permitted under SSA policy, but SSA reviews and approves them to ensure that payees comply with SSA's policies and procedures.
While the annual accounting form asks payees whether they put any saved funds into a collective account, the form does not ask whether the payee uses a collective account for day-to-day expenses. Payees should disclose the use of any collective account to SSA independent of the form but may have neglected to, and SSA does not use the form to fully ascertain the use of collective accounts. Consequently, SSA may not have up-to-date information about all of the collective accounts that an organizational payee might be using—information that could help place these risk-prone accounts on SSA’s radar to initiate the approval process and provide ongoing oversight. Federal internal control standards state that agencies should design control activities to achieve objectives and respond to risks. When SSA officials were asked why the annual accounting form does not ask about all collective accounts, the officials said this would be unnecessary because payees are required to notify the field office if they wish to open such accounts. SSA also indicated that its periodic and targeted onsite reviews will uncover collective account issues for the highest risk payees. However, SSA finds many instances of unapproved collective accounts during its onsite reviews, suggesting that organizational payees might not be proactively reporting opening such accounts to SSA as required. For example, in fiscal year 2018, SSA found unapproved collective accounts in nearly 17 percent of the onsite reviews it conducted of organizational payees (in 477 instances out of 2,882 reviews). Staff we interviewed from one field office also said they have identified organizational payees with unapproved collective accounts. Specifically, staff said they have identified at least three payees with unapproved accounts, one of which they identified when reviewing the payee’s annual accounting form. 
This suggests that some payees may be willing to report they have a collective account, but not remember or understand their responsibility to seek approval from SSA when they open such accounts. Although SSA’s accounting form includes reminders of various payee responsibilities, the form does not include a reminder to all payees that they should notify SSA when they establish collective accounts. Reminding payees of these responsibilities could serve as a regular reminder for payees to notify SSA about the existence of these accounts, and thereby help ensure SSA provides regular oversight of them. Stakeholders have also identified shortcomings in the content and design of the accounting form. For example, SSA currently provides payees’ total benefit amounts in the form, and asks payees to report how they spent those benefits. In a 2007 review of SSA’s representative payee program, the National Academy of Sciences (NAS) reported that because SSA preprints total annual benefit amounts on the annual accounting form, it is easy for payees to report spending that matches the total provided by SSA. Even if the amounts the payee reported were incorrect, SSA’s electronic check would not trigger further review of these responses as long as the numbers added up. NAS further suggested that omitting this information would reinforce payees’ responsibility for keeping and consulting their records. In light of this and other findings, NAS broadly recommended redesigning the form to collect more meaningful data—a recommendation echoed by the Social Security Advisory Board in 2018. 
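NAS's point about the preprinted total can be made concrete with a minimal sketch. This is our simplification, not SSA's actual edit check: a form passes the arithmetic test whenever the reported spending categories sum to the preprinted benefit total, regardless of whether the reported amounts reflect real spending.

```python
# Minimal sketch (an illustration, not SSA's actual electronic edit check) of
# the sum-to-total consistency test described above. Amounts are in cents.
def flags_for_followup(preprinted_total_cents: int, reported_cents: dict) -> bool:
    """Return True if the form should be flagged for field-office follow-up."""
    return sum(reported_cents.values()) != preprinted_total_cents

# A payee who simply copies the preprinted total into one category passes,
# even if no records were consulted:
form = {"food_and_housing": 1_200_000, "other": 0, "saved": 0}
print(flags_for_followup(1_200_000, form))   # False: not flagged

# Only an arithmetic inconsistency triggers follow-up:
bad_form = {"food_and_housing": 1_000_000, "other": 0, "saved": 0}
print(flags_for_followup(1_200_000, bad_form))  # True: flagged
```

This is why NAS suggested omitting the preprinted total: without it, a payee cannot trivially construct category amounts that are guaranteed to pass the check.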
When asked why SSA did not adopt NAS' recommendation, SSA indicated to us that it believed that NAS signaled that other recommendations were more important, and cited NAS' statement that "no form, by itself, is going to detect program misuse." At the same time, NAS restated its recommendation to redesign the form twice in its report, and in each instance noted how the form could complement other oversight efforts. Research also suggests that agencies can improve the quality of the data they collect via forms by applying behavioral science insights. For example, behavioral science research has shown that requiring a signature at the beginning of an online form helps promote honest self-reporting and can lead to government savings. (In contrast, the signature field in SSA's accounting forms is located at the end of the form.) See Executive Office of the President, National Science and Technology Council, Social and Behavioral Sciences Team Annual Report (September 2015), pp. 31-32; and Lisa L. Shu, Nina Mazar, Francesca Gino, Dan Ariely, and Max H. Bazerman, "Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end," Proceedings of the National Academy of Sciences of the United States of America (Nov. 18, 2012). Moreover, the Internal Revenue Service has identified approaches based on behavioral science insights for improving compliance and honest self-reporting, and for encouraging people to make good choices when providing information. Given the importance of the annual accounting forms for oversight of payees, considering and applying, where appropriate, behavioral science insights while redesigning the accounting forms could help SSA achieve more reliable and accurate reporting.

Collective Account Follow Up

During our review, we learned that alerts in eRPS for expiring collective accounts, and the account information itself, can disappear from the system after an account's approval expires, potentially allowing payees to continue using these accounts without oversight. A regional office analyst referred to this as a glitch in the system and told us this issue was recently raised during a meeting with the central office.
In response to our inquiry about disappearing alerts and collective account information, SSA staff indicated that removing alerts and collective account information after approval expires is appropriate because field offices should always renew collective accounts before this occurs. SSA further explained that the alerts are not deleted from eRPS, but rather removed from the system's "Workload Action Center" 30 days after the collective account expiration date. Similarly, SSA reported that collective account information is not deleted from eRPS, but rather no longer displayed as an active account. However, removing information on accounts that were not renewed on time weakens the efficacy of its collective account review process to the extent that accounts are operating without SSA approval and oversight.

SSA Did Not Have a Process for Periodically Assessing Program Risks, but Recently Said it Plans to Conduct a Comprehensive Fraud Risk Assessment

SSA has taken steps to address risk associated with payee oversight, but to date has not continuously assessed and responded to potential risks. Federal internal control standards state that to manage risk, agencies should identify risks that might prevent the agency from achieving its objective; assess the significance of those risks; and design responses so that analyzed risks are within the agency's risk tolerance level. In June 2013, SSA formed a task team to conduct a comprehensive review of the representative payee program and develop recommendations. This effort resulted in, for example, a new process of sharing misuse information with the Department of Veterans Affairs. While this was a positive step, the task team disbanded in 2014 because it had generated a set of recommendations and SSA wanted to shift to implementing those recommendations, according to agency staff.
However, resulting actions did not include a forum or system for continuously assessing lessons learned from audits and reviews or identifying solutions that might have addressed gaps we identified in this report. For example, we found that SSA discovers many instances of unapproved collective accounts during onsite reviews, but we have not seen documentation that SSA has assessed the risk of unapproved collective accounts existing among low-volume payees that do not receive any regular scrutiny. Having a process for continuously assessing and responding to potential risks could better position the agency to respond to pressure placed on the payee program due to an aging population. As of May 2019, SSA reported it was in the early stages of planning a fraud risk assessment of the representative payee program (for both individual and organizational payees). In January 2019, a staff person within SSA's Office of Anti-Fraud Programs, which provides centralized accountability and oversight for the agency's anti-fraud activities, told us they had identified the representative payee program as one that might benefit from a risk assessment, and that they were currently developing a strategy for conducting such risk assessments for a number of programs. At that time, the staff person did not know whether they would be doing a fraud risk assessment of the representative payee program specifically. SSA subsequently reported in May 2019 that the agency has established a schedule and business process for conducting its risk assessments, including one on the representative payee program. According to SSA, the fraud risk assessment will provide a comprehensive and strategic look at the fraud risks facing the representative payee program and the controls SSA has in place to mitigate those risks.
SSA also reported it plans to begin the assessment of the representative payee program in October 2019, and update it every 3 years beginning in 2024 to determine whether there have been any changes to the risks and whether additional actions are required. While promising, SSA's plans have yet to take shape. Ensuring that its fraud risk assessments periodically examine the results of onsite reviews and audits will be an important element in the design of SSA's risk assessment efforts.

Conclusions

Organizational payees play a critical role in ensuring beneficiaries' basic needs are met. The beneficiaries these payees serve—individuals who cannot manage their own finances and lack a family member or friend to do so on their behalf—are dependent on their representative payees and thus extremely vulnerable to financial abuse. It is therefore crucial that SSA take steps to shore up a range of gaps in how the agency evaluates, supports, and oversees payees to better ensure beneficiaries are protected. Carefully screening organizations applying to be representative payees is key to proactively avoiding potential abuse. However, in the absence of detailed and centrally-approved policy guidance on how to assess complex suitability factors for approving payees, SSA cannot be sure that field office staff are consistently and appropriately evaluating applicants' suitability. Also, until SSA updates its electronic system to ensure staff's rationale for approving or denying payees is captured in accordance with policy, SSA may not have the benefit of information to better monitor payees and inform future suitability decisions. Lastly, without a comprehensive plan, including timeframes, for evaluating if and how to conduct background and credit checks to help staff vet organizational payees—as it does for individual payee applicants—SSA may forgo potentially valuable safeguards for further protecting vulnerable beneficiaries.
Once approved, organizational payees rely on SSA for information or action in order to effectively carry out their responsibilities. Absent a formal mechanism whereby feedback from payees on SSA services and processes can be collected, compiled, and analyzed, SSA may not be sufficiently aware of payee needs and frustrations, which in turn could result in lost opportunities to either retain or recruit organizations willing to serve this critical function, or make program improvements. To ensure payees are managing beneficiary funds appropriately, SSA relies on a number of monitoring mechanisms, including onsite reviews. Onsite reviews represent SSA’s most thorough and resource-intensive monitoring tool and must be appropriately targeted. Until SSA develops a plan to periodically review the predictive model’s design, considers inclusion of additional relevant data in the current model or an alternative model that predicts outcomes other than misuse, and documents any subsequent design changes, the model’s efficacy cannot be fully assessed or ultimately improved upon, and SSA may not be effectively targeting high-risk, low-volume payees for review. SSA may detect payee performance problems by reviewing annual accounting forms for all organizational payees; however, without a process to ensure prompt follow-up, SSA cannot be sure staff resolve problems in a timely manner. Moreover, mingling beneficiaries’ funds in collective accounts can mask misuse, and until SSA addresses gaps in the annual accounting form and issues with eRPS, SSA cannot effectively monitor payees’ use of such accounts. Addressing gaps in existing processes could improve the integrity of SSA’s representative payee program and reduce risks to SSA’s most vulnerable beneficiaries, but may not be sufficient in light of challenges posed by the nation’s aging population, which could swell the number of vulnerable beneficiaries that need payees. 
Carrying through with its plan to develop initial and periodic fraud risk assessments for the representative payee program—and ensuring that the assessments reflect consideration of findings from onsite reviews and audits—could help SSA anticipate potential problems and develop strategies to mitigate their impact.

Recommendations

We are making the following nine recommendations to SSA:

The Commissioner of the Social Security Administration should ensure that (a) the agency's policies and guidance are specific enough so field office staff know how to apply complex suitability criteria for assessing payee suitability, such as by providing a minimum set of specific questions; and (b) additional regional guidance that is made available to staff is centrally reviewed for compliance and completeness. (Recommendation 1)

The Commissioner of the Social Security Administration should create safeguards in the eRPS system to ensure that field office staff fully document all required information, such as the rationale for their decision, before approving an application. (Recommendation 2)

The Commissioner of the Social Security Administration should complete a plan, including timeframes, for comprehensively evaluating if and how to leverage external sources of information on organizations' suitability, such as by conducting background checks or credit checks on organizations or key staff that handle beneficiaries' funds or requiring organizations to conduct their own background checks on key staff. (Recommendation 3)

The Commissioner of the Social Security Administration should develop and implement mechanisms to systematically obtain and review feedback from organizational payees and communicate findings to SSA management.
(Recommendation 4)

The Commissioner of the Social Security Administration should (a) establish a plan and time frame for periodically reviewing the predictive model's design; (b) consider additional data sources that would allow for additional screening or modeling of potentially high-risk organizational payees; and (c) ensure that subsequent design decisions are documented in sufficient detail so the development process can be more fully understood and replicated, either by SSA or a knowledgeable third party, with minimal further explanation. (Recommendation 5)

The Commissioner of the Social Security Administration should require field offices to contact payees about missing or problematic annual accounting forms within a specific time frame. (Recommendation 6)

The Commissioner of the Social Security Administration should revise the annual accounting form to enhance its effectiveness. Such revisions could include (but not be limited to) more fully ascertaining the use of collective accounts, adopting stakeholders' recommendations on using the form to collect more meaningful data, and reflecting best practices from behavioral science insights in the design of the form. (Recommendation 7)

The Commissioner of the Social Security Administration should enhance the eRPS system to ensure that field offices are (a) alerted when collective accounts are due to be reviewed; and (b) able to take action on expired collective account information and thereby avoid payees' continued use of these accounts without oversight. (Recommendation 8)

The Commissioner of the Social Security Administration should, as it carries through with its plan to develop a risk assessment for the organizational payee program, ensure that the plan reflects periodic consideration of findings from onsite reviews and audits. (Recommendation 9)

Agency Comments

We provided a draft of this report to SSA for review and comment.
In written comments, reproduced in appendix IV, SSA agreed with all nine of our recommendations and outlined its planned actions to address several of them. SSA also provided technical comments that we incorporated into the report, as appropriate. SSA provided additional comments on its plans to address four of our recommendations. Specifically, with respect to our second recommendation that SSA create safeguards in its Electronic Representative Payee System (eRPS) to ensure that field office staff fully document decisions to approve organizational payee applications, SSA reported that, as part of implementing the Strengthening Protections for Social Security Beneficiaries Act of 2018, planned changes to eRPS will improve documentation of selection decisions. SSA also reported it will consider additional enhancements to eRPS in the future. We welcome SSA's intentions to improve documentation of selection decisions and consider additional enhancements to eRPS. With respect to our third recommendation that SSA complete a plan, including timeframes, for evaluating if and how to leverage external sources of information on organizations' suitability, such as by conducting background checks or credit checks on organizational payee applicants, SSA officials reiterated that SSA is first focusing on implementing provisions of the Strengthening Protections for Social Security Beneficiaries Act of 2018 related to background checks for certain individual payees. After completing this work, SSA plans to evaluate conducting criminal background checks and credit checks on organizational payees and their staff. While we agree that implementing background screening pursuant to the law should take precedence, SSA should seek opportunities to implement screening for organizational payees at the earliest opportunity.
With respect to our fifth recommendation related to SSA reviewing, enhancing and documenting its model for selecting low-volume organizational payees for on-site reviews, SSA reported that it will pursue other data sources to develop additional screening tools and models to identify potentially high-risk organizational payees, but that it is unable to incorporate additional data into the existing model. We recognize that the current model, which focuses on misuse findings and is based on historical data, presents challenges for both updating and including new data sources. Therefore, as SSA considers additional screening tools and models to identify high-risk, low-volume organizational payees, SSA should develop a plan for revising the existing model that allows for more timely updates and results in documentation of related design decisions. With respect to our eighth recommendation that SSA enhance the eRPS system to more effectively address expiring collective accounts, SSA officials reported that they would work with staff to ensure staff know where to find alerts for expiring accounts and enhance how eRPS displays information on collective accounts that have already expired. We agree with SSA’s proposed actions. However, we adjusted our recommendation to clarify that SSA should enhance eRPS in a manner that ensures staff take action on expired accounts and that payees do not continue to use expired accounts without oversight. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Commissioner of the Social Security Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4040 or curdae@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

The three objectives examined in this report are how the Social Security Administration (SSA): (1) approves organizations to be representative payees, (2) communicates with organizational payees, and (3) oversees these organizations. To address our three objectives, we reviewed relevant federal laws and SSA policies and guidance. We interviewed SSA officials in its central office and staff in four regional offices that we selected to reflect a range in the number of states and organizational payees they collectively oversee and to achieve diversity in geographic location. Within those regions, we visited eight field offices covering seven states, which were selected to include both metropolitan and non-metropolitan areas that maximized the number of files we would have available for our review (see next paragraph). We also interviewed officials in one area office per region—two representing metropolitan area field offices, and two representing non-metropolitan area offices that we visited. These interviews with regional, area, and field office staff are intended to obtain perspectives from SSA officials in different parts of the country and are not intended to be representative of all SSA field offices and staff. We also analyzed program data, including the number and type of organizational payees and the number of beneficiaries they serve. We assessed the reliability of these data by reviewing relevant documentation and interviewing SSA staff knowledgeable about the systems used to collect and maintain the data and determined the data were sufficiently reliable for our use. To determine how organizations are approved to be representative payees, we reviewed SSA's policies and relevant federal laws and regulations.
At each field office we visited, we (1) interviewed managers about their role in the application process and (2) reviewed up to six organizations' electronic files in the Electronic Representative Payee System (eRPS), the primary data system SSA uses to track representative payees. Specifically, at each field office we reviewed up to two applications that SSA had approved (either initial applications to serve as representative payee or initial applications to collect fees); up to two applications that SSA had denied (initial or to collect fees); and files for up to two organizations that were terminated or closed in the past 5 years. In some cases, field offices we visited did not have the full number of cases available, and we reviewed fewer files in those offices. We selected the most recent approval, denial, and termination files that were available. In all, we reviewed 15 recently approved applications, six recently denied applications, and three recent terminations. We also interviewed cognizant SSA officials at the central office and the four regional and area offices we selected. We conducted background checks on a stratified random sample of 205 current organizational payees. The sample was selected to include fee-for-service organizations with 50 or more beneficiaries, fee-for-service organizations with fewer than 50 beneficiaries, non-fee-for-service organizations with 50 or more beneficiaries, and non-fee-for-service organizations with fewer than 50 beneficiaries. We entered information on selected organizations into a database called CLEAR and reviewed the resulting reports for any indication of criminal history. Many of these reports included the criminal history of individuals who are or may be associated with the organizational payee, and we reviewed these with particular focus on the crimes that bar individuals from serving as individual payees.
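The stratified draw described above can be sketched as follows. The stratum definitions mirror the four groups in the text (fee status crossed with a 50-beneficiary size threshold), but the population sizes and the per-stratum allocation below are invented for illustration; our actual sample totaled 205 payees across the four strata.

```python
# Illustrative stratified random sample; population counts and allocations
# are invented, not the actual figures behind the 205-payee sample.
import random

def stratum(payee: dict) -> tuple:
    """Stratum key: (fee-for-service?, serves 50 or more beneficiaries?)."""
    return (payee["fee_for_service"], payee["beneficiaries"] >= 50)

def stratified_sample(payees: list, per_stratum: dict, seed: int = 1) -> list:
    """Draw a fixed number of payees at random from each stratum."""
    rng = random.Random(seed)  # seeded for reproducibility
    groups = {}
    for p in payees:
        groups.setdefault(stratum(p), []).append(p)
    sample = []
    for key, members in groups.items():
        k = min(per_stratum[key], len(members))
        sample.extend(rng.sample(members, k))  # without replacement
    return sample

# Hypothetical population: four strata of different sizes.
payees = (
    [{"id": f"F{i}", "fee_for_service": True,  "beneficiaries": 80} for i in range(20)] +
    [{"id": f"f{i}", "fee_for_service": True,  "beneficiaries": 10} for i in range(30)] +
    [{"id": f"N{i}", "fee_for_service": False, "beneficiaries": 80} for i in range(25)] +
    [{"id": f"n{i}", "fee_for_service": False, "beneficiaries": 10} for i in range(40)]
)
allocation = {(True, True): 5, (True, False): 5, (False, True): 5, (False, False): 5}
sample = stratified_sample(payees, allocation)
print(len(sample))  # 20
```

Drawing a fixed count from every stratum guarantees that even the smallest group (here, large fee-for-service payees) is represented, which a simple random sample of the whole population would not ensure.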
For those reports that contained an indication of criminal history, we selected reports that indicated there may have been federal crimes or felonies at the state or local level and attempted to obtain court records to provide further insight into the nature of the crimes and the outcome of the cases. However, because we lacked information that would have made it possible for us to definitively link a conviction to staff in an organization—such as Social Security numbers for payee staff that are in leadership or financial management roles—the results of our analysis were not reliable enough to report. SSA collects Social Security numbers for individual payee applicants but not for any principals or staff from organizational payee applicants. Without this information, it is impossible to definitively link criminal convictions to individuals associated with organizational payees. To help understand how SSA communicates with organizational payees, we reviewed program guidance and interviewed representatives of eight organizational payees—one in the local area of each field office we visited, in addition to interviewing officials in each field office. We also interviewed cognizant SSA officials at the central office and the four regional and four area offices we selected. To review SSA’s overall oversight of organizational payees—including onsite reviews and reviews of the annual accounting form and payees’ use of collective accounts—we reviewed relevant federal laws and regulations, program policies, and relevant SSA documents; analyzed data; and interviewed SSA officials at the central office, the four regional and four area offices we selected, and the eight field offices we visited. To further understand SSA’s new onsite review process, we reviewed agency documents that describe the roles and responsibilities of key players in SSA’s new onsite review process. 
We also interviewed SSA officials and representatives of the National Disability Rights Network (NDRN) about the status of its implementation. To determine the extent to which different types of organizational payees receive onsite reviews, we analyzed SSA program data for fiscal year 2018. We assessed the reliability of these data by reviewing relevant documentation and interviewing knowledgeable agency officials and determined they were sufficiently reliable for our purposes. To learn about the outcomes of onsite reviews, such as how frequently unapproved collective accounts were identified, we reviewed SSA’s annual reports to Congress. We determined SSA data on the number of onsite reviews conducted and SSA data reported to Congress on unapproved collective accounts were sufficiently reliable for our purposes. We did not assess the efficacy of the new onsite review process or the quality of onsite reviews because we determined it was too soon to evaluate recent program changes. Instead, we described the roles and responsibilities of key players in the new process and interviewed SSA and NDRN to provide information on the status of implementation. To assess the predictive model SSA uses to select low-volume organizational payees for onsite reviews, we analyzed available documentation and interviewed SSA officials knowledgeable about the predictive model. This information included: (1) a list of variables; (2) the code SSA uses to execute the model; and (3) a brief description of how SSA developed the model, including a high-level description of its methodology and an analysis of the predictive power of the model compared to random chance. We compared the documentation SSA provided us with accepted practices for maintaining documentation of statistical models. For detailed results on the findings of this analysis, see appendix III. 
To obtain a range of perspectives on the organizational payee program, we interviewed staff of the Social Security Advisory Board, representatives of an SSA managers’ association, an organizational representative payee association, and NDRN. In addition, we interviewed representatives of advocacy groups for the aged, persons with physical disabilities, and persons with mental illness regarding their constituents’ experiences with SSA’s organizational payee program. Appendix II: SSA Policy Regarding How to Evaluate Organizational Payee Applicants Per Social Security Administration (SSA) policy, field office staff should consider certain factors when evaluating organizations’ suitability to serve as payees. Some factors apply to all applicants, including both individuals and organizations, while others apply only to organizational payee applicants (see table 2). In addition, there are some requirements for organizational payees applying to collect fees for their payee services. According to SSA policy, organizational payees that are applying to collect fees must meet the following requirements: Be regularly serving as a payee for at least five beneficiaries for at least 1 calendar month; Generally not be a creditor of the beneficiaries it serves; and Be a state or local agency with a qualified mission, or a non-profit social service agency that is community-based, bonded, and licensed. Appendix III: Additional Information on SSA’s Predictive Model and Our Assessment How SSA Selects Payees for Review Based on the Model The Social Security Administration (SSA) uses a predictive statistical model it implemented in 2012 to rank low-volume organizations based on their chance of misusing beneficiary funds and selects for onsite reviews those organizations identified as having the highest risk. 
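Appendix III describes a score-and-rank mechanism: a logistic regression produces a misuse probability for every payee-beneficiary pair, and payees are then ranked so that the highest-risk organizations can be reviewed first. A minimal sketch of that mechanism follows; the feature values, weights, and per-payee aggregation rule are invented for illustration, since SSA's actual variables, coefficients, and ranking approach are not fully documented.

```python
import math

def misuse_probability(features, weights, intercept):
    """Logistic-regression score for one payee-beneficiary pair."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def rank_payees(pairs, weights, intercept):
    """Score every (payee_id, features) pair, aggregate to one risk
    value per payee (here, the maximum pair-level score, an assumed
    aggregation), and rank payees from highest to lowest risk."""
    risk = {}
    for payee_id, features in pairs:
        p = misuse_probability(features, weights, intercept)
        risk[payee_id] = max(p, risk.get(payee_id, 0.0))
    return sorted(risk, key=risk.get, reverse=True)
```

SSA would then assign the top-ranked organizations for onsite review, subject to available resources.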
The predictive model uses a logistic regression to estimate the chance that each payee will misuse benefits, given the characteristics of the beneficiary and payee, such as the length of time served as a payee and whether the beneficiary received a large lump sum payment from the payee. SSA takes the predictive model output, which is calculated for every payee and beneficiary pair, and uses it to rank payees. SSA assigns organizations for review depending on (a) their rank (organizations that have a higher likelihood of misusing benefits are more likely to be selected); and (b) available resources. How We Assessed SSA’s Predictive Model To review the predictive model, we interviewed SSA officials knowledgeable about the model and reviewed available documentation. This documentation included: (1) a list of variables; (2) the computer code SSA uses to execute the model; (3) a brief explanation of how SSA periodically assesses the model and related performance statistics; and (4) two documents (totaling 5 pages) describing how SSA developed the model. We compared this documentation to accepted practices for maintaining documentation of statistical analysis, such as standards published by the Office of Management and Budget (OMB). The documents describe, at a high level, SSA’s methodology for developing the model and include an analysis of the predictive power of the model compared to random chance. They indicate that SSA tested candidate models with different sets of predictor variables and ultimately selected a final model using a stepwise selection process. However, available documentation does not include information necessary to evaluate how SSA assessed other candidate models or understand the rationale for SSA’s decision to accept its final model.
For example, there is limited documentation to: Reproduce SSA’s Target Population: The documentation does not describe in detail how SSA identified all organizational payees that served from 1993 to 2009 (such as how SSA queried the Representative Payee System), nor does it explain in detail how SSA linked beneficiary and organizational level data, such as to count the number of beneficiaries that each payee served. SSA subsequently explained in its technical comments that it used Social Security numbers to link information among several systems. However, SSA did not describe steps it took to establish the linkages, or steps taken to identify organizational payees that served from 1993 to 2009, in enough detail for an independent analyst to reproduce the work. Moreover, SSA did not provide this written documentation upon our original request, which suggests that SSA did not maintain complete records of the work. Reproduce SSA’s Sample Design: The documentation does not describe in detail how SSA designed the probability sample it used to develop the model or how, if at all, it weighted the sample to account for varying probabilities of selection in the sample. Selecting the appropriate sampling method for a model and applying appropriate weights generally increases its predictive accuracy. Reproduce SSA’s Process for Assembling the Data and Selecting the Final Model: The documentation provides limited information about the input variables and models that SSA tested but ultimately did not use. In addition, the documentation does not show how SSA assessed and addressed potential correlations between the variables it selected. For example, we could expect certain variables, such as receipt of a lump sum payment and receipt of a lump sum payment over $1,000, to be highly correlated. 
Although highly correlated variables do not necessarily impair the model’s predictive accuracy, they can influence which individual variables test as being predictive during the model’s development. The documentation also does not describe how SSA chose to split continuous variables into categorical variables—a choice which can influence predictive accuracy. Understand How SSA Assessed Data Reliability: Available documentation does not indicate whether SSA assessed the reliability of data used in its model. The reliability of the outcome variable— misuse—is particularly important. Unreliable data regarding whether misuse occurred, either due to incorrect data entry or other errors, would compromise the model’s ability to accurately predict the likelihood of misuse. In contrast, the reliability of variables that could signal risk of misuse—such as whether the beneficiary received a large disbursement of funds—is less critical. Even variables prone to measurement error may still predict misuse accurately. Nevertheless, assessing their reliability remains important, since reducing measurement error can increase the model’s predictive power. Such assessments could range from limited testing of the data—e.g., for outliers, illogical values, and missing data—to broader, independent verification of data reliability. Regardless of the approach used, documenting all data reliability assessments allows internal and external stakeholders to assess, and possibly improve, the model. Explain whether, or how, SSA’s model addressed potential patterns of misuse for beneficiaries served by the same payee: Statistical models typically assume that estimates can be generated independently for each unit of analysis—in this case, unique pairs of beneficiaries and payees. However, in cases where multiple beneficiaries are served by the same payee, this may not be the case. 
Patterns of misuse might be similar for all beneficiaries served by a given payee, such as if the payee were systematically defrauding all of its beneficiaries. Accurately modeling data with this kind of nested structure—which conflicts with typical statistical assumptions—often requires multi-level modeling methods. However, SSA’s documentation does not specify how or whether it applied these methods, or otherwise assessed or adjusted for the nesting of beneficiaries within payees. Reproduce SSA’s process for ranking organizations: With the current model, which assigns a score for each payee-beneficiary pair, SSA uses the predictive model’s output to then rank payees. However, there are various approaches for ranking payees, ranging in sophistication, and SSA does not have sufficient documentation to determine whether the approach currently being used best predicts risk to beneficiaries. Appendix IV: Comments from the Social Security Administration Appendix V: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments: In addition to the contact named above, Michele Grgich (Assistant Director), Isabella P. Anderson, Dan Meyer and Amy E. MacDonald (Analysts-in-Charge), Daniel Bibeault, Ted Burik, Daniel Concepcion, Jennifer Cook, Gus Fernandez, Alex Galuten, Sheila R. McCoy, Arthur Thomas Merriam, Jr., Mimi Nguyen, Ramon J. Rodriguez, Margie K. Shields, Joy Solmonson, Almeta Spencer, Jeff M. Tessin, Walter K. Vance, Kathleen van Gelder, Srinidhi Vijaykumar, and Khristi A. Wilkins made significant contributions to this report. In addition, Seto J. Bagdoyan, Joy Booth, Gabrielle M. Fagan, Robert H. Graves, Rosalind C. Romain, and Helina P. Wong contributed to the report.
Why GAO Did This Study Nearly a million individuals relied on organizational payees to manage their Social Security benefits in 2018. Due to an aging population, more beneficiaries may need organizational payees in the future. These beneficiaries are among the most vulnerable because, in addition to being deemed incapable of managing their own benefits, they lack family or another responsible party to assume this responsibility. SSA reports that misuse of benefits by payees is rare, but its Office of Inspector General has identified cases of misuse that have harmed vulnerable beneficiaries. GAO was asked to review SSA's organizational payee program. This review examines, among other things, SSA's process for approving payees and its monitoring efforts. GAO reviewed relevant federal laws, regulations, policies, and guidance; analyzed SSA data from fiscal year 2018; analyzed the predictive statistical model SSA uses to select low-volume payees for onsite reviews; and interviewed SSA central office staff and regional, area, and field office staff in four regions selected for geographic diversity. What GAO Found The Social Security Administration (SSA) approves organizational payees—such as nursing homes or non-profits that manage the Social Security benefits of individuals unable to do so on their own—by assessing a range of suitability factors, such as whether the organizations have adequate staff to manage benefits for multiple individuals. However, GAO found that SSA's policy does not specify how to assess more complex suitability factors, such as whether an organization demonstrates sound financial management. Without clearer guidance, unqualified or ill-prepared organizational payees could be approved to manage benefits. Also, SSA does not currently require background checks for key employees of an organizational payee. In contrast, SSA requires background checks for individual payees—such as a relative or friend of the beneficiary.
A comprehensive evaluation could help SSA determine whether and how to expand its use of background checks to organizational payees. To ensure organizational payees are managing funds appropriately, SSA uses several monitoring tools, including resource-intensive onsite reviews. Certain organizational payees, such as those that charge fees for their services or have 50 or more beneficiaries (high-volume), receive onsite reviews every 3 to 4 years. In contrast, payees that serve fewer than 50 beneficiaries (low-volume)—the vast majority—are selected for review based on their estimated likelihood of misusing beneficiary funds, and a relatively low percentage of them receive onsite reviews (see figure). SSA uses a predictive statistical model to identify higher-risk low-volume payees, but the model's effectiveness cannot be fully assessed by GAO or others due to missing documentation on how it was designed. SSA officials said they will update the model in the future, but do not have a time frame for doing so. Establishing such a time frame and documenting design decisions are key steps toward assessing the model's effectiveness. Another way SSA oversees organizational payees is by reviewing their annual accounting forms, but shortcomings exist in SSA's review of the form and in the form's content and design. For example, SSA lacks timeframes for following up on missing or problematic forms. Also, the accounting form does not capture complete information on whether payees co-mingle beneficiaries' funds in collective accounts, which can limit SSA's ability to monitor those risk-prone accounts. Establishing timeframes and revising the form could enhance the effectiveness of the annual accounting form as an oversight tool.
What GAO Recommends GAO is making nine recommendations in this report, including that SSA: clarify how to assess complex suitability factors; assess requiring background checks for organizational payees; establish a timeframe for reviewing the predictive model and document design decisions resulting from that review; and establish timeframes for, and conduct, revisions of the accounting form. SSA agreed with all nine recommendations and provided technical comments that GAO incorporated as appropriate.
Background DOD has long recognized that contracts for items such as weapon systems are capital intensive and take a long time to produce. Contract financing assists the defense contractor in managing expenses, such as material, labor, and overhead. In such cases, DOD can agree to help finance these expenses as the work progresses through various types of contract financing payments, including progress payments and performance-based payments. Progress payments based on cost are determined as a percentage of the costs incurred by the contractor as work progresses. Currently, DOD pays 80 percent of incurred costs of large business concerns and 90 percent of incurred costs of small business concerns. To receive progress payments on the basis of cost, contractors are required to have an accounting system and controls that are adequate for proper administration of progress payments. The cognizant contract administration office is to maintain surveillance of the contractor’s accounting system as appropriate, and the Defense Contract Audit Agency is to audit the accounting system. DOD provides contract financing on fixed-price type contracts for noncommercial items. Performance-based payments enable the contractor to be paid for achieving certain contractual milestones, such as delivery of a major subcontracted component. DOD can pay up to 90 percent of either the contract’s price or the price of a deliverable item using performance-based payments. DOD’s performance-based payments guide states that these payments should not be structured such that they amount to advance payments, which in general terms are payments made before work is complete on a contract regardless of what performance milestones are met. Unlike progress payments, however, performance-based payments do not require that the contractor have an adequate accounting system.
Lastly, contract financing can also be used when the terms and conditions of a contract are not yet “definitized,” a term that generally means finalized. These actions, which are termed undefinitized contract actions (UCAs) at DOD, are to be used only when the negotiation of a definitive contract action is not possible in sufficient time to meet DOD’s requirements and the department’s interest demands that the contractor be given a binding commitment so that contract performance can begin immediately. The government may incur unnecessary costs if requirements change before the contract is definitized. Defense acquisition regulations generally require UCAs to be definitized within 180 days of the UCA date or before more than 50 percent of the estimated contract price is obligated, whichever occurs first. During this period, progress payments are limited to 80 percent of work accomplished. DOD’s 2014 performance-based payments guide recommends that a UCA be awarded using progress payments first; performance-based payments should then be considered during the definitization process. Table 1 summarizes the conditions and rates applicable to progress payments based on costs and performance-based payments. Several offices and agencies within DOD have a role in managing contract financing. The office of Defense Pricing and Contracting (DPC), within DOD’s Office of the Under Secretary of Defense for Acquisition and Sustainment, is responsible for all pricing, contracting, and procurement policy matters. This office formulates and oversees DOD-wide pricing policies and strategies supporting the procurement of major defense programs, including programs that use progress and performance-based payments. The Defense Contract Management Agency (DCMA) and other contract administration offices monitor contractors’ performance and management systems to ensure that cost, product, and performance are in compliance with the contract terms.
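The customary financing rates and UCA limits described in this background section can be expressed as a short sketch. The percentages and the 180-day/50-percent thresholds come from the report; the function names and simplified logic are illustrative assumptions, not a full statement of the FAR/DFARS rules.

```python
def progress_payment(incurred_costs, small_business):
    """Customary progress payment based on cost: 90 percent of incurred
    costs for small business concerns, 80 percent for large businesses
    (rates as stated in this report)."""
    rate = 0.90 if small_business else 0.80
    return rate * incurred_costs

def performance_based_payment_cap(price):
    """Performance-based payments may total up to 90 percent of the
    contract price (or the price of a deliverable item)."""
    return 0.90 * price

def uca_must_definitize(days_since_award, obligated, estimated_price):
    """UCAs generally must be definitized within 180 days of the UCA
    date or before more than 50 percent of the estimated contract price
    is obligated, whichever occurs first."""
    return days_since_award >= 180 or obligated > 0.50 * estimated_price
```

For example, a large business with $1 million in incurred costs could bill up to $800,000 in progress payments, while a small business could bill up to $900,000.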
DCMA generally maintains contract financing payment data for DOD progress and performance-based payments for contracts DCMA administers. Within DCMA, the Cost and Pricing Center performs DOD-wide analysis of contract data to support decision making, among other things. Relationship of Contract Financing to Contract Profit and Contractor Profitability Contract financing has an impact on the price of negotiated contracts and, more generally, on the health and profitability of the defense contractor. On negotiated contracts, DOD requires contracting officers to use weighted guidelines, a structured approach used to develop profit objectives for individual defense contracts. DOD implements its profit policy through the weighted guidelines. As part of their efforts to determine the government’s negotiating position, including how much profit the contractor should receive under the contract, contracting officers are to consider various factors, including the degree to which the government is providing contract financing. Assuming other factors are held constant, the weighted guidelines suggest that the negotiated profit rate of a fixed-price defense contract might be 1 to 2 percentage points lower when the government provides contract financing. The contracting officer may vary the amount to consider other risk elements when establishing the government’s negotiating position. DOD-provided contract financing can also yield higher rates of return on the corporate funds contractors invest in that same contract. One measure of this benefit is “internal rate of return” (IRR), a tool that can be used to assess the impact of contract financing on overall contractor profitability. DOD’s 2001 Incentives Guidebook notes that IRR is one of the basic tools used by industry to determine where to invest its funds and assess the risks and potential rewards involved in contracting with the government or commercial entities.
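The IRR effect discussed here can be illustrated with a simplified cash-flow sketch. The contract figures below are hypothetical and the timing is simplified (costs and progress payments fall at month end); this is not GAO's 40-month model from figure 1.

```python
def contract_cash_flows(months, monthly_cost, price, pp_rate):
    """Contractor's net monthly cash flow on a fixed-price contract:
    pay costs each month, receive progress payments at pp_rate on
    incurred cost, and receive the remaining contract price (which
    liquidates the progress payments) on delivery in the final month."""
    flows = []
    for m in range(1, months + 1):
        flow = -monthly_cost + pp_rate * monthly_cost
        if m == months:
            flow += price - pp_rate * monthly_cost * months
        flows.append(flow)
    return flows

def npv(rate, flows):
    """Net present value of end-of-month cash flows at a monthly rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows, start=1))

def irr(flows, lo=-0.99, hi=10.0, iters=200):
    """Monthly internal rate of return, found by bisection on NPV;
    assumes exactly one sign change in NPV between lo and hi."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if npv(lo, flows) * npv(mid, flows) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```

With a 12-month contract costing $100 per month and priced at $1,320, total profit is the same with or without financing, but the contractor's monthly IRR is substantially higher when 80 percent progress payments are provided, mirroring the pattern in figure 1.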
IRR is a measure that integrates both the contractor’s investment to produce the product and the profit earned on that product. In contrast to contracts in which the contractors must either self-finance or borrow from commercial lenders, when contractors receive financing on a contract from the government the contractor’s IRR can be significantly higher. Figure 1 provides a hypothetical example of how changes in the progress payment rate on a 40-month, fixed-price contract affect the expected contract profit rate and the contractor’s IRR. As illustrated above, providing contract financing (in this case, progress payments) has a significant impact on the contractor’s IRR and a lesser impact on the actual profit that DOD expects the contractor to make. For example, if DOD provided no contract financing, the weighted guidelines would suggest a profit rate on this hypothetical contract of 13.8 percent, which would provide an internal rate of return to the contractor of 7.5 percent. If DOD provided progress payments at the customary rate of 80 percent, the weighted guidelines would suggest a profit rate on this hypothetical contract of 10.4 percent, or 3.4 percentage points lower than if no financing was provided. However, even though the contractor’s expected profit is lower, the IRR for the contract would increase to 30.9 percent, or a little more than four times what would be realized if the contractor had to finance the effort on its own. Prior Studies of Contract Financing and Contract Profitability Several studies conducted by DOD, nonprofit organizations, and GAO have assessed the impact of contract financing on contract profit or contractor profitability. These studies have generally found that, depending on the measure used, the defense industry generates high returns on investment. For example, in 1976, DOD’s Profit ‘76 study examined earnings’ relationship to capital investment and increased productivity.
The Profit ‘76 study group concluded that government contractors were able to maintain higher profits by keeping investment low partly because DOD did not have profit policies in place to encourage investment in items such as facilities. As a result of the Profit ‘76 study, DOD made a number of changes to its profit policy to encourage corporate investment in facilities, among other things. In 1991, GAO suggested that using return on assets to measure profitability of defense contractors is beneficial because it recognizes how government financing can affect contractors’ levels of profitability. In 2008, the Institute for Defense Analyses reported that defense contractors generated high returns with low operating margins, in part because government-provided contract financing helped fund the contractors’ long, asset-intensive product cycles. According to DPC officials, however, the most comprehensive study of contract financing and profit policies was conducted by the DFAIR commission in 1985. We discuss this study in more detail below. Changes to the Legislative and Regulatory Framework Governing Contract Financing since 1985 Since the DFAIR commission issued its report in 1985, Congress and DOD have made a number of changes to the statutory and regulatory framework intended to (1) reduce the administrative burden associated with contract financing and (2) encourage the use of performance-based payments (see figure 2). Our review found that DOD paid less in performance-based payments after making some changes to contract financing policies, but started increasing these payments again in 2016. DFAIR Commission’s 1985 Study Considered to Be the Last Comprehensive Study of DOD’s Contract Financing and Profit Policies According to DPC officials, the most comprehensive study of contract financing and profit policies was conducted by the DFAIR commission in 1985. 
The DFAIR commission assessed, among other issues, whether DOD contract financing policies were equitable in maintaining the defense industrial base and cost-effective for DOD, the effectiveness of DOD contract financing policies as a means of encouraging contractor cost efficiencies, the profitability of defense work and its reasonableness in comparison with the profitability of the non-defense sector, and the interrelationship of DOD’s contract finance and profit policies. In evaluating contractor financing costs, DFAIR developed a model of a typical contract to use in calculating contractors’ contract financing costs, the amount of interest a contractor would have to pay if it were required to bear all those costs, and the effect of payment delays on contractor financing costs. The DFAIR commission reached a number of conclusions about DOD’s contract financing and profit policy in effect at that time. The study concluded that: The progress payment rate was appropriate for the time period studied but should be revised based on changes in short-term interest rates. DOD’s profit policy as reflected in the weighted guidelines at the time of the study did not explicitly take into account the cost of working capital (the difference between a contractor’s current assets and current liabilities). The profitability of individual defense contracts the commission reviewed had been consistently lower than the profit levels reported to have been negotiated by government contracting officers. DOD’s profit policy needed to be simplified and better integrated with contract financing policy. The study also concluded that there was a need to make DOD contract financing more responsive to economic conditions and that profit policy, contract financing, and contractor investment are related. We agreed with the conclusion that profit policy, contract financing, and contractor investment are related.
We also highlighted the need for recurring DOD contract profitability studies using a generally accepted methodology in our 1986 report. DOD Efforts to Reduce the Administrative Costs Our work found that since the DFAIR study was issued, DOD made several changes to reduce the administrative burden associated with contract financing requirements. These changes included Elimination of flexible progress payments (1999) – DOD introduced flexible progress payments in 1981 as a new approach to contract financing. Under flexible progress payments, DOD contracting officers were to use the DOD Cash Flow Computer Model to develop an applicable progress payment rate for that contract. Under this approach, DOD specified the minimum percentage the contractor was required to invest and DOD would provide the remainder. The amount of contractor investment required by DOD varied from 5 to 25 percent, depending upon the year. Flexible progress payments were not allowed on contracts issued after November 11, 1993; the references were eliminated completely from the DFARS in 1999. Elimination of “paid cost rule” (2000) – The paid cost rule required large businesses to pay subcontractors before billing the government for payment. After DOD eliminated this rule in March 2000, large businesses were generally able to include subcontract costs incurred but not yet actually paid on progress payment requests to the government. Elimination of “financial need requirement” (2016) – Since 2000, one of the ways contractors could receive progress or performance-based payments under the FAR was on the basis of financial need or the unavailability of private financing. In that regard, an April 2013 DOD Inspector General report found that contracting personnel did not properly negotiate and verify contractors’ need for contract financing before authorizing performance-based payments.
The Inspector General recommended that contracting personnel determine whether private financing is available to a contractor before authorizing performance-based payments. While DOD concurred with the recommendation, it subsequently amended the DFARS in 2016 to eliminate the requirement for DOD personnel to justify the use of contract financing for certain fixed-price contracts. In doing so, DOD stated that the change was in the department’s best interests. Efforts to Encourage the Use of Performance-Based Payments Congress enacted the Federal Acquisition Streamlining Act (FASA) in 1994 to provide the executive branch with requirements to improve the process for acquiring goods and services. FASA, among other things, established performance-based payments “wherever practicable” as a form of contract financing. In 1995, the FAR Council amended the FAR to enable the use of performance-based payments up to a maximum amount of 90 percent of the contract’s price. In 2000, DOD issued a rule amending the DFARS to emphasize that performance-based payments were the preferred method of financing. The rule required contracting officers to consider and deem performance-based payments impracticable before deciding to provide progress payments. This rule was part of a larger effort by DOD to make contract financing procedures easier to understand and to simplify related provisions. DOD subsequently issued a user’s guide in 2001 to help its contracting personnel and contractors in using performance-based payments. Despite the provisions to encourage the use of performance-based payments when appropriate, DOD subsequently initiated department-specific actions that, according to industry officials, decreased the frequency with which they received performance-based payments on defense contracts.
For example, the Under Secretary of Defense for Acquisition, Technology and Logistics’ September 2010 Better Buying Power memorandum instructed contracting officers to use progress payments as the basis for price negotiations. After the contractor and DOD contracting officer agreed on a price using progress payments, contractors could propose using an alternate financing arrangement, including performance-based payments. The memorandum indicated that the rationale for this change was to provide increased incentives for contractor performance. In April 2011, the Director of Defense Procurement and Acquisition Policy (now known as DPC) issued a memorandum that focused on the “practicality” of performance-based payments, stating they “are not practical for use on all fixed-price contracts and require considerable effort between the contractor and Government.” The memorandum noted that if contractors wanted to use performance-based payments, they should submit a proposed schedule that includes all performance-based payment events, completion criteria, and event values, along with the contractor’s expected expenditure profile. To implement its April 2011 performance-based payment policy, DOD issued a proposed rule to amend the DFARS in January 2012. This rule was finalized in March 2014. The 2014 version of the DOD performance-based payments user’s guide noted that performance-based payments are the preferred method only when they are deemed practical by the contracting officer. However, industry officials told us that they frequently cannot reach agreement with DOD regarding performance milestones, and therefore agree to the use of progress payments instead. The impact of DOD’s changes on the relative use of progress versus performance-based payments is uncertain. DCMA data indicate that between fiscal years 2010 and 2018, DOD provided between $36 billion and $49 billion a year in contract financing on contracts DCMA administered.
We found that nearly 98 percent of those contract financing payments were paid to medium and large defense contractors. We also found that the amount DOD paid out in performance-based payments on those contracts fell between 2010 and 2016 before increasing in 2017. In December 2016, Congress enacted Section 831 of the Fiscal Year 2017 NDAA to establish performance-based payments as the preferred type of contract financing for DOD in statute. Section 831 also directed the Secretary of Defense to ensure that nontraditional defense contractors and other private sector companies are eligible for performance-based payments, in line with best commercial practices. Figure 3 shows the differences in DOD’s progress and performance-based payments between fiscal years 2010 and 2018 for contracts administered by DCMA. In August 2018, DOD introduced a proposed rule that was intended to use contract financing rates to help incentivize contractor performance and to implement Section 831. The proposed rule would have set a base progress payment rate of 50 percent for large businesses (specifically, other than small businesses) and 90 percent for small businesses. At the same time, however, the proposed rule provided opportunities to increase the rate if the contractor achieved certain enterprise-wide priorities, such as meeting contract delivery dates. The proposed rule also eliminated some of the administrative requirements associated with performance-based payments to encourage their use. According to DPC officials, the rates would be subject to an annual adjustment based on the performance criteria provided in the rule. Table 2 summarizes key aspects of the proposed rule. DOD officials acknowledged that, if implemented, the rule would initially provide contractors a lower level of contract financing, but they believe that, with improvements in overall performance, contractors would eventually receive much higher levels of financing than currently provided.
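The incentive structure of the withdrawn August 2018 rule can be roughed out as follows: a base progress payment rate that rises toward a ceiling as a contractor meets enterprise-wide performance criteria. The 10-point increment and 95 percent cap below are assumptions made purely for illustration; the proposed rule's actual adjustment schedule was more detailed, and its lack of implementation specifics was one of the concerns industry officials raised.

```python
def adjusted_progress_rate(base_rate, criteria_met, increment=10, cap=95):
    """Progress payment rate, in whole percentage points, after crediting
    each enterprise-wide performance criterion met (e.g., meeting
    contract delivery dates). Increment and cap are illustrative
    assumptions, not values from the proposed rule."""
    return min(base_rate + increment * criteria_met, cap)

# Large business (proposed 50 percent base) vs. small business (90 percent):
print(adjusted_progress_rate(50, 0))  # 50 -- no criteria met
print(adjusted_progress_rate(50, 3))  # 80 -- three criteria met
print(adjusted_progress_rate(90, 1))  # 95 -- raised, then capped
```

This structure is what DOD officials meant when they acknowledged that contractors would initially receive less financing but could eventually receive more than under the flat rates in place today.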
Industry officials voiced a number of concerns about the proposed rule at the January and February 2019 public meetings held after the rule was proposed, as well as in our interviews with them. For example, these officials noted that the proposed rule would change the intent of contract financing from a means of assisting contractors to help meet short-term expenses to a mechanism for ensuring compliance with contract terms and conditions on an enterprise-wide basis. Industry officials said they believe compliance with contract terms and conditions should be addressed on a contract-by-contract basis. Further, industry officials stated that the changes suggested in the proposed rule could negatively affect the health, competitiveness, and resiliency of the defense industrial base and introduce significant uncertainty as to how much contract financing DOD would provide. Additionally, industry officials noted that the rule did not contain specific implementation details in such areas as whether the incentives would be applied on an enterprise-wide basis and how to ensure the data were reliable. DOD withdrew this rule in October 2018, citing the need to conduct additional outreach with industry regarding contract financing methods. Subsequently, DPC held three public meetings in January and February 2019 to obtain public comments on revising policies and procedures for contract financing, performance incentives, and associated regulations prior to proposing a new rule. DPC provided no timeframes for doing so. In April 2019, DOD issued for public comment a proposed rule to implement Section 831’s statutory preference for performance-based payments. The proposed rule notes that performance-based payments are the preferred method of contract financing at DOD whenever practicable. The period for public comments ends on July 1, 2019. DOD officials indicated that they hope to issue a final rule in early 2020.
DOD Has Not Comprehensively Assessed the Impact of Its Contract Financing and Profit Policy on the Defense Industry Since 1985

Defense Industry and Market Conditions Have Changed since 1985

DOD has not conducted a comprehensive assessment of the impact of its contract financing and profit policies on the defense industry since the DFAIR study was completed in 1985. In the intervening time, there have been significant changes in the composition of the defense industry, business practices, and economic conditions. In December 2018, DPC officials acknowledged the need to assess contract financing policies against market and economic conditions on an ongoing basis and determine the effect these policies have on the defense industry, but did not provide a timeframe for doing so. DOD officials acknowledged that the department has not done a comprehensive assessment of how its contract financing policies affect the defense industry since the DFAIR study was issued in 1985. DOD had previously stated its intent to do such an assessment on a regular basis. Specifically, in 1991 DOD noted that it would issue progress payment rates each February. DOD also noted that it would use the methodology from the DFAIR study to determine the progress payment rate based on short-term commercial interest rates. However, DOD removed the DFARS provision related to flexible progress payments in 1999. Overall, we found that DOD has adjusted the progress payment rate five times since the DFAIR study was completed, but only adjusted the progress payment rate twice since 1991, when DOD indicated its intent to assess the rate annually. DOD last changed the progress payment rate in 2001 (see table 3). Since the DFAIR study was conducted and DOD last assessed progress payment rates, DOD and industry officials noted that the composition of the defense industry has changed, as we have noted in our prior work.
For example, in 1997, we reported that the end of the Cold War and the subsequent declines in DOD budgets resulted in, among other changes, a reduction in the number of defense contractors through various mergers and acquisitions. In our current work, DPC officials pointed to a changing proportion of subcontractors relative to prime contractors. Industry officials also identified the entry into the defense sector of contractors who do not typically work with DOD and of technology companies as an issue that should be considered when looking at contract financing and profit policies. According to industry officials, the industrial base has moved away from heavy industrial manufacturing toward technology and more sophisticated industry partners, including contractors who do not typically work with DOD. These officials noted that such contractors may not be eligible for contract financing because they may not have the approved cost accounting system needed to receive progress payments. In that regard, in July 2017, we reported that one company conducted a study that determined it would take at least 15 to 18 months and millions of dollars to establish a government-unique cost accounting system. Industry officials also noted that the emergence of high-technology companies may pose a challenge to traditional defense contractors in terms of attracting financing and investment from commercial and private investors at competitive rates. Industry officials also identified changing business practices, including the increased use of UCAs, which affect their ability to use performance-based payments. Industry officials stated that it is more difficult to negotiate performance-based payments on UCAs, noting that DOD’s guidance suggests that performance-based payments should not be provided for UCAs until definitization occurs.
Our review of DOD’s semiannual reports to Congress on the use of UCAs found that the number of UCAs and unpriced change orders reported by DOD has varied between March 2014 and September 2018 (see figure 4). DOD reported that the total not-to-exceed dollar value of all UCAs and unpriced change orders was approximately $76 billion as of September 2018. Finally, market and economic conditions have changed since the DFAIR study. For example, at the time of the DFAIR study, short-term interest rates were around 8 percent, whereas the short-term interest rate in 2018 was 2 percent. Figure 5 shows the changes in short-term interest rates and inflation since 1980. Industry officials noted, however, that a comprehensive economic assessment of defense industry returns and the cost of contract financing policies should be conducted. For example, they noted that a reduction to progress payment rates in times of higher interest rates would increase their cost of working on complex contracts. Industry officials acknowledged that while interest rates have been low, they anticipate rates increasing in the near future.

DOD’s August 2018 Proposed Rule Did Not Consider Impact on the Defense Industry

DPC officials acknowledged that DOD’s August 2018 proposed rule did not assess the proposed rule’s impact on the health and profitability of the defense industry. DPC officials noted that since the proposed rule was focused on incentivizing contractor performance, DOD’s supporting analysis did not include an assessment of how the proposed rule would impact the overall profitability of defense contractors (such as assessing the impacts to a contractor’s internal rate of return) or of the profitability of defense work relative to non-defense industry opportunities. Rather, DOD’s analysis estimated the total financial impact the rule would have on large and small contractors primarily based on interest costs.
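The role that short-term interest rates play in these analyses can be illustrated with a simple carrying-cost calculation: whatever share of incurred costs the government does not finance must be funded by the contractor at prevailing rates. The $2 million working-capital figure below is hypothetical; the 8 and 2 percent rates are the approximate short-term rates at the time of the DFAIR study and in 2018, respectively, as noted above.

```python
def annual_carrying_cost(unfinanced_capital, rate_pct):
    """Approximate yearly interest cost of carrying the contractor-financed
    share of contract costs (rate in whole percent)."""
    return unfinanced_capital * rate_pct // 100

# The same hypothetical $2 million of unfinanced working capital:
print(annual_carrying_cost(2_000_000, 8))  # 160000 -- at DFAIR-era rates
print(annual_carrying_cost(2_000_000, 2))  # 40000  -- at 2018 rates
```

This is why industry officials argued that a reduction in progress payment rates would be far more costly to them if it took effect in a period of rising rates, and why a recurring assessment tied to market conditions matters.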
Further, DOD stated in its supplementary material that it did not consider the extent to which the contract profit policy (in the form of weighted guidelines) would need to be adjusted given the proposed rule changes. DPC officials explained that changes to the weighted guidelines would need to consider how such changes would support the intent of providing higher rates of contract financing for higher levels of contractor performance. If DOD were to only propose a change to the progress payment rate, DPC officials acknowledged that such an assessment should consider what changes, if any, would need to be made to the weighted guidelines. DPC officials said they conducted an informal analysis that assessed contractor profitability, but this analysis was not made publicly available. In December 2018, DPC officials acknowledged the need to assess contract financing policies against market conditions on an ongoing basis and determine the effect these policies have on the defense industry. GAO’s Standards for Internal Control in the Federal Government call for monitoring the effectiveness of systems and policies throughout an organization on a recurring basis. Until DOD conducts a comprehensive assessment and updates that assessment on a recurring basis, it will not be in a position to understand whether current or future contract financing policies are achieving their intended objectives.

Conclusions

DOD and industry officials have acknowledged that the defense industry, economic and market conditions, legislative and regulatory requirements, and business practices have all changed since the issuance of the DFAIR study in 1985. Despite this recognition, DOD has not conducted a comprehensive assessment of how its contract financing policies affect the defense industry in more than 30 years.
Without assessing the collective impact of these changes, DOD may be assuming too much financial risk or providing contractors with levels of working capital that are not commensurate with what is needed to help finance long-term projects, and it may be limiting its ability to attract new entrants into the defense market. That assessment, however, should not be a one-time effort. A prior DOD study, our work, and the department itself have acknowledged the need to conduct such assessments on a regular and recurring basis. Without a comprehensive and systemic assessment, conducted on a recurring basis, of the effect DOD’s contract financing policies have on the defense industry, DOD will not be in a position to understand whether current or future policies are achieving their intended objectives.

Recommendation for Executive Action

We recommend that the Acting Secretary of Defense direct the Under Secretary for Acquisition and Sustainment to ensure it conducts a comprehensive assessment of the effect that its contract financing and profit policies have on the defense industry and update that assessment on a recurring basis. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. DOD provided written comments, which are reprinted in appendix I, and concurred with our recommendation. In concurring with our recommendation, DOD stated it would seek fiscal year 2020 funds to contract a study on DOD contract financing policies and their effect on the defense industry. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of the report to the Acting Secretary of Defense; the Principal Acting Director, Defense Pricing and Contracting; the Director, Defense Contract Management Agency; the Director, Office of Management and Budget; the Administrator for Federal Procurement Policy; and appropriate congressional committees.
In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3665 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Comments from the Department of Defense

Appendix II: Legal Chronology of Select Contract Financing Changes

Appendix III: GAO Contacts and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Bruce H. Thomas, Assistant Director; Lorraine Ettaro, Elizabeth Field, Gina Flacco, Stephanie Gustafson, Kristen Kociolek, John Lopez, Beth Reed Fritts, Miranda Riemer, Anne Stevens, Megan Stewart, Anne Louise Taylor, Alyssa Weir, Robin Wilson, and Alex Winograd made key contributions to this report.
Why GAO Did This Study

Each year, DOD provides contractors with billions of dollars in contract financing on fixed-price contracts for major weapons systems and other long-term efforts. Contract financing helps contractors manage expenses until they begin delivering the contracted items to DOD. Contract financing can take several forms, including progress payments based on the cost incurred by the contractor, and performance-based payments, in which the government pays the contractor an agreed-to amount for achieving certain milestones. DOD last performed a comprehensive assessment of its contract financing policies in 1985. The Conference Report accompanying the Fiscal Year 2019 National Defense Authorization Act included a provision for GAO to analyze the level of financing currently provided to contractors, among other things. This report (1) describes changes in DOD contract financing policy since 1985 and (2) assesses the extent to which DOD has analyzed the effect of its contract financing policies on the defense industry. GAO assessed relevant legislation and DOD regulations; obtained data on DOD's use of progress and performance-based payments from fiscal years 2010 through 2018; and interviewed cognizant DOD and industry officials.

What GAO Found

Congress and the Department of Defense (DOD) have changed the contract financing legislative and regulatory framework since DOD last performed a comprehensive assessment, including eliminating a requirement that contracting officers justify a need for contract financing and establishing a preference for performance-based payments. However, Defense Contract Management Agency data indicate that the amount of performance-based payments it administered fell from 2010 to 2016 (see figure). DOD officials acknowledged that DOD has not comprehensively analyzed how its policies affect the defense industry since 1985.
Industry and economic conditions, however, have since changed, including lower interest rates and the emergence of contractors who do not typically work with DOD. In August 2018, DOD proposed introducing performance-based elements into its process for setting progress payment rates. DOD officials stated that since the proposed rule focused on incentivizing contractors' performance, they did not assess how it would affect defense contractor profitability or whether other financing or profit policy changes would be needed. DOD withdrew the proposed rule in October 2018. GAO's Standards for Internal Control in the Federal Government call for organizations to monitor the effectiveness of their policies on a recurring basis. In December 2018, DOD officials acknowledged the need to do so. Until DOD conducts a comprehensive assessment and ensures such assessments are done on a recurring basis, it will not be in a position to understand whether current or future contract financing policies are achieving their intended objectives.

What GAO Recommends

GAO recommends that DOD ensure it conducts a comprehensive assessment of the effect that its contract financing and profit policies have on the defense industry and update that assessment on a recurring basis. DOD concurred with the recommendation.
Background

Post-9/11 GI Bill Benefits

VA has been providing veterans educational assistance benefits since 1944. We previously reported that these benefits have been put in place over time to compensate for compulsory service, encourage voluntary service, avoid unemployment, provide equitable benefits to all who served, and promote military retention. The Post-9/11 GI Bill, which took effect on August 1, 2009, is now VA’s largest educational program. This program generally provides benefits to veterans who served on active duty for at least 90 days beginning on or after September 11, 2001. Full benefits are generally available to those who served on active duty for 36 months, for which VA will pay the net cost of in-state tuition and fees at public schools and up to an annual maximum amount at nonprofit and for-profit schools ($24,477 in academic year 2019-2020). VA pays schools directly for tuition and fees and sends additional payments for housing and books directly to veterans who are eligible for these payments. To receive education benefits through the Post-9/11 GI Bill, students submit applications to VA, schools certify enrollments, and VA processes claims and payments.

Other Sources of Student Aid

For help covering the costs of their postsecondary education, veterans may also be eligible for grants and loans available from federal student aid programs administered by Education, such as Pell Grants and Direct Loans. According to Education data, an estimated 32 percent of student veterans had received Pell Grants and 28 percent had taken out Direct Loans during school year 2015-16. VA education payments, such as Post-9/11 GI Bill benefits, are not considered when calculating eligibility for federal student aid and do not affect the amount of aid a veteran can receive from Education. Student veterans may also be eligible for state and institutional aid (scholarships from state governments or schools, for example).
Student Veterans Attend a Wide Range of Schools, but a Small Number of Schools Receive a Large Share of Post-9/11 GI Bill Payments

Nearly 700,000 student veterans received Post-9/11 GI Bill tuition and fee benefits to attend almost 6,000 schools in fiscal year 2017. VA paid about 40 percent of the Post-9/11 GI Bill tuition and fee payments to public schools, 30 percent to nonprofits, and 30 percent to for-profits (see fig. 1). Most student veterans used Post-9/11 GI Bill tuition and fee payments to attend schools that provided 4-year undergraduate programs (see fig. 2). Veterans may also use Post-9/11 GI Bill benefits for training opportunities at schools that do not offer college degrees, including training in areas such as driving, emergency medical training, and barber or beautician skills. These programs received about $360 million in Post-9/11 GI Bill tuition and fee payments in fiscal year 2017. A relatively small number of schools received a large share of Post-9/11 GI Bill tuition and fee payments. In fiscal year 2017, the 50 schools that received the highest total amount of Post-9/11 GI Bill tuition and fee payments accounted for over 30 percent of all such benefits, collectively receiving $1.4 billion for over 190,000 beneficiaries. These 50 schools consisted of 14 public, 16 nonprofit, and 20 for-profit schools (see fig. 3). In fiscal year 2017, the 50 schools each received between $11 million and $191 million in tuition and fee payments and enrolled between around 350 and 28,000 Post-9/11 GI Bill beneficiaries. In contrast, among all schools receiving Post-9/11 GI Bill benefits in fiscal year 2017, the majority enrolled fewer than 15 veterans.

Student Outcomes Varied Among Schools That Received a Large Share of Post-9/11 GI Bill Payments

Student outcomes at the 50 schools that received the most Post-9/11 GI Bill tuition and fee payments were, on average, generally comparable to the national average, but varied more widely across sectors.
Since available data on student veteran outcomes are currently limited, we analyzed common outcome measures for the broader student populations at each school:

4-year program graduation rates: the percent of first-time, full-time students who completed a 4-year program within 6 years.

Full- and part-time retention rates: the percent of first-time students who enrolled in one fall and either successfully completed their program or re-enrolled in the next fall.

When examined as a whole, the average student outcomes for the 50 schools that received the most Post-9/11 GI Bill tuition and fee payments were generally comparable to the national average. For example, the average 4-year program graduation rate at the top 50 schools was 61 percent, the same as the national average. For one of the outcome measures, the full-time retention rate, the average was higher for the top 50 schools (83 percent) than the national average (75 percent). Within the 50 schools that received the most Post-9/11 GI Bill tuition and fee payments, student outcomes varied across schools in different sectors (see fig. 4). For-profit schools had lower 4-year program graduation and retention rates compared to public and nonprofit schools among these 50 schools, although there was wide variation among schools in each sector.

School Closures Affect Thousands of Student Veterans

Although a relatively small number of schools close each year, these closures can affect thousands of student veterans. In 2017 we reported that about 95 schools closed in school year 2015-16, according to Education data, which was higher than in previous years, primarily due to a rise in for-profit school closures (see fig. 5). Schools can close in different ways and for a variety of reasons, including declining enrollments, financial problems, loss of accreditation, and legal actions.
When a school ceases operations in an orderly process over several months, it gives students time to complete the current school term and make arrangements to transfer and continue their education at another school. The effect of school closures is often worse when the closures occur abruptly with little or no advance warning, because these schools generally do not have time to establish transfer arrangements that allow students to easily continue their education at another school. Abrupt closures of large schools, although infrequent, can affect thousands of student veterans and result in large financial losses for the federal government and taxpayers. For example, Corinthian Colleges Inc. (Corinthian) enrolled more than 72,000 students before its closure in April 2015. The following year, ITT Educational Services Inc. (ITT), another large for-profit provider of higher education, closed all of its 136 campuses in September 2016, affecting more than 35,000 students. More than 7,000 Post-9/11 GI Bill students were pursuing educational programs at schools operated by ITT and Corinthian at the time of their closures, according to VA. More recently, closures at Education Corporation of America in 2018 and Dream Center Education Holdings in 2019, which operated schools under multiple brands, including Argosy University and several campuses of The Art Institutes, affected tens of thousands of students, including thousands of Post-9/11 GI Bill recipients. Student veterans attending a school that closes may be eligible to have some or all of their Post-9/11 GI Bill benefits restored. As a result of the Harry W. Colmery Veterans Educational Assistance Act of 2017, VA restores GI Bill entitlements to eligible beneficiaries affected by recent and future school closures.
Student veterans may also be entitled to a discharge on eligible federal student loans they may have received from Education or to have their Pell Grant eligibility restored if they are unable to complete a program because their school closed. Despite these options for having benefits restored and loans discharged, school closures can still create hardships for veterans. As we have previously reported, college students in general can face challenges transferring credits and continuing their education at a new school under any circumstances. Students who transferred lost, on average, an estimated 43 percent of their credits, and credit loss varied depending on the transfer path, based on data from 2004 to 2009. For example, students who transferred between public schools—the majority of transfer students—lost an estimated 37 percent of their credits. In comparison, students who transferred from for-profit schools to public schools—which happens less frequently—lost an estimated 94 percent of their credits. Even if a student’s credits transfer, they may not apply toward fulfilling degree requirements for their intended major. In these cases, a student will likely have to take additional courses at their new school, which could potentially delay graduation and result in additional costs to pay for repeated courses. Further, some student veterans with credits that do not transfer may exhaust their Post-9/11 GI Bill benefits before completing their degree. School closures can also exacerbate other challenges veterans may face pursuing their education. As we have previously reported, many student veterans already cope with challenges transitioning from the military to an academic environment. For example, they can face challenges navigating the academic bureaucracy, whether in attempting to receive transfer credit for previous college courses or in determining what other sources of financial aid may be available to them. 
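The estimated credit-loss rates above translate directly into lost progress toward a degree. The 60-credit starting point below is hypothetical; the loss rates are GAO's estimates from 2004 to 2009 data for the two transfer paths just described.

```python
def credits_retained(credits_earned, loss_rate_pct):
    """Credits a transfer student keeps on average, given an estimated
    loss rate in whole percent."""
    return credits_earned * (100 - loss_rate_pct) / 100

# A veteran transferring 60 earned credits:
print(credits_retained(60, 37))  # public-to-public: about 38 credits kept
print(credits_retained(60, 94))  # for-profit-to-public: about 4 credits kept
```

Even under the more favorable public-to-public path, repeating roughly a semester or more of coursework draws down a fixed period of benefit eligibility, which is why some veterans exhaust their Post-9/11 GI Bill benefits before finishing a degree.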
Many student veterans are also trying to balance school with family and work obligations or dealing with the effects of combat-related physical and psychological injuries. When a school closes, the burden of finding and enrolling in a new school may be especially difficult for these veterans. Closures can also pose a financial risk for the government and taxpayers to the extent that Post-9/11 GI Bill benefits are restored and federal student loans are discharged. For example, in 2017 the Congressional Budget Office estimated that restoring Post-9/11 GI Bill benefits and other VA education benefits to student veterans who attend schools that closed will increase direct spending by $320 million over the 10-year period from 2018 to 2027. School closures can also result in hundreds of millions of dollars in financial losses for the federal government and taxpayers due to discharged federal student loans. In conclusion, the Post-9/11 GI Bill has provided valuable education benefits to millions of veterans who attend a wide range of schools. However, when schools abruptly shut their doors, it can leave student veterans—who already face unique challenges in an academic environment—without a clear path to continuing their education and can force taxpayers to cover the cost of restoring their benefits and discharged student loans. Student veterans who continue their education at another school may also find that many of the credits they earned will not ultimately help them after they transfer, delaying their degrees and resulting in additional costs. As the number of school closures has increased in recent years, the risks and challenges associated with such closures are particularly salient for student veterans, their families, and the federal government. Chairman Levin, Ranking Member Bilirakis, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contact and Staff Acknowledgements

If you or your staff have any questions about this testimony, please contact Melissa Emrey-Arras, Director, Education, Workforce, and Income Security Issues at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Will Colvin (Assistant Director), Brian Schwartz (Analyst-in-Charge), and Jeffrey G. Miller. In addition, key support was provided by James Bennett, Deborah Bland, Benjamin DeYoung, Alex Galuten, Theresa Lo, John Mingus, Corinna Nicolaou, and Michelle St. Pierre. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Post-9/11 GI Bill is VA's largest educational program. It provides payments for eligible veterans to cover tuition and fees, housing, and other costs while they pursue a higher education. However, for some veterans this pursuit is interrupted when the school they attend unexpectedly closes. This testimony addresses (1) the distribution of Post-9/11 GI Bill tuition and fee payments among schools, (2) outcomes of students at schools that receive the most Post-9/11 GI Bill payments, and (3) how school closures can affect student veterans. To address these topics, GAO reviewed VA data on Post-9/11 GI Bill tuition and fee payments to schools for fiscal year 2017, the most recent school-level data available. GAO analyzed student outcome measures for these schools using Department of Education data reported for school year 2017-2018. GAO also reviewed its prior reports issued between 2013 and 2017 on school closures, credit transfers, and related challenges faced by student veterans. What GAO Found In fiscal year 2017, nearly 700,000 student veterans used their Post-9/11 GI Bill benefits from the Department of Veterans Affairs (VA) to attend programs at almost 6,000 schools. Of the almost $4.5 billion in Post-9/11 GI Bill tuition and fee payments VA made to schools in fiscal year 2017, about 40 percent went to public schools, 30 percent to nonprofits, and 30 percent to for-profits. A small number of schools received a large share of the tuition and fees paid, with 30 percent of payments, totaling $1.4 billion, going to 50 schools that enrolled over 190,000 veterans in fiscal year 2017. The average student outcomes at the 50 schools that received the highest total amount of Post-9/11 GI Bill tuition and fee payments in fiscal year 2017 were generally comparable to the national averages, but varied widely when examined by school sector. 
For example, the average 4-year program graduation rate for the top 50 schools was the same as the national average (61 percent). Within the top 50 schools, average graduation rates varied among public (73 percent), nonprofit (66 percent), and for-profit (22 percent) schools. Although a relatively small number of schools close each year, these closures can affect thousands of student veterans. School closures, which have increased in recent years, are particularly harmful when they involve large schools that close abruptly with little or no advance warning. For example, more than 7,000 veterans receiving Post-9/11 GI Bill benefits were attending schools operated by Corinthian Colleges and ITT Educational Services when they abruptly closed in 2015 and 2016, respectively. Although veterans affected by school closures may qualify to have their GI Bill benefits restored, these closures can create hardships for veterans and significant costs for taxpayers. For example, veterans can face challenges transferring credits and continuing their education at a new school. This may make it more difficult for veterans to complete their degrees before exhausting their eligibility for Post-9/11 GI Bill benefits. School closures also pose a financial risk for the government and taxpayers due to the costs associated with restoring benefits.
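The dollar figures in the findings above follow from simple percentage arithmetic. As an illustrative check only (using the rounded figures reported, not underlying VA data), the sketch below confirms that a 30 percent share of roughly $4.5 billion is about $1.4 billion:

```python
# Illustrative check of the rounded Post-9/11 GI Bill payment figures cited above.
total_payments = 4.5e9  # ~$4.5 billion in FY2017 tuition and fee payments

# Approximate sector shares of total payments (public / nonprofit / for-profit)
shares = {"public": 0.40, "nonprofit": 0.30, "for-profit": 0.30}
by_sector = {sector: share * total_payments for sector, share in shares.items()}

# The 50 schools receiving the most payments accounted for ~30 percent of the total
top_50_payments = 0.30 * total_payments
print(f"Top 50 schools: ~${top_50_payments / 1e9:.2f} billion")  # ~$1.35 billion
```

The small gap between the computed $1.35 billion and the reported $1.4 billion reflects rounding in the published totals.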
gao_GAO-20-303
Background The overall objective of the Army Facilities Standardization Program is to achieve savings and benefits in the programming, design, and construction of Army facilities of excellence. To meet AFSP’s objectives in a timely, efficient, and cost-effective manner, the Army established the nine Centers in 2006 to support the AFSP, as shown in figure 1. The AFSP operates under the direction of the Army Facilities Standardization Committee (Committee). As shown in figure 2 below, the Committee is chaired by the Assistant Chief of Staff for Installation Management (ACSIM) and composed of members from the U.S. Army Corps of Engineers (USACE) and the U.S. Army Installation Management Command (IMCOM). Each of these offices has representatives who are either full-fledged or advisory members of the Centers of Standardization Management Board (the Board). The Board members directly oversee the activities of the Centers and are responsible for developing performance measures and reporting them to the Committee. The Centers have primary responsibility for developing and managing Army standard design packages for designated facility types. The Centers, among other things, ensure that these standard designs and construction of projects comply with two other sets of facility guidelines: DOD’s Unified Facilities Criteria (Facilities Criteria) and general Army standards. As we previously reported, the Facilities Criteria are overarching, DOD-wide technical manuals and standards used for planning, design, construction, restoration, and maintenance of DOD facility projects. These criteria must be used to the greatest extent possible by all DOD components. They are developed through the joint efforts of USACE, the Naval Facilities Engineering Command, and the Air Force Civil Engineer Center, and they are approved by the Engineer Senior Executive Panel of the Unified Facilities Criteria Program. 
According to Army Regulation 420-1, Army standards are the immutable, unchanging, required facility elements and criteria that define the fundamental purpose and function of a facility’s design and construction. These Army standards are authorized by the Committee. Army standard designs define the facility key components, features, and characteristics that must be included in the design and construction or major renovation of all facilities of the same type regardless of location, available funding, command preferences, or installation mission. Essentially, Army standard designs may consist of architectural and engineering drawings as well as written design specifications that a construction team can easily adapt or modify for site-specific requirements. Figure 3 below compares Army standard designs with Facilities Criteria and general Army standards. In addition to developing and managing Army standard design packages, the Centers’ staff function principally as engineering and architectural consultants within larger project teams as they monitor and oversee the appropriate use of Army standard designs (as well as any incorporated Army standards or Facilities Criteria). According to Centers officials, 12 full-time and 21 part-time staff are currently dedicated to the Centers. Staff are located in USACE headquarters in Washington, D.C., as well as in eight USACE districts and one Engineering and Support Center. Each Center specializes in and is responsible for specific facility types and their designs. While the Centers support the Army’s overall efforts for standardization, not every Army facility is built according to a standard design. Appropriate Centers staff are required to review every proposed Army construction project at its outset, and if an installation has requested a waiver from an existing Army standard or standard design, the voting members of the Committee may authorize the waiver in accordance with certain procedures. 
According to Centers officials, Army standard designs have been developed for about 70 regularly constructed facility types out of the Army’s nearly 900 facility types. For example, the Army has standard designs for fire stations, chapels, dining facilities, and weapons storage. (See appendix II for a listing of the 70 facility types that currently have standard designs or for which standard designs are under development.) According to Centers officials, the Centers’ 70 facility types account for approximately 60 percent of Army Military Construction (MILCON) projects and represent an estimated 55 percent to 70 percent of the overall Army MILCON budget for any given year. (See appendix III for information on the overall DOD standardization program, including the Navy and Air Force standard design programs.) In fiscal year 2019, the Centers reported a combined annual budget of about $6.2 million for their operations and personnel. The Centers Have Engaged in Activities That Support Key Objectives and Are Consistent with Key Principles and Concepts in OMB Guidance The Centers identified and engaged in a number of activities designed to support the key objectives found in their charter, and these activities are consistent with key principles and concepts in OMB guidance for a disciplined capital programming process. The Centers’ charter includes the following three objectives: (1) developing and refining Centers’ policies and processes; (2) assuring consistent application of standards of the Centers program; and (3) monitoring the Centers’ execution to meet the overarching objectives and priorities of the AFSP and standardization process. To meet the three objectives, the Centers engage in different activities throughout the military construction process. Figure 4 below shows the various points at which the Centers are involved in the life-cycle of a military construction project and examples of the activities in which the Centers engage. 
For example, Engineer Regulation 1110-3-113 states that during the design phase of projects, the Centers maintain a lead role and will be the technical lead for coordination, review, and acceptance of design deliverables, including providing field technical assistance, identifying and advising when a waiver is required and coordinating with appropriate authorities in this matter, and reviewing and editing requests for proposal documents—activities that, according to our analysis, support the Centers’ second objective. Based on our review of supporting documentation from five projects that used standard designs, we found that the Centers were undertaking the activities mentioned above. In addition, activities in which the Centers engaged during the design, construction, and post-construction phases of these projects were consistent with key principles and concepts in OMB guidance. Specifically, we found evidence that, for these five projects, Centers’ staff participated as integrated members of the project delivery teams in planning meetings, design reviews, assessments of the need for standard design waivers, value engineering studies, and life-cycle cost analyses during the projects’ design and construction phases. These activities were consistent with key principles and concepts in OMB guidance for a disciplined capital planning process, including that agencies should use integrated project teams, as appropriate, to manage the various capital programming phases or major acquisition programs within the agency. In addition, we found that other Centers’ activities—performing post-occupancy evaluations (POE) and updating standard designs when applicable—were also consistent with key principles and concepts in OMB guidance for a disciplined capital planning process. 
For instance, we found that a POE was completed for one project, a post-occupancy questionnaire was completed for another project, a POE was planned during fiscal year 2020 for a third project, and a fourth project was still under construction. According to OMB capital programming guidance, POEs are tools to evaluate the overall effectiveness of an agency’s capital acquisition process. The primary objectives of a POE include (1) identifying how accurately a project meets its objectives, expected benefits, and strategic goals of the agency and (2) ensuring the continual improvement of an agency’s capital programming process based on lessons learned. The guidance identifies factors to be considered for evaluation in conducting a POE, such as standards and compliance, customer/user satisfaction, and cost savings. The guidance also notes that a POE should generally be conducted 12 months after the project has been occupied, to allow time for the tenant to evaluate the building’s performance and relevant aspects of project delivery. However, the guidance allows agencies some flexibility in the timing of a POE to meet their unique needs if 12 months is not the optimal timing to conduct the evaluation. Our review of Centers guidance and project documents also found that the Centers’ activities supported the Centers’ objectives as well as AFSP objectives and priorities. In addition, Centers officials emphasized that the Centers participate in all Army standard design construction projects to ensure that the facility designs support the objectives of the AFSP, specifically improving the programming, design, and construction processes for Army facilities. As shown in table 1 and further outlined below, we assessed whether the Centers’ activities undertaken on standard design construction projects were applicable to the Centers’ objectives. Then, for those that were applicable, we determined whether those activities supported the Centers’ objectives. 
(See appendix IV for a detailed analysis of how the Centers’ activities support the program’s objectives.) Centers use POEs to evaluate standard designs: We found, for example, that the POEs led by the Centers are designed to evaluate whether the project met fundamental Army functional and mission requirements, whether the project implemented Army standard design, and whether improvements to the design could be made. These reviews support Centers objectives 1, 2, and 3—developing and refining Centers’ policies and processes, consistently applying Army standard designs, and supporting AFSP objectives and priorities—by identifying areas of the design needing improvement, evaluating whether a facility was constructed in accordance with the approved project design, and eliciting customer feedback concerning whether the finished facility meets mission requirements. Centers review standard design waivers: The Centers review an installation’s waiver request and advise whether a waiver to Army standards or standard designs is required for that specific project. This process supports Centers objectives 1, 2, and 3—developing and refining Centers’ policies and processes, consistently applying Army standard designs, and supporting AFSP objectives and priorities. Specifically, as part of the waiver review and approval process, the Centers assess whether a waiver request represents a unique need of a specific end user or a possible permanent change to the Army standard design or Unified Facilities Criteria. In addition, if the Centers waive the use of, or approve deviations from, a standard design prior to the beginning of the construction phase, doing so may reduce the number of change orders that occur during construction. 
Army Has Limited Performance Measures to Track the Centers’ Progress toward Key Objectives The Army, through its Centers of Standardization Management Board, is responsible for oversight of the Centers and has performance measures to track their progress in achieving one of their three key objectives. However, the Army does not have performance measures for assessing progress toward the other two objectives. Army’s Centers of Standardization Management Board Is Responsible for Oversight of the Centers The Board provides oversight to the Centers in support of the AFSP. The Board members are responsible for developing, implementing, and reporting on program metrics. The Centers’ Charter of 2006 broadly identifies the mission and objectives of the Board, while more recent program guidance and regulations describe its functions in more detail. The Charter states that the mission of the Board is to provide corporate oversight and consistent Centers execution in support of the AFSP. In overseeing the Centers, it is important that the Board have performance measures that provide it with evaluative information to help it make decisions about the program—information that tells it whether, and why, a program is working well or not. Performance measurement is the ongoing monitoring and reporting of program accomplishments, particularly progress toward pre-established goals. It is typically conducted by program or agency management and is critical for providing information concerning whether a program is working well or not. Performance measures may address the type or level of program activities conducted (processes), the direct products and services delivered by a program (outputs), or the results of those products and services (outcomes). Army’s Oversight Processes for the Centers Have Limited Performance Measures for Tracking Progress toward Achieving Centers’ Objectives The Army has a performance measure to support its first key objective. 
Each fiscal year, the nine Centers develop budget execution plans that outline how they will support the design standards for the specific facility types for which they have responsibility. In these plans, the Centers establish goals for updating specific existing standard designs and developing new standard designs (that is, the output from the Centers’ efforts). The Board’s primary oversight process consists of monitoring program execution of the nine Centers. According to Center officials, the Board reviews these execution plans at the semi-annual board meetings to determine whether the Centers are executing as planned; that is, whether the Centers have met their goals for updating and developing standard designs. We found that this oversight process enables the Board to assess the progress each of the Centers has made toward achieving its goals for updating existing standard designs and developing new ones. For example, in fiscal year 2017 the Fort Worth Center completed all four of its planned standard design updates, and the Honolulu Center completed three of its four planned updates. We also found that the Board does not evaluate progress toward ensuring that the Centers consistently apply standard designs across the Centers of Standardization program (second objective of the Centers). Specifically, as shown in table 1 above, the Centers engage in a number of activities that support the consistent application of Centers standards on a project-by-project basis. However, the Board does not maintain, consolidate, or analyze information about how frequently the Centers engage in such activities, or how the Centers’ activities affect the program. That is because, according to Army and Centers officials, neither the Board nor the Centers have developed and implemented performance measures to assess the progress the Centers are making in ensuring that standard designs are consistently used. 
Absent such measures, the Army lacks assurance that standard designs are being applied, when appropriate, and that standard designs are being applied consistently across the service. In fact, to provide the project-specific documentation that we reviewed, the Centers needed to request documents from the USACE district office responsible for the projects. According to Centers officials, this was necessary because the Centers currently do not have a document management system in which project documentation is stored. Instead, as the USACE organization responsible for specific projects, each district maintains its own project records. The officials stated that USACE recently moved to a cloud-based system for storing project documents and is exploring whether this system could provide a more central document storage system. We note that having access to such information, along with creating appropriate performance measures, could enable the Board to measure whether progress has been made in ensuring that standard designs are applied consistently. In addition, we found that the Board does not evaluate whether the Centers are making progress in supporting the objectives and priorities of the AFSP (third objective of the Centers). One of the objectives of the AFSP is to reduce design costs and time, construction costs and time, and the number of change orders issued during construction. Although Army and Centers officials told us that the use of standard designs reduces project costs, time, and change orders, they could not provide supporting data. That is because, according to Army and Centers officials, neither the Board nor the Centers have developed and implemented performance measures to assess the effects of the use of standard designs. Creating such measures could enable the Army to assess the extent to which the Centers are reducing design costs and time, construction costs and time, and the number of change orders issued. 
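To make the idea of such measures concrete, the sketch below shows one hypothetical way an effects measure for standard-design use could be computed. The project records, field names, and values are invented for illustration; the Army has not defined these measures:

```python
# Hypothetical sketch of an effects measure for standard-design use.
# All project records and field names below are invented for illustration.
projects = [
    {"standard_design": True,  "change_orders": 2, "cost_growth_pct": 3.1},
    {"standard_design": True,  "change_orders": 1, "cost_growth_pct": 1.4},
    {"standard_design": False, "change_orders": 5, "cost_growth_pct": 8.7},
]

def average(values):
    # Mean of a list; 0.0 for an empty group
    return sum(values) / len(values) if values else 0.0

def measure(projects, used_standard):
    # Summarize change orders and cost growth for one group of projects
    group = [p for p in projects if p["standard_design"] is used_standard]
    return {
        "avg_change_orders": average([p["change_orders"] for p in group]),
        "avg_cost_growth_pct": average([p["cost_growth_pct"] for p in group]),
    }

with_std = measure(projects, True)
without_std = measure(projects, False)
print("With standard designs:   ", with_std)
print("Without standard designs:", without_std)
```

Tracking comparisons of this kind over time would give the Board evidence of whether standard designs are in fact reducing change orders and cost growth, as the AFSP intends.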
DOD’s Fiscal Year 2020 Annual Performance Plan and Fiscal Year 2018 Annual Performance Report established a goal of simplifying, delivering faster, and reducing costs of product and service procurement. One of the performance measures associated with this goal was to reduce cost overruns and schedule delays by up to 50 percent for military construction projects. Developing and implementing performance measures related to reducing design costs and time, construction costs and time, and the number of change orders issued would enable the Centers to demonstrate the extent to which they are supporting DOD’s annual performance goals. Use of Standard Design Does Not Introduce Increased Liability to Facility Projects We found that the use of standard designs does not introduce increased liability for the Centers if issues arise during a construction project. Centers officials stated that a contractor could file a claim against the government if the contractor felt there was a flaw in the Army’s standard design or that using the standard design resulted in unanticipated costs during the design or construction phase. However, Centers officials stated that there have been no instances in which any of the Centers was a party to legal action related to the use of a standard design. According to Centers officials, the design for a facility project is typically developed by one of the USACE district offices or an architect-engineer contractor. Further, these officials stated that while the pertinent Army standard design guides the development of Army project designs, the final project design, certified by the USACE district office or an architect-engineer contractor, represents the plan for a specific project. 
In addition, according to the Federal Acquisition Regulation (FAR), the architect-engineer contractor is responsible for the professional quality, technical accuracy, and coordination of all designs, drawings, specifications, and other services furnished by the contractor under its contract. Furthermore, the FAR states that the contractor shall, without additional compensation, correct or revise any errors or deficiencies in its designs, drawings, specifications, and other services. The FAR also stipulates that the contractor may be liable for government costs resulting from errors or deficiencies in designs furnished under the contract. Consequently, according to USACE officials, because the Centers are not responsible for the design of a specific project, they would not have increased liability in the event that changes were required during construction. Conclusion The Centers of Standardization develop and update Army standards and Army standard designs within the Army Facilities Standardization Program. In addition, the Centers are responsible for ensuring that the design and construction of Army military construction projects comply with approved Army standards and Unified Facilities Criteria. While the Army tracks the Centers’ program execution related to the Centers’ efforts to develop new and update existing standard designs (first objective of the Centers), it does not have performance measures for assessing progress toward the Centers’ other two objectives. 
Specifically, the Army does not have performance measures in place to assess the progress the Centers have made toward assuring consistent application of standards from the Centers’ program (second objective of the Centers) or monitoring the Centers’ execution to meet the overarching objectives and priorities of the AFSP and standardization process (third objective of the Centers), including, among other things, reducing design costs and time, construction costs and time, and change orders during construction. This hinders the Centers’ ability to determine how well they are supporting the objectives of both the Army Facilities Standardization Program and DOD’s annual performance plans, as well as the Centers’ ability to demonstrate the extent to which they are achieving their objectives. Recommendations for Executive Action We are making two recommendations to the Secretary of the Army. The Secretary of the Army should ensure that the Assistant Chief of Staff for Installation Management, in conjunction with the Centers of Standardization and the U.S. Army Corps of Engineers, establish and implement performance measures to assess the progress the Centers are making in ensuring that standard designs are used consistently. (Recommendation 1) The Secretary of the Army should ensure that the Assistant Chief of Staff for Installation Management, in conjunction with the Centers of Standardization and the U.S. Army Corps of Engineers, establish and implement performance measures to assess the effects of the use of standard designs, specifically the progress the Centers are making in reducing design costs and time, construction costs and time, and change orders. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to the Department of the Army for review and comment. In its written comments, the Army concurred with both of our recommendations, and stated it would take actions to implement them. 
The Army’s comments are printed in their entirety in appendix V. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense and the Secretaries of the Army, Navy, and Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Diana Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Appendix I: List of Projects GAO Reviewed Appendix II: Facility Types Supported by Centers of Standardization According to Centers officials, a total of 12 full-time and 21 part-time staff are assigned to the Centers of Standardization. Each Center specializes in and is responsible for specific facility types and their designs. Table 3 below lists the current staffing levels and the facility types supported by each of the Centers. Appendix III: Department of Defense Standardization Program The Department of Defense’s (DOD) department-wide standardization program has the goals of improving military operational readiness, reducing total ownership costs, and reducing cycle time. Overseen by the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)), the Department of Defense Standardization Program is described in DOD Manual 4120.24, which outlines its governing council, definitions, and procedures that apply to all components within the department. Under the Defense Standardization Program, DOD component heads ensure that materiel standardization, including information technology and facilities, is addressed throughout the acquisition process. 
The three overarching goals of the Defense Standardization Program are to (1) improve military operational readiness, (2) reduce total ownership costs of the department, and (3) reduce cycle times. The manual also defines the following terms: Standard. A document that establishes uniform engineering or technical criteria, methods, processes, and practices. Standardization. The process of developing and agreeing on (by consensus or decision) uniform engineering criteria for products, processes, practices, and methods for achieving compatibility, interoperability, interchangeability, or commonality of materiel. Defense standard. A document that establishes uniform engineering and technical requirements for military-unique or substantially modified commercial processes, procedures, practices, and methods. There are five types of defense standards: interface standards, design criteria standards, manufacturing process standards, standard practices, and test method standards. DOD’s Unified Facilities Criteria (Facilities Criteria) and Unified Facilities Guide Specifications (UFGS) provide facility planning, design, construction, operation and maintenance, sustainment, restoration, and modernization criteria for facilities owned by DOD. The Facilities Criteria contain technical guidance; introduce new and innovative technology; or provide mandatory requirements to implement laws, regulations, executive orders, and policies prescribed by higher authority documents. The Facilities Criteria also define performance and quality requirements for facilities to support their mission throughout their life cycle. According to DOD guidance, the Facilities Criteria provide the most current operationally effective, cost-efficient, and safe criteria at the time of publication. Both the Facilities Criteria and UFGS are developed through the joint efforts of the U.S. 
Army Corps of Engineers, the Naval Facilities Engineering Command, and the Air Force Civil Engineer Center, and are approved by the Engineer Senior Executive Panel of the Unified Facilities Criteria Program. The Facilities Criteria and UFGS systems were designed not only to establish uniformity among defense facilities, but to standardize and streamline the process for developing, maintaining, and disseminating construction criteria. The procedures for the development and maintenance of the Facilities Criteria and UFGS are outlined in Military Standard 3007G, which is updated by the Engineer Senior Executive Panel. Each military department (Army, Navy, and Air Force) has its own facilities standardization program that implements the Facilities Criteria and UFGS as well as service-specific facilities criteria, standards, and guides. The Army’s program, known as the Army Facilities Standardization Program (AFSP), is the oldest among the three departments, having been initiated in 1993. Due largely to the unique construction needs of the Army, the AFSP is the most complex and comprehensive of the facility standardization programs. It utilizes two levels of guidance for standardized facility types: a broad standard, called “Army Standards,” and a specific standard, called “Standard Design.” The Department of the Navy program began in 2014 and provides policy and standards for the design, development, and revision of Navy project documents in Navy and Marine Corps Design and Facilities Criteria, while the Air Force program was started in 2016 and provides criteria in an Air Force Instruction for design and construction of Air Force facilities. 
Appendix IV: Crosswalk of Key Centers of Standardization Activities and Objectives The Centers of Standardization (Centers) undertake a number of activities designed to support the key objectives found in their charter, which includes supporting the objectives of the Army Facilities Standardization Program (AFSP). Table 4 identifies each of these activities along with the specific objectives that we determined the activities support. The Centers’ charter objectives are: (1) developing and refining Centers of Standardization policies and processes; (2) assuring consistent application of standards of the Centers’ program; and (3) monitoring the Centers’ execution to meet the overarching objectives of the AFSP and standardization process. The AFSP’s objectives include: increased credibility with the Congress through more consistent construction program development; increased consistency in facility types with equal treatment among Army Commands, installations, and users; improved master planning and site development activities; improved design quality and the promotion of design excellence; simplified design and construction project management; reduced design costs and time, reduced construction costs and time, and reduced change orders during construction; and increased customer satisfaction through improved responsiveness to users’ functional and operational requirements. Appendix V: Comments from the Department of the Army Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments
Why GAO Did This Study

In 2006, the U.S. Army Corps of Engineers began its Centers of Standardization program to develop design standards for facility types that the Army constructs on a regular basis. The Centers support broader Army efforts under the AFSP to standardize facility types with objectives such as improving design quality, reducing design and construction costs and time, and reducing change orders. Senate Report 115-262 accompanying the John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to evaluate the Centers' effectiveness. This report assesses, among other things, the extent to which (1) the Centers have identified activities that support their objectives, and (2) the Army tracks the Centers' progress toward their objectives. GAO reviewed and analyzed applicable regulations and program and project documentation; compared Center activities to program objectives; and interviewed cognizant agency officials to gain an understanding of the Centers' operations and potential financial liabilities.

What GAO Found

The nine Centers of Standardization (Centers) within the U.S. Army Corps of Engineers undertake a number of activities designed to support each of their program objectives. Their charter includes three objectives: (1) developing and refining Centers' policies and processes; (2) assuring consistent application of the Centers' standards; and (3) monitoring execution to meet the overarching objectives and priorities of the Army Facilities Standardization Program (AFSP) and standardization process. We found that the Centers' various activities, such as conducting value engineering and life-cycle cost studies to identify possible cost savings and analyze long-term costs of new facilities, are consistent with key principles and concepts in Office of Management and Budget guidance for a disciplined capital planning process.
Additionally, the post-occupancy evaluations led by the Centers are designed to evaluate whether the Army functional requirements have been met, Army standard design has been implemented, and there are any areas where the design could be improved. These evaluations support all three of the Centers' objectives by evaluating whether a design needs improvement, a facility was constructed in accordance with the approved project design, and customer needs were met. The Army has limited performance measures to track the Centers' progress in achieving program objectives. Semi-annual meetings of the Army's Centers of Standardization Management Board (Board) enable the Army to track the Centers' progress toward their goal of developing and updating Center policies and processes (first objective of the Centers). However, GAO found that the Army lacks performance measures to assess the Centers' progress in ensuring the consistent application of Army standard designs (second objective of the Centers) and in monitoring how well the Centers meet the objectives and priorities of the AFSP and standardization process (third objective of the Centers). Specifically, the Board does not maintain, consolidate, or analyze information about how frequently the Centers participate in construction projects, or how this activity affects the program and supports AFSP objectives, such as reducing project costs, times, and change orders. Taking steps to develop and implement appropriate performance measures would enhance the Army's efforts to ensure that the Centers are meeting their program objectives.

What GAO Recommends

GAO is recommending that the Army establish performance measures to assess the Centers' progress to (1) ensure the consistent use of standard designs and (2) reduce construction costs and time and reduce the occurrence of change orders. The Army concurred with our recommendations.
Background

U.S. Oceanic Airspace

FAA, within the U.S. Department of Transportation, provides air traffic services for the continental United States (domestic airspace) and over parts of the Atlantic, Pacific, and Arctic oceans (oceanic airspace). More than 24 million square miles of oceanic airspace are under U.S. control. This airspace is divided into flight information regions (flight regions): Anchorage Arctic, Anchorage Oceanic, New York Oceanic, and Oakland Oceanic.

An air traffic service (ATS) route is a specified route designed to channel the flow of traffic as necessary for the provision of air traffic services. ATS routes are defined by predetermined geographical positions, called waypoints. For example, ATS route G344 is published by FAA and is defined by waypoints. An organized track system is a series of ATS routes; for example, A590, R591, and G344, along with other ATS routes, comprise the North Pacific Route System.

In areas with high flight volume, such as between California and Hawaii, FAA publishes air traffic service (ATS) routes that allow air traffic controllers to handle large volumes of traffic. A set of ATS routes, an organized track system, functions as a freeway in the sky, with routes serving as lanes (see sidebar). ATS routes may be "fixed" or "flexible." A fixed route does not change, whereas a flexible route changes daily depending on weather patterns, such as prevailing winds. As detailed in industry reports, multiple factors, including weather conditions, congestion, and airspace restrictions, affect whether aircraft operators plan to fly on ATS routes published by FAA or on routes they determine to be the most efficient for that flight (i.e., user-preferred routes). Figure 1 shows U.S. oceanic airspace and the location of various organized track systems. To fly through U.S.
oceanic airspace, aircraft operators (e.g., airlines) file a flight plan, which includes the departure and arrival airports and the planned route (i.e., the path the aircraft plans to take to get to its destination). Air traffic control may clear the flight plan as filed, with no changes, or make changes to an aircraft's planned route during the flight.

Managing Air Traffic

To manage air traffic, air traffic controllers must be able to monitor an aircraft's position as it flies along its planned route. As we have previously reported, in domestic airspace, radar and ground-based Automatic Dependent Surveillance-Broadcast (ADS-B) technology provide this surveillance information. Radar is a ground-based system that provides information on an aircraft's position to air traffic control facilities. Ground-based ADS-B uses equipment installed in aircraft (transmitters) to broadcast an aircraft's position, altitude, and other information to ground stations, which transmit the data to air traffic control facilities. Surveillance information from radar and ADS-B is nearly instantaneous, allowing domestic air traffic controllers to effectively "see" where an aircraft is at all times. FAA manages radar and ground-based ADS-B infrastructure, in some cases through contracts. Through its contract with the provider of ADS-B services, FAA also pays for the cost of transmitting ADS-B messages from aircraft to air traffic control in domestic airspace.

Future Air Navigation System (FANS) Equipage in U.S. Oceanic Airspace

By 2020, FAA estimates that about 80 percent of aircraft flying in U.S. airspace above the Atlantic Ocean will be equipped with FANS, as will 84 percent of aircraft flying in U.S. airspace above the Pacific Ocean. However, FANS equipage varies within these airspaces. In the New York flight region, specifically along the West Atlantic Route System, FAA estimates that by 2020 the FANS equipage rate will be 66 percent, lower than other sectors of Atlantic airspace.
Similarly, in the Oakland flight region, along the Central East Pacific Route System, FAA estimates that by 2020 the FANS equipage rate will be 75 percent, lower than found in other sectors of Pacific airspace.

In oceanic airspace, controllers receive reports on an aircraft's position from a radio operator who receives verbal updates from pilots using a high frequency radio, or automatically through a technology called the Future Air Navigation System (FANS):

High frequency radio allows pilots to speak with a third-party radio operator and share surveillance information via spoken position reports at mandatory reporting points. The radio operator then relays position reports as a data message to air traffic controllers.

FANS includes a communication system, Controller Pilot Data Link Communications (CPDLC), and a surveillance system, Automatic Dependent Surveillance-Contract (ADS-C). CPDLC allows pilots and air traffic controllers to communicate directly by exchanging text-based messages. Through ADS-C, air traffic control can request position reports and specify their frequency as well as the information they should include. As we have previously reported, position reports sent through ADS-C can transmit at defined time intervals, when specific events occur such as pilot deviation from the planned route, or at the request of air traffic control. ADS-C reports sent at a defined time interval are called periodic reports; in U.S. oceanic airspace these are typically sent every 10 to 14 minutes.

As detailed in an industry report, aircraft operators pay to use the satellite communication networks required to transmit communication and surveillance information to air traffic control in oceanic airspace. In addition, aircraft operators are responsible for the cost of equipping their aircraft with communication, navigation, and surveillance equipment. To help them manage oceanic airspace, U.S. air traffic controllers use a computer system called Advanced Technologies and Oceanic Procedures (ATOP).
ATOP is a flight data processing system that controllers use at their workstations. It provides oceanic air traffic controllers with several automated tools to assist in maintaining aircraft at safe distances from one another, coordinating with air traffic controllers in other flight regions, and facilitating controller-pilot communication through CPDLC, among other things. ATOP incorporates information from aircraft flight plans and position reports, allowing controllers to monitor an aircraft's progress, ensure it is following the route cleared by air traffic control, and continually check for any potential conflicts between aircraft flying through their area of control, i.e., aircraft that could get too close to one another.

Oceanic Separation Standards

Separation standards, the minimum distances required between aircraft, help ensure that aircraft do not collide with one another. As illustrated in figure 2, separation standards dictate the minimum required longitudinal, lateral, and vertical distance between aircraft. The International Civil Aviation Organization (ICAO) publishes minimum separation standards for oceanic airspace. Using ICAO separation standards as the minimum, FAA sets the separation standards and aircraft requirements that are used in U.S. oceanic airspace. Currently, the minimum distance that must be maintained between aircraft in U.S. oceanic airspace is 30 nautical miles lateral and 30 nautical miles longitudinal. To be eligible for this U.S. oceanic minimum separation standard, an aircraft must be equipped with FANS, in addition to meeting other communication, navigation, and surveillance requirements. For aircraft without FANS, the minimum distance required between aircraft is larger, at least 50 nautical miles lateral and approximately 80 nautical miles longitudinal. While requiring more distance between aircraft helps ensure safety, it means less airspace capacity and may result in fewer direct and fuel-efficient routes.
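The equipage-based separation minima discussed in this report can be summarized as a simple lookup. The sketch below is an illustrative simplification, not FAA or ATOP logic; the dictionary, function name, and capability labels are assumptions, while the nautical-mile values come from the standards cited in this report (current FANS minima, non-FANS minima, and the ICAO minima proposed for enhanced ADS-C and space-based ADS-B).

```python
# Illustrative sketch only (not FAA logic): minimum (lateral, longitudinal)
# separation in nautical miles, keyed by an aircraft's surveillance capability.
SEPARATION_MINIMA_NM = {
    "no_fans": (50, 80),           # non-FANS: at least 50 NM lateral, ~80 NM longitudinal
    "fans": (30, 30),              # FANS: current U.S. oceanic minimum
    "enhanced_ads_c": (23, 20),    # enhanced ADS-C: standards FAA plans to apply by 2022
    "space_based_ads_b": (19, 17), # space-based ADS-B: proposed ICAO minima
}

def minimum_separation(capability: str) -> tuple:
    """Return the (lateral_nm, longitudinal_nm) minima for a capability level."""
    return SEPARATION_MINIMA_NM[capability]

lateral, longitudinal = minimum_separation("fans")
print(lateral, longitudinal)  # 30 30
```

The point of the lookup is the monotonic trade-off the report describes: better surveillance capability allows smaller minima, and smaller minima mean more aircraft can share the same altitudes and routes.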
To maintain the required separation distance between aircraft, air traffic control may instruct an aircraft—either before or during flight—to fly at an altitude or along a route that is not the most efficient for that aircraft in terms of flight time or fuel usage. For example, aircraft spaced 50 nautical miles apart laterally and longitudinally are less likely to be able to fly at a fuel-efficient altitude (e.g., 38,000 feet) as fewer aircraft will fly at that altitude, especially in congested airspace. In contrast, when aircraft are spaced 30 nautical miles apart laterally and longitudinally, more aircraft can fly at fuel-efficient altitudes. FAA may adopt ICAO’s minimum separation standards for the oceanic airspace it manages or it can adopt standards that require aircraft to fly farther apart than ICAO’s minimum standards. For example, ICAO published the minimum separation standard for 30 nautical miles longitudinal in 2002. FAA began applying these minimum separation standards in the Oakland Oceanic flight region in 2007, in the Anchorage Oceanic flight region in 2012, and in the New York Oceanic flight region in 2013. In 2016, ICAO published a new minimum separation standard, which allows a minimum lateral distance of 23 nautical miles. FAA has not yet adopted the 23 nautical mile lateral standard. Since 2012, ICAO has worked to develop new minimum separation standards for oceanic airspace that require even less distance between properly equipped aircraft. These new minimum separation standards are based on improved surveillance capabilities, with aircraft using space-based ADS-B potentially eligible to use one set of reduced minimum separation standards (19 nautical miles lateral and 17 nautical miles longitudinal) and aircraft using enhanced ADS-C potentially eligible to use a different set of minimum separation standards (23 nautical miles lateral and 20 nautical miles longitudinal). 
These new minimum separation standards are undergoing review, with final approval expected in 2020.

Enhanced Surveillance Technologies

FAA's Advanced Surveillance Enhanced Procedural Separation (ASEPS) program, which is part of FAA's Air Traffic Organization, was tasked with examining how to increase the efficiency and capacity of operations in U.S. oceanic airspace using enhanced surveillance technologies. In fiscal years 2015 through 2018, congressional committees directed FAA to accelerate its evaluation of space-based ADS-B and provided funding for that purpose. In response, the ASEPS program, among other things, evaluated and compared the costs and benefits of two technologies that could improve surveillance capabilities in U.S. oceanic airspace: enhanced ADS-C and space-based ADS-B. Following are descriptions of how these enhanced surveillance technologies work:

Enhanced ADS-C. Uses the same ADS-C technology already installed on FANS-equipped aircraft, but ATOP would request that automatic position reports be sent more frequently to air traffic control. Aircraft equipped with ADS-C and transmitting position reports every 3.2 minutes would be eligible for ICAO's proposed minimum separation standard of 20 nautical miles longitudinal. ICAO's 23 nautical miles lateral separation standard, published in 2016, does not require more frequent ADS-C position reports.

Space-based ADS-B. Uses low-earth orbiting satellites to capture automatic reports broadcast by ADS-B transmitters installed on aircraft, which will be required for aircraft flying at certain altitudes in domestic U.S. airspace by 2020. ADS-B messages are to be received by air traffic control about every 8 seconds.
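The difference in surveillance update rates implied by these intervals can be worked out directly. The helper below is a hypothetical illustration (the function is not from any FAA system) using the intervals cited in this report: every 10 to 14 minutes for periodic ADS-C, every 3.2 minutes for enhanced ADS-C, and about every 8 seconds for space-based ADS-B.

```python
# Hypothetical illustration: position updates per hour implied by the
# reporting intervals discussed in this report.
def reports_per_hour(interval_seconds: float) -> float:
    return 3600 / interval_seconds

ads_c_periodic = reports_per_hour(12 * 60)    # midpoint of 10-14 minutes
enhanced_ads_c = reports_per_hour(3.2 * 60)   # every 3.2 minutes
space_based_ads_b = reports_per_hour(8)       # about every 8 seconds

print(ads_c_periodic, enhanced_ads_c, space_based_ads_b)  # 5.0 18.75 450.0
```

The arithmetic makes the gap concrete: enhanced ADS-C roughly quadruples today's periodic update rate, while space-based ADS-B delivers on the order of a hundred times more updates per hour than periodic ADS-C.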
Aircraft equipped with ADS-B transmitters using the space-based ADS-B system, and also equipped with required communication and navigation technologies, would meet the eligibility requirements for ICAO's proposed minimum separation standards of 19 nautical miles lateral and 17 nautical miles longitudinal. As shown in figure 3, enhanced ADS-C and space-based ADS-B use similar transmission networks but relay different information at different time intervals to air traffic control. To compare these options, FAA prepared a business case analysis that estimated the costs to the agency and aircraft operators, identified safety benefits from enhanced surveillance, and identified and calculated the value of operational efficiency benefits from using reduced minimum separation standards enabled by enhanced ADS-C and space-based ADS-B. For more detail on the costs and benefits included in FAA's business case analysis, see appendix IV. FAA used this business case analysis to inform its decision on which enhanced surveillance technology to use to support new minimum separation standards.

FAA Is Implementing New Oceanic Separation Standards in the Near Term and Will Study Options to Enhance Surveillance

FAA is implementing new minimum separation standards supported by enhanced ADS-C in U.S. oceanic airspace. FAA does not plan to use space-based ADS-B in U.S. oceanic airspace; instead, the agency intends to study how to use space-based ADS-B in other U.S. airspace over the next 5 years. According to FAA, this approach is driven by its analysis of the costs and benefits of each enhanced surveillance technology and the safety and operational challenges of using space-based ADS-B in U.S. oceanic airspace.

FAA Intends to Implement New Minimum Separation Standards Using Enhanced ADS-C in U.S. Oceanic Airspace by 2022

According to FAA officials and based on project status reports, FAA is implementing new minimum separation standards in U.S.
oceanic airspace that are supported by enhanced ADS-C. The agency plans to apply these standards in all sectors of U.S. oceanic airspace by 2022, as shown in figure 4. Specifically, FAA will begin operational use of the 23 nautical mile lateral separation standard in U.S. oceanic airspace in 2021 and the 20 nautical mile longitudinal separation standard in 2022. In April 2019, FAA executives approved a schedule and funding for the implementation of these new minimum separation standards (i.e., 23 nautical miles lateral and 20 nautical miles longitudinal) in U.S. oceanic airspace using enhanced ADS-C. To implement these new standards, FAA officials are upgrading ATOP and working through a review process required to change minimum separation standards in U.S. oceanic airspace. This review process involves 18 milestones, including safety assessments, coordination with industry and international participants, and development of procedures and training materials for pilots and air traffic controllers. According to FAA officials, the costs and benefits of pursuing this approach, using enhanced ADS-C to support the adoption of new minimum separation standards (i.e., 23 nautical miles lateral and 20 nautical miles longitudinal), drove this decision. Specifically, FAA found that the benefits to airspace users of using enhanced ADS-C to enable new minimum separation standards, such as improved access to fuel-efficient altitudes, outweighed the total costs by 2 to 1, including FAA's costs to upgrade ATOP and the aircraft operators' data costs due to more ADS-C position reports. In addition, FAA officials said that although new minimum separation standards can provide benefits to airspace users overall, the current minimum separation standards support safe operations for current and anticipated levels of air traffic in U.S. oceanic airspace.
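The 2-to-1 finding is a benefit-cost ratio, the same comparison FAA's business case applies to each technology option. The minimal sketch below uses hypothetical dollar figures, since the report cites only the ratios (about 2:1 benefits-to-costs for enhanced ADS-C, and later costs exceeding benefits by about 6:1 for space-based ADS-B), not the underlying totals; the function names are illustrative.

```python
# Minimal benefit-cost sketch with hypothetical figures; the report gives
# only ratios, not the underlying cost and benefit totals.
def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    return total_benefits / total_costs

def positive_business_case(total_benefits: float, total_costs: float) -> bool:
    """A business case is positive when benefits exceed costs (ratio > 1)."""
    return benefit_cost_ratio(total_benefits, total_costs) > 1

print(positive_business_case(200, 100))  # a 2:1 case, like enhanced ADS-C -> True
print(positive_business_case(100, 600))  # a 1:6 case, like space-based ADS-B -> False
```

This threshold is why, as described below, FAA proceeded with enhanced ADS-C but deferred an investment decision on space-based ADS-B.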
Officials noted that the benefits to airspace users of new minimum standards are contingent on the communication, navigation, and surveillance capabilities of aircraft in an airspace and the frequency of disruptive weather patterns. According to FAA officials and air traffic controllers we spoke with, the current minimum separation standards (i.e., 30 nautical miles lateral and longitudinal) are rarely used, as the density of aircraft traffic in U.S. oceanic airspace does not require such close spacing. In areas of U.S. oceanic airspace with higher traffic volumes, such as along the West Atlantic Route System and the Central East Pacific Route System, the number of aircraft without FANS and the frequency of disruptive weather patterns often prevent air traffic controllers from applying current minimum separation standards. Officials noted that they are also implementing the new minimum separation standards to harmonize with adjacent air navigation service providers. FAA's ability to implement these new minimum separation standards (i.e., 23 nautical miles lateral and 20 nautical miles longitudinal) within the documented time frames depends on the success of planned ATOP upgrades. For example, FAA officials and air traffic controllers we spoke to told us that ATOP currently has a limitation: under certain circumstances, air traffic controllers cannot rely on the system to ensure that minimum longitudinal separation distances are maintained. As a result, air traffic controllers cannot grant requests from aircraft flying at the current minimum longitudinal separation distance to deviate from their planned route for reasons such as avoiding disruptive weather or turbulence. Representatives of the union that represents FAA air traffic controllers told us this limitation must be resolved before new separation standards (i.e., 23 nautical miles lateral and 20 nautical miles longitudinal) can be safely applied.
FAA officials told us that they have developed an ATOP software upgrade that could resolve this issue; the upgrade is scheduled to occur in 2021. However, if this upgrade does not resolve the issue or it takes longer to resolve than planned, implementation of the new minimum separation standards could be delayed.

Due to Cost, Safety, and Operational Concerns, FAA Plans to Study Space-Based ADS-B in Other U.S. Airspace

Cost, Safety, and Operational Concerns

According to FAA officials, the cost of space-based ADS-B was a major factor in their decision not to use this technology in U.S. oceanic airspace. FAA's initial business case analysis found that the costs of using space-based ADS-B to enable reduced separation outweighed the benefits. Specifically, the estimated subscription costs to access the data collected by space-based ADS-B and needed upgrades to ATOP outweighed the estimated benefits to airspace users by 6 to 1. As mentioned above, according to FAA officials, current minimum separation standards allow safe operations for current and anticipated levels of air traffic in U.S. oceanic airspace. Therefore, without a positive business case (i.e., benefits larger than costs), FAA officials decided they could not pursue this enhanced surveillance option for U.S. oceanic airspace. FAA officials we interviewed also had safety concerns about using space-based ADS-B to manage reduced separation in U.S. oceanic airspace at this time. Specifically, FAA officials told us that the data ICAO used to model the safety of these standards, namely air traffic control response times and rates of approved and unapproved aircraft weather deviations, did not reflect the operational considerations for most of the U.S. oceanic airspace.
For example, the ICAO panel responsible for analyzing the safety of the proposed minimum separation standards enabled by space-based ADS-B used data from the North Atlantic on the number of times aircraft deviate without authorization from their expected flight plan due to weather conditions. According to FAA officials, other oceanic regions, especially U.S. oceanic airspace, experience a higher frequency of these deviations. As a result, FAA officials do not plan to use the new minimum separation standards enabled by space-based ADS-B (i.e., 19 nautical miles lateral and 17 nautical miles longitudinal) until FAA can further address how to implement these standards in U.S. oceanic airspace. FAA officials we interviewed also had operational concerns about using space-based ADS-B with ATOP to manage separation between aircraft in U.S. oceanic airspace. Specifically, FAA officials told us that ATOP is designed to use information in ADS-C position reports (i.e., an aircraft's current location, the next waypoint the aircraft will pass and at what time, and the subsequent waypoint the aircraft will pass) to determine potential conflicts in aircraft flight paths. Without this information, ATOP would not receive the data it uses to detect conflicts within the next 2 hours of a flight, according to FAA officials. ADS-B messages do not include this information and, therefore, space-based ADS-B would not replace ADS-C in U.S. oceanic airspace. Due to these cost, safety, and operational concerns with using space-based ADS-B to enable reduced separation, the ASEPS program deferred a decision, originally scheduled for September 2018, on whether to invest in using space-based ADS-B in U.S. oceanic airspace. FAA officials said that while they have not yet found a positive business case for using space-based ADS-B in U.S. oceanic airspace, they will further study space-based ADS-B in U.S. offshore and oceanic airspace.
According to FAA officials, they expect further study to identify additional benefits and resolve operational challenges to using space-based ADS-B.

FAA's Plans to Study Space-Based ADS-B in U.S. Offshore and Oceanic Airspace

FAA officials and documents indicate that the agency has near-term, medium-term, and long-term plans with goals, milestones, and time frames to evaluate how to use space-based ADS-B in U.S. airspace over the next 5 or more years. These plans include an operational evaluation and other studies to assess the uses and benefits of space-based ADS-B in U.S. airspace. FAA officials told us they expect to use findings from the near-term operational evaluation to inform medium-term and long-term plans. According to FAA officials and documentation, the ASEPS program intends to conduct an operational evaluation of space-based ADS-B in U.S. offshore airspace managed by controllers based in Miami, as shown in figure 5. FAA officials told us that this operational evaluation will assess space-based ADS-B with the computer system used by domestic air traffic controllers, the En Route Automation Modernization (ERAM) system. The operational evaluation will also focus on how to use space-based ADS-B in the heavily traveled airspace between the U.S. East Coast and islands in the Caribbean and assess potential benefits. As detailed by FAA officials, a radar located on Grand Turk Island provides critical data to U.S. air traffic controllers and enables the use of domestic separation standards of 5 nautical miles in this airspace. When this radar is out of service, which happens on a regular basis, aircraft traversing the airspace between Florida and Puerto Rico must be spaced using oceanic separation standards (e.g., separation distances of 30 nautical miles or greater). According to an industry report and FAA officials, this situation leads to re-routes and delays, which negatively affect airline operations.
Using space-based ADS-B as a back-up surveillance system would ensure that even when the Grand Turk radar fails, U.S. air traffic control can continue to manage air traffic using domestic separation standards. In 2021, once the operational evaluation is complete, the ASEPS program expects to make recommendations to FAA executives on how to use space-based ADS-B in the Miami oceanic flight region, in addition to other areas. FAA officials also said that this evaluation will allow the agency to test space-based ADS-B in an operational environment and that the findings can inform its medium-term and long-term plans for using space-based ADS-B. The use of space-based ADS-B in this airspace could also result in more direct routes between the U.S. East Coast and islands in the Caribbean. According to FAA officials and documentation, the ASEPS program expects to study additional potential benefits of space-based ADS-B over the next 3 to 5 years. These medium-term initiatives are expected to:

Analyze the use of space-based ADS-B for contingency operations in U.S. airspace. This study would define where space-based ADS-B can be used to provide surveillance capabilities when ground-based infrastructure (e.g., radar) is unavailable, such as after a hurricane. As part of this plan, the ASEPS program would also identify upgrades that would be needed for air traffic control computer systems to support using space-based ADS-B.

Analyze operational challenges in U.S. oceanic airspace and potential solutions. This study of U.S. oceanic airspace would include a data-driven analysis of the use of, and constraints on the use of, user-preferred routes by aircraft in U.S. oceanic airspace.
In addition to providing information on potential inefficiencies in oceanic airspace operations, the analysis will cover how to mitigate potential safety hazards related to the use of space-based ADS-B in the oceanic environment and the requirements for future upgrades to ATOP to support the use of space-based ADS-B. According to FAA officials, both medium-term initiatives would result in recommendations for consideration by FAA executives in 2021. Using space-based ADS-B for contingency operations could lead to updated air traffic control procedures and computer upgrades; however, this would depend on the results of the analysis and the approval of FAA executives. The analysis of user-preferred routes in oceanic airspace could lead to recommendations on how to optimize route systems and how to use space-based ADS-B to support the use of user-preferred routes. According to FAA officials and documentation, using space-based ADS-B to enable the use of new minimum separation standards in U.S. oceanic airspace will be reviewed and evaluated over the next 5 or more years. This long-term initiative will use information learned through the near-term and medium-term plans. As part of this initiative, the ASEPS program intends to investigate options for enhanced communication technologies and encourage industry development of these technologies. As with the medium-term initiatives, the ASEPS program expects to make recommendations to FAA executives on how to proceed with this plan in 2021. Based on the results of this initiative, program officials told us they could start preparing for an investment decision on using space-based ADS-B in oceanic airspace to enable the use of new minimum separation standards in 2025 or later. 
Selected Aviation Stakeholders Support FAA's Overall Approach to Enhancing Surveillance and Identified Expected Benefits from Reducing Separation

Most Selected Airlines Support FAA's Overall Approach to Enhancing Surveillance

Most (11 of 14) of the selected airlines we interviewed and surveyed support FAA's approach to enhance surveillance capabilities in U.S. oceanic airspace by pursuing enhanced ADS-C and adopting new minimum oceanic separation standards of 23 nautical miles lateral and 20 nautical miles longitudinal in the near term. Most (12 of 14) also support continuing to evaluate how to use space-based ADS-B in oceanic airspace. Of those selected airlines that did not support FAA's approach, the reasons included concern that using enhanced surveillance technologies will increase operator costs with no clear benefits and that FAA is prioritizing enhanced ADS-C over space-based ADS-B despite the safety and technological advances the latter would enable. While most selected airlines (12 of 14) were satisfied or very satisfied with how FAA manages the safety of U.S. oceanic airspace, most noted the need to improve operational efficiency in this airspace. Specifically, many selected airlines (10 of 14) reported experiencing operational inefficiencies, including not being able to fly at fuel-efficient altitudes. Many of these airlines (9 of 10) view adopting new minimum separation standards as a way to address these inefficiencies. Other aviation stakeholders, including the unions representing FAA air traffic controllers and commercial airline pilots, also see the need to enhance surveillance and adopt new minimum separation standards to ensure that U.S. oceanic airspace remains efficient as international air traffic grows.
Selected Airlines Identified Expected Benefits from FAA's Implementation of New Minimum Oceanic Separation Standards

Selected airlines identified several benefits they would expect to see from the implementation of new minimum oceanic separation standards, including improved access to fuel-efficient altitudes, redesigned organized track systems, and improved access to user-preferred routes.

Improved Access to Fuel-Efficient Altitudes

Most selected airlines (12 of 14) we surveyed view improved access to fuel-efficient altitudes as a benefit of reduced separation standards. Aircraft flying in controlled airspace cannot change altitudes (e.g., move from 36,000 feet to 38,000 feet) without air traffic control approval. With reduced minimum separation standards, air traffic control could grant more altitude change requests, allowing aircraft to more consistently fly at fuel-efficient altitudes. For example, representatives from one airline told us that an aircraft's ability to climb and descend as needed provides both safety and operational benefits. Other airline representatives also told us that the ability to fly at fuel-efficient altitudes results in savings on fuel costs.

Redesign of Organized Track Systems

Many selected airlines (9 of 14) think FAA should make changes to organized track systems once new minimum separation standards are adopted. These changes include reducing lateral separation between routes or removing the systems entirely to enable aircraft to fly user-preferred routes all the time.

Reduce lateral separation between the routes in organized track systems. Currently, all organized track systems in U.S. oceanic airspace have routes spaced at least 50 nautical miles apart laterally. Several selected airlines (3) told us that they would expect FAA to take advantage of new reduced minimum separation standards by spacing routes more closely together.
For example, representatives from one airline suggested spacing the routes in the West Atlantic Route System 30 nautical miles apart laterally—thus increasing the number of routes from 10 to 19 and significantly increasing airspace capacity. A report prepared by the NextGen Advisory Committee’s Enhanced Surveillance Task Group at the request of FAA also supported taking advantage of the new minimum separation standards enabled by enhanced surveillance to reduce the lateral separation between routes in the Central East Pacific Route System.

Remove all organized track systems. Several selected airlines (5 of 14) also viewed the adoption of new minimum separation standards as a step toward the removal of all organized track systems. Removing all organized track system routes would, by definition, mean aircraft operators could fly user-preferred routes optimized according to their preferences, such as fuel use and flight time. Air navigation service providers in Canada and the United Kingdom, which are responsible for managing the North Atlantic Organized Track System, told us that the use of space-based ADS-B and the proposed separation standards it supports (i.e., 19 nautical miles lateral and 17 nautical miles longitudinal) may lead to the end of published ATS routes for the North Atlantic Organized Track System.

Access to User-Preferred Routes

Many selected airlines indicated that current separation standards inhibit their ability to fly user-preferred routes (10 of 14) as well as their ability to fly the most efficient user-preferred routes (11 of 14). Many selected airlines (9 of 14) view more access to user-preferred routes or the ability to fly more efficient user-preferred routes as an expected benefit of new minimum separation standards.
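The track-spacing arithmetic discussed earlier can be sketched with a deliberately simplified model. The corridor width below is an assumed value, and the formula ignores boundary design and weather-deviation buffers, so it need not reproduce any specific figure cited by airline representatives; it only illustrates how tighter lateral spacing increases route capacity.

```python
def parallel_routes(corridor_width_nm, lateral_spacing_nm):
    """Number of parallel routes that fit across a corridor when routes
    are placed on both edges and spaced evenly (simplified model)."""
    return int(corridor_width_nm // lateral_spacing_nm) + 1

# Assumed 450-nautical-mile-wide corridor (10 routes at 50 nm spacing).
width_nm = 450
print(parallel_routes(width_nm, 50))  # 10 routes at the current 50 nm spacing
print(parallel_routes(width_nm, 30))  # 16 routes at 30 nm spacing
```

Even under this simplified model, tightening the spacing substantially increases the number of routes available in the same airspace; actual counts depend on how a route system’s boundaries are designed.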
Several selected airlines (3 of 14) also told us that they no longer request to fly user-preferred routes in the airspace covered by the Central East Pacific Route System or along the West Atlantic Route System because these requests are denied or they are re-routed during the flight. Selected airlines also noted the importance of understanding the costs, benefits, and timelines associated with the implementation of enhanced surveillance technologies in making their own investment decisions. Specifically, most selected airlines (11 of 14) told us that their decision to use an enhanced surveillance technology is contingent upon how much it will cost them to implement the technology—which can involve equipping aircraft and potentially paying subscription costs for the service—compared to the benefits airlines receive from the technology. For example, representatives from one airline told us that they are interested in the benefits of space-based ADS-B and enhanced ADS-C, but before paying for new or additional surveillance services, they would need evidence that the benefits of these services would outweigh the costs. Specifically, the representatives would like to know to what extent, if at all, enhanced surveillance would result in the actual use of new minimum separation standards, as well as the likelihood that they would be able to fly the flight plan they filed. With this information, the airline representatives said the airline could determine whether it could realize cost savings or additional revenue, such as through adding flights to its schedule. Representatives from another airline told us they would like to know FAA’s plan for enhancing surveillance and enabling new minimum separation standards and to have assurance that FAA will adhere to this plan.
FAA Is Taking Steps to Realize the Benefits of New Minimum Oceanic Separation Standards

According to FAA officials and documents, the agency’s approach addresses some of the efficiency benefits expected by airspace users.

Improved access to fuel-efficient altitudes. FAA officials and air traffic controllers we spoke to expect the adoption of new minimum separation standards to offer efficiency benefits to airspace users through more consistent access to fuel-efficient altitudes. In a business case analysis, FAA estimated that this benefit would result in over $280 million in cost savings for aircraft operators. According to air traffic controllers we spoke to, new minimum separation standards would allow them to more frequently grant aircraft requests to access these altitudes.

Redesign of organized track systems. When considering changes to organized track systems, FAA officials said they must balance benefits to airspace users against the workload demands that would be placed on air traffic controllers. FAA officials told us they are currently redesigning the North Pacific Route System to take advantage of the 23 nautical mile lateral separation standard by reducing the lateral separation between tracks. According to FAA officials, this redesign, which is planned to be complete by 2021, could offer benefits to aircraft operators flying between Japan and Alaska, such as allowing air traffic to move more efficiently and with fewer restrictions on user-preferred routes. FAA officials told us that redesigning the North Pacific Route System is possible because of high FANS equipage rates (over 95 percent) and the absence of disruptive weather patterns. However, according to FAA officials, they do not plan any changes to other organized track systems, such as the Central East Pacific Route System and the West Atlantic Route System, at this time because of aircraft equipage rates and weather patterns.
In such areas, moving the routes closer together would prevent air traffic controllers from approving aircraft requests to deviate due to bad weather.

Access to user-preferred routes. FAA officials differ with selected airline representatives on whether reduced separation standards would lead to increased access to user-preferred routes. According to FAA officials and documents, improved access to user-preferred routes requires an increase in aircraft equipped with FANS, not changes to the airspace. FAA officials also said that airlines can fly user-preferred routes in the Central East Pacific Route System and the West Atlantic Route System but acknowledged that air traffic controllers often cannot grant access to user-preferred routes in these airspaces because of the volume of air traffic or disruptive weather patterns. Given the differing perspectives and limited data on user-preferred routes, in April 2019, FAA decided to engage a third-party research company to study the use of and access to user-preferred routes in U.S. oceanic airspace, to be completed in late 2021. Based on this study, FAA may investigate changes to U.S. airspace to address problems identified.

FAA identified venues to share and coordinate its enhanced surveillance plans, timelines, and expectations with aviation industry stakeholders. As previously noted, FAA’s process for implementing changes to separation standards requires the agency to coordinate with and brief domestic and international aviation industry stakeholders. FAA officials also pointed to other venues where they plan to share information on these plans with airlines, including formal and informal working groups. Given the relatively early stages of the implementation of the 23 nautical mile lateral and 20 nautical mile longitudinal separation standards enabled by enhanced ADS-C, FAA has not yet completed this industry outreach.
The agency plans to coordinate with the aviation industry on the implementation of these separation standards by January 2021.

Selected Airlines and Other Aviation Stakeholders Raised Concerns about Two Possible Consequences of FAA’s Approach to Enhanced Surveillance

International Leadership

Several selected airlines and other aviation stakeholders—representing pilots, commercial airlines, business aircraft operators, and general aviation—noted the importance of FAA taking advantage of the technology advancements and benefits that space-based ADS-B can offer. For example, several selected airlines (5 of 14) view space-based ADS-B as a major advancement in oceanic surveillance. Representatives from one airline told us that FAA risks losing its position as a global leader if it does not move forward with space-based ADS-B and the reduced separation standards it enables. According to FAA officials, the agency is a leading air navigation service provider, as demonstrated by its use of advanced computer systems to apply minimum separation standards when possible, its role in developing ICAO’s new minimum separation standards, and its plans to move forward with space-based ADS-B in a manner that best fits U.S. oceanic airspace needs. FAA officials also pointed to other air navigation service providers, such as the Japan Civil Aviation Bureau, that are not currently planning to use space-based ADS-B.

Harmonization with Adjacent Flight Regions

Several selected airlines and other aviation stakeholders representing commercial and business airlines expressed concern that if FAA does not adopt enhanced surveillance and the minimum separation standards it enables, aircraft transitioning into and out of U.S. oceanic airspace could experience delays.
Representatives of the Canadian and United Kingdom air navigation service providers, which began using space-based ADS-B and the new minimum separation standards it enables in 2019, told us that different separation standards between their oceanic airspace and U.S. oceanic airspace could lead to delays for aircraft as air traffic increases. Specifically, as air traffic grows and air traffic controllers apply separation distances closer to the minimum standards, those flight regions with lower minimum standards will have to space out aircraft crossing into flight regions with higher minimum separation standards before the aircraft cross the flight region boundary. This situation could lead to delays crossing flight region boundaries and less access to efficient routes across oceanic airspace. FAA views other factors, such as the low volume of air traffic in some airspaces, the frequency of disruptive weather patterns, and the relatively low percentage of aircraft equipped with FANS in high volume airspaces, as contributing more to the operational efficiency of oceanic airspace than the use of minimum standards. As previously noted, according to FAA officials and air traffic controllers, the current minimum separation standards for U.S. oceanic airspace (30 nautical miles lateral and longitudinal) are rarely used because of these factors. In addition, FAA officials told us that the difference between the separation standards FAA plans to adopt in U.S. oceanic airspace with enhanced ADS-C (23 nautical miles lateral and 20 nautical miles longitudinal) and the separation standards enabled by space-based ADS-B (19 nautical miles lateral and 17 nautical miles longitudinal) is unlikely to result in delays even as air traffic increases. Other air navigation service providers in the Atlantic and Pacific Oceans are still assessing the costs and benefits of space-based ADS-B.
For example, the Portuguese air navigation service provider told us they are still considering whether to use space-based ADS-B. In the Pacific Ocean, the Japanese air navigation service provider has not decided whether to use space-based ADS-B and therefore will not be adopting the minimum separation standards (19 nautical miles lateral and 17 nautical miles longitudinal) enabled by this technology. While the Japanese air navigation service provider plans to adopt the 23 nautical mile lateral separation standard supported by enhanced ADS-C, it does not plan to adopt the 20 nautical mile longitudinal separation standard at this time.

Agency Comments

We provided a draft of this report to the Department of Transportation (DOT) for review and comment. DOT responded by email and provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) the Federal Aviation Administration’s (FAA) approach to enhancing surveillance capabilities to improve safety and efficiency in U.S. oceanic airspace and (2) selected aviation stakeholders’ perspectives on FAA’s approach to enhancing surveillance. To address both of our objectives, we reviewed FAA and other aviation stakeholders’ documents on the management and organization of U.S. oceanic airspace; the functionality and use of communication, navigation, and surveillance equipment in aircraft flying in U.S.
oceanic airspace; and descriptions of the enhanced surveillance technologies that were being considered by FAA—space-based Automatic Dependent Surveillance-Broadcast (ADS-B) and enhanced Automatic Dependent Surveillance-Contract (ADS-C). Specifically, to understand how U.S. air traffic controllers manage oceanic airspace and the procedures aircraft operators must follow, we reviewed FAA Advisory Circulars on Oceanic and Remote Continental Airspace Operations (91-70B) and Data Link Communications (90-117) and FAA Order JO 7110.65X: Air Traffic Control. We also reviewed a NextGen Advisory Committee report, Enhanced Surveillance Capabilities in FAA Controlled Oceanic Airspace: Operational Need and Added Benefits, that was prepared at the request of FAA on this topic, to understand the industry perspective on the need for enhanced surveillance in U.S. oceanic airspace and the costs and benefits of using space-based ADS-B. To understand how space-based ADS-B and enhanced ADS-C would function, we interviewed representatives from Aireon, which offers the space-based ADS-B service, and Inmarsat, which provides the primary satellite communication network used by the providers of ADS-C services. We also interviewed other aviation industry stakeholders, including trade associations representing aircraft operators and unions representing pilots: Airlines for America, International Air Transport Association, National Air Carrier Association, National Business Aviation Association, Aircraft Owners and Pilots Association, Coalition of Airline Pilots Associations, and Air Line Pilots Association. These organizations were selected based on several factors: their inclusion in prior GAO reports, their role in the aviation industry, and recommendations from other industry stakeholders or FAA. To examine FAA’s approach to enhancing surveillance capabilities in U.S. oceanic airspace, we reviewed FAA documents and interviewed FAA officials.
The documents we reviewed included those related to FAA’s plans to modernize management of oceanic airspace, specifically The Future of the National Airspace System (June 2016) and the National Airspace System Capital Investment Plan FY2018-2022. We also reviewed FAA’s policy guidance on acquisitions and investment documents related to the Advanced Surveillance Enhanced Procedural Separation (ASEPS) program’s planned investment decision on enhanced surveillance. These internal FAA documents included the ASEPS Concept of Operations, the Initial and Final Business Case Analyses, the Final Investment Decision Benefits Basis of Estimate, and a Safety Risk Management Assessment of space-based ADS-B and enhanced ADS-C. In reviewing the business case analysis, we did not independently evaluate the methodology or data sources used. We interviewed FAA officials and program managers who are working on different elements of FAA’s efforts to enhance surveillance in U.S. oceanic airspace. Within the Air Traffic Organization, we interviewed officials from several offices, including the ASEPS program, which managed the evaluation of surveillance technologies; the Oceanic/Offshore Standards and Procedures Branch, which oversees air traffic operations in oceanic airspace, such as facilitating changes to air traffic procedures and systems to enable the use of new technologies and new standards; and the Advanced Technologies and Oceanic Procedures Program Office, which oversees changes to the air traffic control computer system used to manage oceanic air traffic. We also interviewed FAA officials with the Flight Standards Service, which works to improve flight operations, standardization, and aviation safety across U.S. and international airspace systems. In addition, we interviewed the contractor who prepared FAA’s business case analyses.
We interviewed FAA air traffic controllers at the Anchorage, New York, and Oakland air route traffic control centers, which are responsible for managing the flight information regions that comprise U.S. oceanic airspace. In addition, we conducted site visits to the New York and Oakland air route traffic control centers, where we observed air traffic controllers providing oceanic air traffic services. We also interviewed representatives from the National Air Traffic Controllers Association, which is the union representing FAA air traffic controllers. We also interviewed or received written responses from representatives of the air navigation service providers for oceanic airspace adjacent or close to U.S. oceanic airspace—Canada, Japan, Portugal, and the United Kingdom—to understand their plans to enhance surveillance capabilities. To obtain selected aviation stakeholders’ perspectives on FAA’s approach to enhancing surveillance in U.S. oceanic airspace, we selected 10 U.S. and foreign commercial airlines using FAA data from fiscal year 2016 on the annual number of flights by airline in U.S. oceanic flight information regions–Anchorage Arctic and Oceanic, Oakland Oceanic, and New York Oceanic. Specifically, we selected the five airlines in each U.S. oceanic flight information region with the most annual flights. Some airlines were in the top five in more than one flight information region. All 10 airlines selected using this method were passenger airlines. We selected an additional passenger airline because it planned to begin service in U.S. oceanic airspace. We selected three large cargo airlines, based on tons of cargo transported, to ensure that the cargo airlines’ perspective was represented. Of the 14 airlines we selected, we conducted semi-structured interviews with or received written responses to our questions from 13. To obtain additional information from airline operators, we conducted a follow-up survey of the 14 selected airlines. 
The survey included questions on perceptions of the safety of FAA’s management of U.S. oceanic airspace, operational inefficiencies experienced by airlines in U.S. oceanic airspace, the effect of current separation standards on airlines’ use of user-preferred routes, airlines’ expectations of the benefits of reduced separation standards, and airlines’ support for FAA’s planned approach to enhance surveillance in oceanic airspace. We developed the survey based on our objectives and included topics not covered in our initial interviews. We pre-tested our survey with representatives of three of the 14 selected airlines. We conducted the survey between December 2018 and January 2019, and all 14 selected airlines completed the survey. For the complete list of airlines we interviewed and/or surveyed, see table 1. In this report, we use the following conventions in reference to information obtained from the 14 selected airlines: “several” is three to seven, “many” is eight to 10, and “most” is 11 to 13. We conducted this performance audit from March 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Current and Proposed Separation Standards for Oceanic Airspace

Current and Proposed Minimum Separation Standards for Oceanic Airspace

The International Civil Aviation Organization (ICAO) publishes minimum separation standards and related eligibility requirements for oceanic airspace. Air navigation service providers, such as the Federal Aviation Administration (FAA), may adopt these standards or apply standards that are more conservative (e.g., require greater distances between aircraft).
Table 1 lists selected ICAO current and proposed minimum separation standards for oceanic airspace that rely on either Automatic Dependent Surveillance-Contract (ADS-C) or space-based Automatic Dependent Surveillance-Broadcast (ADS-B).

Separation Standards Commonly Applied in U.S. Oceanic Airspace

The lateral and longitudinal separation standards commonly applied by U.S. air traffic controllers in U.S. oceanic airspace—the Anchorage Arctic, Anchorage Oceanic, New York Oceanic, and Oakland Oceanic flight regions—are shown in table 2. Aircraft meeting these communication, navigation, and surveillance equipment and performance requirements are eligible for the separation standards detailed above. However, the actual standards applied by U.S. air traffic controllers depend on several factors, including the number of similarly eligible aircraft and air traffic volume. For example, while an aircraft may be eligible to use the 30 nautical mile lateral separation standard, nearby aircraft may not. When aircraft with differing communication, navigation, and/or surveillance capabilities are flying near one another, air traffic controllers must apply the larger separation standard based on the aircraft with the fewest capabilities. Air traffic controllers consider not just an aircraft’s current location but also where it is going when applying separation standards. Therefore, as aircraft approach the boundaries of U.S. oceanic airspace, U.S. air traffic controllers also consider the separation standards and eligibility requirements of the neighboring flight region. Based on our interviews, U.S. air traffic controllers hand off aircraft to their foreign counterparts (and vice versa) so that aircraft enter a new flight region in conformance with that flight region’s standards.
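Two of the rules described above lend themselves to a short illustration: nearby aircraft with differing capabilities are separated by the largest applicable minimum (the least-capable aircraft governs), and a time-based longitudinal standard translates into a distance at a given ground speed. This is a simplified sketch; the capability-to-standard mapping uses the lateral minimums discussed in this report, the 480-knot ground speed is an assumed value, and real eligibility rules involve additional communication and navigation requirements.

```python
# Illustrative minimum lateral separation standards (nautical miles)
# for the surveillance capabilities discussed in this report.
MIN_LATERAL_NM = {
    "ADS-C": 30,              # current standard for FANS-equipped aircraft
    "enhanced ADS-C": 23,     # standard FAA plans to adopt
    "space-based ADS-B": 19,  # ICAO proposed standard
}

def applicable_lateral_separation(capabilities):
    """Nearby aircraft with differing capabilities must be separated by
    the largest minimum among them (the least-capable aircraft governs)."""
    return max(MIN_LATERAL_NM[c] for c in capabilities)

def time_standard_to_nm(minutes, ground_speed_knots):
    """Convert a time-based longitudinal standard to a distance."""
    return ground_speed_knots * minutes / 60

print(applicable_lateral_separation(["enhanced ADS-C", "enhanced ADS-C"]))  # 23
print(applicable_lateral_separation(["enhanced ADS-C", "ADS-C"]))           # 30
print(time_standard_to_nm(10, 480))  # 80.0 nm at an assumed 480-knot speed
```

At an assumed 480-knot ground speed, a 10-minute longitudinal standard works out to about 80 nautical miles, consistent with the approximate figure for Russian airspace discussed in this appendix.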
For example, air traffic controllers managing aircraft in the Anchorage Oceanic flight region do not typically space aircraft heading towards Russian oceanic airspace (the Magadan Oceanic Flight Information Region) at the minimum separation—even if they are eligible. According to these air traffic controllers, any benefits that aircraft would gain from flying at the minimum separation distance in U.S. airspace would be lost when entering Russian airspace, where the separation standards are 10 minutes longitudinal (approximately 80 nautical miles). Therefore, aircraft must be spaced at least 10 minutes apart longitudinally upon entering Russian airspace. As shown in tables 1 and 2 above, FAA uses the 30 nautical mile longitudinal standard but does not use the 23 nautical mile lateral standard. According to interviews with FAA officials and FAA documentation, FAA plans to adopt and start using the 23 nautical mile lateral standard in U.S. oceanic airspace in 2021 and the 20 nautical mile longitudinal standard in this airspace in 2022. According to FAA officials, the agency does not plan to adopt the other ICAO proposed minimum standards (i.e., 19 or 15 nautical miles lateral and 17 or 14 nautical miles longitudinal) that depend on the use of space-based ADS-B at this time.

Appendix III: Acquisition Steps Completed by the Advanced Surveillance Enhanced Procedural Separation (ASEPS) Program

The Federal Aviation Administration (FAA) Acquisition Management System (AMS) policy outlines a process for evaluating potential investments. This process includes the following milestones:

1. definition of the concept and requirements of a program;
2. investment analysis readiness decision;
3. initial investment decision (initial business case analysis);
4. final investment decision (final business case analysis and implementation plan); and
5. solution implementation (program implementation).
FAA’s corporate-level acquisition decision-making body—the Joint Resources Council (JRC)—approves or disapproves at each AMS milestone. If the JRC approves the final investment decision, this commits FAA to funding the program segment and moving forward with the investment plan. From January 2014 to April 2019, FAA’s Advanced Surveillance Enhanced Procedural Separation (ASEPS) program—tasked with evaluating and comparing the costs and benefits of enhanced Automatic Dependent Surveillance–Contract (ADS-C) and space-based Automatic Dependent Surveillance–Broadcast (ADS-B)—progressed through the following steps in the AMS process to prepare for a final investment decision on enhancing surveillance and enabling new minimum separation standards in U.S. oceanic airspace.

January 2014 (investment analysis readiness decision). JRC approved FAA to begin further analysis of options, including enhanced ADS-C and space-based ADS-B, to support the adoption of reduced separation standards in U.S. oceanic airspace. As part of this analysis, FAA took the following actions.

July 2015. JRC recommended that the ASEPS program continue evaluating the space-based ADS-B option to accommodate user (i.e., airline) preference.

July 2016. FAA tasked the NextGen Advisory Committee with evaluating (1) the need for and benefit of enhanced surveillance capabilities, including associated costs, funding mechanisms, and funding models, and (2) the business case, including insight regarding several operational factors affecting potential benefits from an investment. FAA requested input from the NextGen Advisory Committee to better understand industry’s assessment of (1) the quantified benefit that industry expects the investment will deliver and (2) how much industry would be willing to pay if it was responsible for the investment.
However, according to FAA officials, the report did not address the quantified benefit industry expects the investment will deliver, determine how much industry would be willing to pay if it was responsible for the investment, or conduct an overall assessment of whether the investment is cost beneficial to industry. The report cited not having sufficient information, such as expected benefits and costs, to conduct an analysis of how much industry would be willing to invest.

October 2017 (initial investment decision). The ASEPS program presented the initial business case analysis comparing the two enhanced surveillance options, enhanced ADS-C and space-based ADS-B, to the JRC. Given the negative return on investing in space-based ADS-B, the JRC directed the ASEPS program to evaluate the costs and benefits of space-based ADS-B within sub-sectors of U.S. oceanic flight regions, such as Oakland flight region north and New York east.

March 2018. JRC directed the ASEPS program to proceed with both enhanced surveillance options—enhanced ADS-C and space-based ADS-B—to a final investment decision, which was planned for September 2018.

June 2018. The ASEPS program proposed a strategic shift, which involved delaying the final investment decision on enhanced ADS-C and deferring a final investment decision on space-based ADS-B to allow additional testing on how to use space-based ADS-B in oceanic and domestic airspace. Drivers of this shift in approach included the results of the business case analysis.

September 2018 (strategy decision). JRC approved the ASEPS program’s strategic shift. The ASEPS program asked the JRC to approve its plan to delay a final investment decision on enhanced ADS-C and to defer a final investment decision on space-based ADS-B. The JRC approved the ASEPS program’s proposal to merge the ASEPS enhanced ADS-C investment with a planned final investment decision on upgrades to the Advanced Technology and Oceanic Procedures (ATOP) system.
The JRC also approved the ASEPS program’s proposal to continue studying space-based ADS-B through an operational evaluation in U.S. offshore airspace and longer-term studies concerning using space-based ADS-B for contingency operations and future use in U.S. oceanic airspace.

April 2019 (final investment decision). JRC approved a final investment decision on the ASEPS program’s plan to use enhanced ADS-C to enable new minimum separation standards in U.S. oceanic airspace. The ATOP program management office asked the JRC to approve investments in large-scale ATOP enhancements that include system changes that will enable the implementation of new minimum separation standards (i.e., 23 nautical miles lateral and 20 nautical miles longitudinal) with the use of enhanced ADS-C.

Appendix IV: Costs and Benefits in the Advanced Surveillance Enhanced Procedural Separation (ASEPS) Business Case Analysis

As part of its acquisition process (outlined in app. III), the Federal Aviation Administration (FAA) contracted with a third party to prepare a business case analysis for the Advanced Surveillance Enhanced Procedural Separation (ASEPS) program. This analysis estimated the costs to the agency and aircraft operators, identified safety benefits from enhanced surveillance, and identified and calculated the value of efficiency benefits from applying new minimum separation standards enabled by two technologies: enhanced Automatic Dependent Surveillance-Contract (ADS-C) and space-based Automatic Dependent Surveillance-Broadcast (ADS-B). The analyses described below were developed for FAA’s initial and final investment decisions on the program:

ASEPS Initial Business Case (August 2017). This business case analysis compared the costs and benefits of space-based ADS-B and enhanced ADS-C to a baseline scenario.

ASEPS Final Business Case (August 2018). This business case analysis compared the costs and benefits of enhanced ADS-C to a baseline scenario.
No final business case analysis was prepared for space-based ADS-B since FAA deferred a final investment decision on the use of space-based ADS-B. This appendix discusses the costs and benefits that were included in these business case analyses based on our review of FAA’s business case documentation and interviews with FAA officials.

Description of Baseline, Enhanced ADS-C, and Space-based ADS-B Scenarios

In the initial business case, a baseline scenario and two alternative scenarios were used to evaluate the costs and benefits of using enhanced ADS-C and space-based ADS-B as compared to not using these enhanced surveillance options:

- baseline with no change in current minimum separation standards of 30 nautical miles lateral and 30 nautical miles longitudinal,
- use enhanced ADS-C with minimum separation standards of 23 nautical miles lateral and 23 nautical miles longitudinal, and
- use space-based ADS-B with minimum separation standards of 15 nautical miles lateral and 15 nautical miles longitudinal.

In the final business case analysis, only a baseline scenario and the enhanced ADS-C scenario were included. In the business case analysis, costs and benefits were modeled between 2020 and 2040 in the Atlantic and Pacific Oceans. To model these scenarios, researchers used projections on flight demand and aircraft equipage with the technology required to use these enhanced surveillance services: Future Air Navigation System (FANS) or ADS-B and FANS.

Costs

In order to use enhanced ADS-C and space-based ADS-B to enable new minimum separation standards, FAA and airspace users will need to make certain investments.
Based on our review of FAA’s business case documentation, we found that certain costs were factored into the business case analysis, including:

- upgrades to the Advanced Technologies and Oceanic Procedures (ATOP) system,
- additional ADS-C message traffic, and
- subscription fees for the space-based ADS-B service.

The final business case analysis focused on enhanced ADS-C and included only those costs to FAA and users related to use of this service. The business case analysis focused on the costs of these enhanced surveillance services and did not include the cost of equipping aircraft with FANS and/or ADS-B equipment, which are required to use these enhanced surveillance technologies. According to FAA officials, these costs were not included because aircraft operators are equipping their aircraft for other reasons. Specifically, FAA regulations requiring ADS-B equipment for aircraft flying through U.S. domestic airspace by 2020 mean most aircraft flying in U.S. oceanic airspace will be ADS-B equipped. In addition, mandates from other air navigation service providers requiring FANS will compel most aircraft crossing into non-U.S. oceanic airspace to equip with FANS.

Costs to FAA

The business case considered the costs FAA would incur using the data from these enhanced surveillance technologies, including upgrades to ATOP software.

Costs to Airspace Users

The business case analysis also considered the costs airspace users would face in using these enhanced surveillance technologies. In the business case analysis, FAA assumed that aircraft operators would continue to pay for ADS-C services. Since enhanced ADS-C would involve more messages per flight hour than currently sent via ADS-C, FAA estimated that aircraft operators would see an increase in messaging costs per flight hour, according to our review of FAA documentation. FAA also made assumptions about how much a subscription fee for space-based ADS-B will cost.
As a new service that FAA has not yet contracted for, the actual cost of space-based ADS-B subscription fees is not known. However, initial estimates of the cost per flight hour for space-based ADS-B are much greater than the estimated cost per flight hour of additional ADS-C messages, according to FAA.

Benefits

FAA’s business case analysis considered safety benefits and efficiency benefits. As detailed in the analysis, the size of these benefits depends on the participation of aircraft in each enhanced surveillance service (i.e., enhanced ADS-C and space-based ADS-B). The benefits presented in the business case represent the maximum benefit pool. Specifically, the analysis assumes that all properly equipped aircraft will use space-based ADS-B or enhanced ADS-C services.

Safety Benefits

The business case analysis discussed safety benefits offered by improved surveillance, such as increased air traffic controller situational awareness and improved detection and resolution of aircraft on conflicting flight paths. According to oceanic air traffic controllers we interviewed at the three air route traffic control centers responsible for U.S. oceanic airspace, enhancing surveillance capabilities offers safety benefits, such as improved situational awareness and search and rescue capabilities. Enhanced ADS-C and space-based ADS-B both offer these safety benefits. However, space-based ADS-B also provides information to air traffic controllers to reduce the risk of a vertical collision between aircraft. This safety benefit was monetized by FAA.

Efficiency Benefits

Enhanced surveillance can enable a reduction in the minimum required distance applied between aircraft, with potential efficiency benefits for airspace users. The three efficiency benefits included in FAA’s business case analysis that were monetized are:

Improved accommodation of altitude requests.
According to FAA’s analysis, a primary benefit of reduced separation standards is that aircraft will be more likely to fly at a fuel-efficient altitude. In oceanic airspace, aircraft must make a request to air traffic control to change their altitude. Despite the immensity of oceanic airspace, there is competition for the most fuel-efficient altitudes at certain times of day. For example, according to oceanic air traffic controllers in Oakland, the majority of the air traffic they handle is flights between Hawaii and the U.S. west coast, with most aircraft departing at the same time. Air traffic controllers we spoke with agreed that with enhanced surveillance and reduced separation standards, they should be able to grant more altitude requests and allow more aircraft to fly at optimal altitudes.

Reduced need for aircraft to carry extra fuel. According to FAA’s analysis, aircraft operators typically carry more fuel on an aircraft than needed to fly their planned route. Aircraft carry extra fuel to hedge against the possibility that their actual flight paths will be less fuel-efficient than their planned flight paths. The cost of carrying extra fuel (i.e., the cost to carry) comes from the added weight of carrying extra fuel, weight that causes an aircraft to use more fuel and that reduces an aircraft’s ability to carry revenue-generating cargo. This benefit flows from the improved accommodation of altitude requests, discussed above.

More efficient arrivals and departures at Pacific island airports. According to FAA’s analysis, some Pacific island airports do not have radar surveillance and require U.S. oceanic air traffic controllers in the Oakland air route traffic control center to manage aircraft arrivals and departures. As a result, oceanic separation standards are applied as aircraft arrive and depart these islands’ airports.
FAA’s analysis shows that reducing oceanic separation minimums will allow air traffic controllers to allow more frequent arrivals and departures from these airports. According to this analysis, the benefit of more frequent arrivals and departures is measured in terms of the costs to aircraft operators (an aircraft’s direct operating costs) and the cost to passengers (a passenger’s value of time).

FAA’s business case analysis also includes efficiency benefits of reduced separation that were not monetized, including emissions savings and improved air traffic control accommodation of aircraft requests for descents, routing changes, and speed changes. FAA policy does not currently allow programs to value carbon dioxide emissions avoided for investment decisions. Another efficiency benefit of reduced separation—giving air traffic controllers more flexibility to grant deviations from planned flight paths due to disruptive weather—was quantified and monetized, but not factored into the benefit calculation.

Appendix V: Federal Aviation Administration’s 18 Critical Milestones to Implement a New Separation Standard

To implement new separation standards in U.S. oceanic airspace, the Federal Aviation Administration (FAA) follows a set of 18 critical milestones:

1. Determine the operational need.
2. Evaluate the benefits.
3. Establish an operational concept.
4. Assess the impact on air traffic control.
5. Conduct a safety assessment and record it with the appropriate safety risk management documentation.
6. Determine requirements.
7. Conduct a feasibility and economic analysis.
8. Establish requirements for aircraft and operator approval.
9. Conduct rulemaking.
10. Coordinate with industry and international participants.
11. Coordinate with air traffic control representatives and pilot groups.
12. Complete regional documentation.
13. Acquire approval for aircraft and operators.
14. Develop pilot and air traffic control procedures.
15. Design pilot and air traffic control training materials.
16. Confirm that the system works.
17. Employ the separation standard.
18. Monitor the performance of the system in accordance with safety risk management practices.

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jonathan Carver (Assistant Director), Sarah Arnett (Analyst-in-Charge), Amy Abramowitz, Melissa Bodeau, Samuel Gaffigan, David Hooper, Richard Hung, Amanda Miller, Malika Rice, and Pamela Vines made key contributions to this report.
Why GAO Did This Study

Recent developments in surveillance technologies, which provide an aircraft's location to air traffic controllers, have the potential to improve air traffic operations over the oceans. FAA has explored how to improve surveillance capabilities in U.S. oceanic airspace to take advantage of new international separation standards that could lead to the more efficient use of this airspace. GAO was asked to review planned improvements to aircraft surveillance. This report examines: (1) FAA's approach to enhancing surveillance capabilities to improve safety and efficiency in U.S. oceanic airspace and (2) selected aviation stakeholders' perspectives on FAA's approach. GAO reviewed documents related to FAA's planned investment in enhanced oceanic surveillance and interviewed FAA officials working on this effort. Interviews included those with the Air Traffic Organization and air traffic controllers who manage U.S. oceanic airspace. GAO surveyed representatives of 14 commercial airlines, including 11 U.S. and foreign passenger airlines, which were selected based on factors such as flight volume; and 3 U.S. cargo airlines, which were selected based on tons of cargo shipped. GAO also interviewed other aviation stakeholders, including trade associations, unions representing pilots, and foreign air navigation service providers that manage airspace adjacent to U.S. oceanic airspace.

What GAO Found

The Federal Aviation Administration (FAA) evaluated two aircraft surveillance technologies that would allow aircraft to safely fly in closer proximity while in oceanic airspace. Based on its evaluation, FAA committed to using one in the near term and to continue to study another for future use. Specifically, in April 2019, FAA committed to implement by 2022 new international standards that allow reduced distances between aircraft, called minimum separation standards.
These reduced distances would be enabled by a surveillance technology known as enhanced Automatic Dependent Surveillance-Contract (ADS-C). FAA also decided to continue studying the use of another enhanced surveillance technology, known as space-based Automatic Dependent Surveillance-Broadcast (ADS-B), to further improve surveillance in U.S. airspace. Both technologies offer increased frequency in reporting of an aircraft's location, which enhances safety, and can support new minimum separation standards. FAA decided to proceed with enhanced ADS-C in the near term because the efficiency benefits to airspace users exceeded the costs of more frequent location reporting and air traffic control system upgrades by 2 to 1. In contrast, FAA determined that the costs of using space-based ADS-B in U.S. oceanic airspace outweigh the efficiency benefits by 6 to 1. FAA officials added that operational challenges to using space-based ADS-B to manage air traffic in U.S. oceanic airspace have not yet been resolved. FAA plans to continue studying potential uses for space-based ADS-B in U.S. airspace to determine if benefits can outweigh the costs (see figure).

GAO found that most selected airlines (11 of 14) support FAA's overall approach to enhance oceanic surveillance. Selected airlines also said they expect the new minimum separation standards to improve access to more direct and fuel-efficient routes. FAA is taking steps to provide these benefits by restructuring routes in one area of U.S. oceanic airspace and by applying new minimum standards to give aircraft better access to fuel-efficient altitudes. According to FAA officials, however, additional benefits, such as redesigning other U.S. oceanic airspace, expected by selected airlines are limited by (1) relatively low rates of aircraft equipage with the technology that enables reduced separation and (2) the frequency of disruptive weather patterns in parts of U.S. oceanic airspace.
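The 2-to-1 and 6-to-1 comparisons FAA reported are, in essence, ratios of discounted benefits to discounted costs over the 2020-2040 analysis period. The sketch below illustrates that kind of benefit-cost ratio calculation. The annual dollar amounts and the 7 percent discount rate are hypothetical placeholders, chosen only so that the resulting ratios echo the reported relationships; they are not FAA's actual model or estimates.

```python
def present_value(annual_amount, rate, years):
    """Discount a constant annual amount over an analysis horizon."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

DISCOUNT_RATE = 0.07   # hypothetical; federal analyses often use OMB-specified rates
HORIZON_YEARS = 21     # 2020 through 2040

# Hypothetical annual benefits and costs, in millions of dollars per year.
alternatives = {
    "enhanced ADS-C":    {"benefits": 20.0, "costs": 10.0},
    "space-based ADS-B": {"benefits": 15.0, "costs": 90.0},
}

for name, amounts in alternatives.items():
    pv_benefits = present_value(amounts["benefits"], DISCOUNT_RATE, HORIZON_YEARS)
    pv_costs = present_value(amounts["costs"], DISCOUNT_RATE, HORIZON_YEARS)
    print(f"{name}: benefit-cost ratio = {pv_benefits / pv_costs:.2f}")
```

Because the annual flows in this sketch are constant, the discount factors cancel and the ratios reduce to 20/10 = 2.0 and 15/90 ≈ 0.17 (that is, costs exceeding benefits by roughly 6 to 1); in FAA's actual analysis, benefits and costs would vary year by year with traffic and equipage projections.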
GAO-20-123
Background

Federal agencies are dependent on information systems and electronic data to process, maintain, and report essential information. Virtually all federal operations are supported by computer systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Federal agencies exchange personally identifiable and other sensitive information with state agencies in the implementation of key federal and state programs. The security of systems and data involved in this exchange of information is vital to public confidence and the nation’s safety, prosperity, and well-being. Since federal agencies face computerized (cyber) threats that continue to grow in number and sophistication, it is imperative that such information is protected. In recognition of this growing threat, we designated information security as a government-wide high-risk area in 1997. We further expanded this area in 2015 to include protecting the privacy of personally identifiable information.

Federal Law and Policy Set Roles and Responsibilities for Protecting Federal Systems and Managing Cybersecurity Risk

Several federal laws and policies establish requirements for protecting federal systems and managing cybersecurity risks. Specifically, FISMA is intended to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets, as well as the effective oversight of information security risks. The act requires each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency, including those provided or managed by another entity. FISMA also assigns government-wide responsibilities to key agencies.
For example, OMB is responsible for developing and overseeing implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. NIST is also responsible for developing standards for categorizing information and information systems, security requirements for information and systems, and guidelines for detection and handling of security incidents. For example, NIST Special Publication 800-53 provides guidance to agencies on the selection and implementation of information security and privacy controls for systems. Further, OMB Circular A-130, Managing Information as a Strategic Resource, establishes minimum requirements for federal information security programs and assigns federal agency responsibilities for the security of information and information systems. It requires agencies to implement a risk management framework to guide and inform the categorization of federal information and information systems; the selection, implementation, and assessment of security and privacy controls; the authorization of information systems and common controls; and the continuous monitoring of information systems. Circular A-130 also requires federal agencies to provide oversight of nonfederal entities—such as state agencies—that use or operate federal information systems, as well as nonfederal entities’ information systems that collect or maintain federal information. In doing so, federal agencies are to ensure that security and privacy controls for such information systems are effectively implemented and comply with NIST standards and guidelines and agency requirements. Federal agencies may share data with one or more individual component agencies within a state, such as agencies that execute a state’s tax administration, law enforcement, or human services functions. 
The state’s responsibility for protecting data shared by federal agencies may reside within an individual state agency or it may be a shared responsibility with the state’s chief information officer and CISO. For example, a state CISO may help to manage the protections over centralized information technology (IT) resources that store, process, and transmit federal data for multiple component agencies within the state. To protect federal data that are shared with state agencies in the implementation of key federal and state programs, federal agencies have developed cybersecurity requirements for state agencies to follow when accessing, storing, and transmitting federal data. Federal agencies are to obtain assurance that state agencies’ security and privacy controls are effectively implemented through independent evaluations. These evaluations include tests and assessments of the effectiveness of state agencies’ information security policies, procedures, and practices. Such assessments are important inputs to decisions by federal officials to authorize or reauthorize a state agency’s use of information systems that create, collect, use, process, store, maintain, disseminate, disclose, and dispose of federal information.

Selected Federal Agencies Have Established Policies and Compliance Assessment Programs to Protect Data Shared with State Agencies

To protect federal data that are shared with state agencies, each of the federal agencies in our review has established its own policies that articulate cybersecurity requirements, as well as related compliance assessment programs, based in part on guidance from NIST. Table 1 identifies the types of data that the four selected federal agencies share with state agencies and the cybersecurity policies that they have established to protect that data.
Selected Federal Agencies Had a Significant Number of Variances in Cybersecurity Requirements for State Agencies

The selected federal agencies had a significant number of variances in the cybersecurity requirements that they had established for protecting data exchanged with state agencies. Specifically, our review identified hundreds of instances in which one of the four agencies either had (1) included a requirement in its cybersecurity policy that was not a requirement of the other three agencies (unique requirement); (2) established a requirement with specific, organization-defined technical thresholds that differed from at least one of the other three agencies for a related control (conflicting parameters); or (3) did not fully address in its requirements the guidelines from NIST for associated controls and control enhancements (did not fully address NIST guidelines). Table 2 summarizes the total number of requirements that each agency had included in its security policy and the extent to which the four agencies’ requirements varied from each other and from the NIST guidance.

Selected Federal Agencies Had Unique Cybersecurity Requirements for State Agencies

Collectively, the four selected federal agencies’ policies included 86 unique cybersecurity requirements for state agencies with which they exchange data. Specifically, CMS’s policy included 54 requirements that the other three agencies did not include. FBI’s CJIS’s policy included 24 unique requirements, IRS’s policy included five unique requirements, and SSA’s policy included three unique requirements. For example, CMS’s security policy included a requirement that state agencies review their organization-wide information security program plan annually; however, the other three agencies did not have such a requirement in their security policies.
As another example, IRS had a requirement for state agencies to employ automated mechanisms to alert security personnel of inappropriate activities, while the other agencies did not have this requirement. Because each agency is addressing different legal requirements and risk management approaches for protecting information shared with states, certain requirements that are unique to an agency may be necessary. Nevertheless, agencies need to ensure that such requirements are necessary by documenting their decisions during the control selection process. Table 3 provides examples of the unique requirements that each agency included in its cybersecurity policies.

Selected Federal Agencies Had Conflicting Parameters in Their Cybersecurity Requirements for State Agencies

In total, the four federal agencies had identified 390 requirements for state agencies in their policies where the parameters conflicted with those of at least one of the other federal agencies. Across the four agencies, CMS had the largest number of requirements that had conflicting parameters, with 139 such requirements. This was followed by IRS with 131 requirements, FBI’s CJIS with 72 requirements, and SSA with 48 requirements with conflicting parameters. For example, each of the selected agencies identified a different time frame for the retention of audit logs related to audited events. As another example, CMS required state agencies to annually review and update their access control policies, whereas IRS required this review every 3 years. FBI’s CJIS and SSA did not have this requirement in their policies. Table 4 provides additional examples of cybersecurity requirements for state agencies that the four federal agencies identified in their policies where the parameters conflicted with those of at least one of the other federal agencies.
Selected Federal Agencies Did Not Always Fully Address NIST Guidelines in Their Cybersecurity Requirements for State Agencies

The four selected federal agencies did not always fully address guidelines in NIST Special Publication 800-53 (Revision 4) when establishing cybersecurity requirements for related controls, leading to additional differences among the four agencies’ cybersecurity policies. In total, the four agencies did not fully address guidelines from NIST in 141 instances. FBI’s CJIS had the most variances, with 63 requirements that did not fully address NIST guidelines, followed by SSA with 30 variances, CMS with 26 variances, and IRS with 22 variances. For example, FBI’s CJIS’s requirement did not identify the time period to retain individual training records, as called for by NIST guidance. In addition, SSA did not define how often agencies should assess the security controls in the information system and its environment of operation. Table 5 provides examples of the cybersecurity requirements for state agencies in which selected federal agencies did not fully address NIST guidelines.

Majority of State CISOs Reported Moderate to Very Great Variation in Selected Federal Agencies’ Cybersecurity Requirements and Increased Impacts from the Variances

The perspectives of state CISOs who responded to our survey reflected the variation we found among the selected federal agencies’ cybersecurity requirements. The majority (at least 29 out of 50) of the state CISOs that responded to our survey question regarding the ways in which federal cybersecurity requirements vary and the extent of the variation reported moderate to very great variation in the selected federal agencies’ cybersecurity requirements.
Specifically, of the 50 state CISOs that responded to this question, 34 reported that the federal agencies had moderate to very great variation with respect to unique requirements, 38 reported that the federal agencies had moderate to very great variation due to conflicting parameters that were established, and 29 reported that the federal agencies had moderate to very great variation with respect to addressing NIST guidelines for security controls and control enhancements. Figure 1 represents state CISOs’ perspectives on the extent of variation among selected federal cybersecurity requirements. State agency officials that must comply with multiple federal agencies’ cybersecurity requirements (and related compliance assessments) viewed variances as problematic and burdensome. For example, in responding to a survey question about challenges or impacts that state officials experienced regarding federal requirements and assessment processes, an official from one state agency explained that addressing variances in cybersecurity requirements reduced the ability of state officials to focus on their primary mission of securing data across their state enterprise. In response to the same survey question, another state official said that addressing the variances in federal agencies’ cybersecurity requirements increased the complexity of automating the state’s monitoring and reporting processes. In addition, the same state official commented that staff were burdened by reports and reviews to ensure that the full range of federal agencies’ requirements were met. In responding to our survey, 46 state CISOs reported the extent to which they had experienced a very great, great, moderate, slight, or no increase in calendar time; staff hours; and costs of acquiring additional materials, software, and equipment to address variances in selected federal agencies’ cybersecurity requirements. 
The majority (at least 34 out of 46) of the state CISOs that responded to this question in our survey reported moderate to very great increases in these types of impacts. Figure 2 represents the extent of impacts that state CISOs reported as a result of variances in selected federal cybersecurity requirements.

Selected Federal Agencies’ Insufficient Coordination Contributed to Variances in Cybersecurity Requirements for State Agencies

OMB Circular A-130 requires federal agencies to coordinate with nonfederal entities, such as state agencies, as well as other federal agencies as appropriate, to ensure that security and privacy requirements pertaining to these nonfederal entities are consistent to the greatest extent possible. In addition, GAO and NIST have identified practices that can help federal agencies limit potential variation in security control selection and requirements, such as coordinating to develop compatible policies, procedures, and other means to operate across agency boundaries. For example, according to NIST, agencies can establish a tailored set of baseline requirements to satisfy common security objectives. In addition, by applying practices recommended by GAO for enhancing and sustaining coordination and collaboration, federal agencies could work towards establishing shared requirements with consistent terminology and parameters. However, the four selected federal agencies have not ensured that their cybersecurity requirements for state agencies are consistent to the maximum extent possible through coordination with each other. Officials from IRS, FBI, and SSA acknowledged that they had not coordinated with other federal agencies in establishing their current cybersecurity requirements for state agencies. The agencies had not coordinated, in part, because they have prioritized addressing agency-specific responsibilities from relevant laws and agency policies as well as the needs of relevant communities of interest.
CMS officials stated that the agency coordinated with other federal agencies in 2015 when CMS originally established requirements for its security policy, the Minimum Acceptable Risk Standards for Exchanges Document Suite 2.0, Volume III: Minimum Acceptable Risk Standards for Exchanges. CMS officials noted that the agency added controls that IRS and SSA deemed essential to protecting data for which these agencies were responsible. Nevertheless, we found variances between CMS’s requirements and those established by IRS and SSA. Further, CMS last updated its security policy in September 2015 and IRS, SSA, and FBI’s CJIS have each since updated their policies. In addition to the insufficient coordination, the selected federal agencies identified two additional explanations for variances in their cybersecurity requirements for state agencies: (1) agencies’ determination that selected requirements were necessary and therefore, that resulting variances are warranted and (2) agencies’ requirements review processes that resulted in deviations from NIST guidance. Each of the selected agencies noted that they determined the unique controls and competing parameters in their requirements were necessary and warranted. For example, SSA noted that it has been conducting data exchanges with states since the late 1970s, predating NIST Special Publication 800-53. According to SSA officials, the agency’s security requirements retained certain legacy language that state agencies were already familiar with to reduce disruption to them. IRS officials also noted that their security controls incorporate disclosure restrictions from the Internal Revenue Code and internal IRS directives. Agency processes for reviewing their cybersecurity requirements have resulted in deviations from NIST guidance. For example, FBI’s CJIS officials stated that they started with NIST terminology when developing their policy. 
However, CJIS’s Advisory Policy Board—which recommends the final CJIS policy to the FBI Director—suggested modifications to the wording of requirements during subsequent reviews. As another example, CMS noted that during the review process for its requirements, in certain instances it deviated from NIST guidance to use terminology that would be more familiar to state agency users. Federal agencies may have legitimate reasons for having variances in their cybersecurity requirements. For instance, agencies may need to apply different information security controls, a greater number of controls, or more stringent technical parameters to protect data for which they are responsible in a manner consistent with various security requirements originating in federal laws, executive orders, directives, policies, regulations, standards, or guidelines as well as the agency’s risk assessments. However, according to NIST, organizations should document the relevant decisions taken during the control selection process and provide a sound rationale for those decisions that is based on agency mission and business needs. Both FBI’s CJIS and IRS had documented their rationale for unique requirements. SSA stated that its controls were developed before NIST standards were created and that it has mapped its current controls to NIST. However, SSA was unable to produce this documentation. CMS officials noted that the rationale for the requirements identified in the agency’s Minimum Acceptable Risk Standards for Exchanges security policy was documented in CMS’s Acceptable Risk Standards. However, the Acceptable Risk Standards did not include all requirements that were included in CMS’s security policy.
For example, CMS’s requirements for organizations to review and re-evaluate privileges at least quarterly and for the information system to allocate resources by priority and/or quota were included in the security policy without a defined rationale and were also not included in CMS’s Acceptable Risk Standards. While agencies have identified various reasons for not coordinating on their cybersecurity requirements for state agencies, OMB has not taken steps to evaluate whether agencies are coordinating. OMB officials acknowledged that they could encourage additional coordination among the selected agencies, but said that it is ultimately up to the agencies to set their requirements and determine how best to assess states’ compliance with those requirements. However, without OMB’s involvement and encouragement that federal agencies collaborate to make their cybersecurity requirements for state agencies consistent to the greatest extent possible, federal agencies are less likely to prioritize such efforts. The selected federal agencies will soon have an opportunity to harmonize to the extent possible their requirements as they revisit and potentially update their existing security policies based on anticipated changes in NIST guidance. Until these agencies coordinate, where feasible, to address the variances in their cybersecurity requirements, officials from state agencies may continue to experience cost, time, and other burdens resulting from these variances. Further, without documentation of the rationale for having requirements that are unique or parameters that conflict in comparison to other agencies, it will be more difficult for these agencies to achieve consistent requirements. 
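The three categories of variance counted in this report (unique requirements, conflicting parameters, and requirements that did not fully address NIST guidelines) can be illustrated with a small classification sketch. The agency names, control identifiers, and parameter values below are invented for illustration and do not reflect any agency's actual policy; the classification rules are a simplified reading of the categories described above.

```python
def classify(policies):
    """Classify each control across agency policies into simplified
    variance categories. A parameter of None stands in for a requirement
    that omits an organization-defined value called for by NIST guidance
    (an assumption made for this sketch)."""
    all_controls = set().union(*(p.keys() for p in policies.values()))
    results = {}
    for control in sorted(all_controls):
        adopters = {a: p[control] for a, p in policies.items() if control in p}
        if None in adopters.values():
            kind = "did not fully address NIST guidelines"
        elif len(adopters) == 1:
            kind = "unique requirement"
        elif len(set(adopters.values())) > 1:
            kind = "conflicting parameters"
        else:
            kind = "consistent"
        results[control] = kind
    return results

# Hypothetical catalogs: control ID -> organization-defined parameter.
policies = {
    "Agency A": {"AC-1": "annually", "AU-11": "90 days", "PM-1": "annually"},
    "Agency B": {"AC-1": "every 3 years", "AU-11": "7 years"},
    "Agency C": {"AU-11": "3 years"},
    "Agency D": {"AU-11": "1 year", "AT-4": None},  # AT-4 omits a defined value
}

for control, kind in classify(policies).items():
    print(f"{control}: {kind}")
```

In this sketch, AC-1 and AU-11 surface as conflicting parameters, PM-1 as a unique requirement, and AT-4 as not fully addressing NIST guidelines; a real comparison would also need to account for control enhancements and requirements expressed in non-NIST terminology.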
Selected Federal Agencies’ Policies Addressed a Majority of Activities for Coordinating with State Agencies When Assessing Cybersecurity, but Did Not Address Coordinating with Each Other

As previously discussed, OMB Circular A-130 requires federal agencies to assess whether state agencies have implemented effective security and privacy controls on information systems that create, collect, use, process, store, maintain, disseminate, disclose, or dispose of federal information. The circular also encourages federal agencies to coordinate on their approaches to authorizing the use of such systems whenever practicable. For example, the circular notes that multiple agencies are encouraged to jointly plan and execute tasks in NIST’s Risk Management Framework, which includes conducting security assessments. According to the circular, agencies can also leverage information generated by another agency based on the need to use the same information resources (e.g., information system or services provided by the system) by choosing to accept some or all of the information in an existing authorization package, including completed security assessments. As previously stated, NIST and GAO have recommended practices that federal agencies can implement to help with their coordination on cybersecurity assessments, such as assessments of state agencies’ compliance with federal cybersecurity requirements. Those practices fall in two broad areas: (1) coordination with state agencies when assessing states’ cybersecurity and (2) coordination with other federal agencies on the assessment of state agencies’ cybersecurity.
In addition, based on the guidance from NIST that pertained to coordination on assessments of cybersecurity and practices recommended by GAO for enhancing coordination among federal agencies, four supporting activities are common to each of these two areas of federal agencies’ coordination on cybersecurity assessments:

- assessment schedules and time frames;
- meeting and document requests;
- security test plans—including testing techniques, location, and tools; and
- the use of findings from prior assessments.

With regard to coordinating with state agencies when assessing their cybersecurity, two of the selected federal agencies—CMS and IRS—had policies that addressed all four of the activities supporting this area of coordination. The two other agencies—FBI’s CJIS and SSA—had policies that addressed some, but not all, of the supporting activities for such coordination. With regard to coordinating with other federal agencies on the assessment of state agencies’ cybersecurity, none of the four federal agencies had policies that addressed the activities supporting this area of coordination. Table 6 summarizes the extent to which selected agencies established policies for coordinating with state agencies and other federal agencies when assessing cybersecurity. See appendix II for details on the extent to which selected agencies addressed individual activities supporting the two areas of coordination.

Federal Agencies’ Policies Addressed a Majority of Activities for Coordinating with State Agencies When Assessing Cybersecurity

Each of the selected federal agencies addressed at least three of the four activities for coordinating with state agencies when assessing cybersecurity. CMS and IRS fully established policies for coordinating with state agencies by addressing all of the activities supporting such coordination. However, FBI’s CJIS and SSA partially established policies for coordinating with state agencies by addressing some—but not all—of the supporting activities.
Specifically, FBI’s CJIS and SSA fully addressed three of the activities: coordinating (1) assessment schedules and time frames, (2) meeting and document requests, and (3) security test plans. For example, FBI’s CJIS policy included instructions for providing the date and time of assessment along with a schedule for the assessment process. Further, the policy stated that assessors should lay out the meetings that need to occur and documentation that state agencies need to provide CJIS, including specifics about the state’s network. SSA’s policy laid out each step of the assessment process, including the anticipated time frames. Further, SSA’s policy identified certain meetings that should be held during the process and documentation to be provided before the assessment. However, FBI’s CJIS and SSA did not fully establish policies for coordinating with state agencies because they did not address the activity associated with coordinating the use of findings from prior assessments. Specifically, while these two agencies’ policies addressed using findings from prior assessments conducted by their individual agency, their policies did not address whether or how assessors should use findings from other security assessments conducted within the state. Officials from FBI stated that in practice they consider findings from independent security assessments conducted within a state, but had not documented this practice in their assessment policies due to the limited instances in which this information is available. Officials from SSA believed that their policy addressed how its assessors were to consider findings from other security assessments that are conducted within a state. However, based on our review, SSA’s policy did not address this information.
Federal Agencies’ Policies Did Not Address Activities for Coordinating with Other Federal Agencies When Assessing State Agencies’ Cybersecurity

None of the four agencies established policies for coordinating with other federal agencies when assessing state agencies’ cybersecurity. Officials from the four selected agencies reported that this is because their priority is to assess compliance with their own security requirements and they are not comfortable relying solely on other federal agencies’ assessments. Officials from each of the selected agencies provided additional perspectives on coordination with other federal agencies. Specifically:

- CMS officials stated that while they do not coordinate with other federal agencies in conducting compliance assessments, they did coordinate with other federal agencies when establishing their cybersecurity requirements. In addition, CMS officials stated that they do not conduct assessments of compliance with their security policy and that states engage contractors to perform the assessments. Therefore, CMS officials believed that the agency does not have a need to coordinate with other federal agencies. However, CMS has not provided the states with additional, detailed guidance that they could use, where feasible, to inform their assessment contractors about coordination with other federal agencies. Such guidance could encourage additional coordination with other federal agencies, such as planning the assessment, leveraging related efforts by other federal agencies, and sharing the state’s documentation and findings with other federal agencies, as appropriate. By not providing such guidance, CMS is not maximizing coordination with other federal agencies to the greatest extent practicable.
- FBI’s CJIS officials stated that they schedule their security assessments 6 months ahead of time, but would be willing to reschedule the assessment if the state was unavailable due to another assessment being conducted. In addition, CJIS officials noted that while they test for security controls that other federal agencies are testing, they are not assessing the same information as other agencies because the FBI specifically requires criminal justice data to be logically separated from other data. Further, CJIS officials stated their assessment results and audit findings cannot be shared and that other federal agencies would need to refer to a state’s criminal justice agency for such information.
- IRS officials stated that they previously attempted to review assessment findings from other agencies, but since IRS was not looking at the same systems, the findings were not helpful. IRS officials stated that they would be willing to review recent assessments conducted by other federal agencies to see if information can be leveraged.
- SSA officials noted that it is their practice to reschedule an assessment if another federal agency has an assessment scheduled around the same time, but acknowledged that this was not in their policies. Further, according to SSA officials, they do not currently examine or consider findings from independent security assessments conducted within a state.

While agencies cited various reasons for not coordinating when assessing state agencies’ cybersecurity, taking steps to coordinate, such as leveraging other agencies’ assessments or conducting joint assessments whenever practicable, would be consistent with practices encouraged by OMB. However, OMB has not taken steps to ensure that they do so. OMB officials noted that they believed several of the agencies had begun to coordinate on their own and acknowledged that they could take additional steps to encourage and promote coordination among the agencies. OMB officials further noted that it is ultimately the responsibility of the agencies to determine how they conduct their assessments.
Nevertheless, federal agencies may be placing unnecessary burdens on state officials’ time and resources in responding to similar requests and inquiries. Several state CISOs told us that they have identified various instances in which multiple federal agencies’ lack of coordination resulted in requests for similar documentation and interviews with IT officials. For example, according to three state CISOs, the selected federal agencies have asked them to address similar questions regarding physical security controls, network configurations, and password policies in separate interviews. Three state CISOs also noted that they have provided to multiple federal agencies documentation—such as network diagrams and incident response policies—related to the same IT environment and have facilitated multiple federal assessments of the same physical environment.

State CISOs Identified Opportunities for Federal Agencies to Further Coordinate and Impacts Related to Federal Cybersecurity Assessments

State CISOs identified additional opportunities for further coordination among federal agencies and impacts in dealing with federal cybersecurity assessments. For instance, in response to our survey, officials from 16 states commented that the four federal agencies in our review could leverage additional opportunities to coordinate on their assessments within their states, particularly where the states had a consolidated data center or other centrally managed IT infrastructure. Further, four state CISOs noted that federal agencies could potentially leverage security compliance assessments and internal audits performed at the state or local level because they included reviews of controls from NIST Special Publication 800-53.
In addition, 11 states mentioned “duplication” in their response to a survey question about challenges or impacts related to federal cybersecurity requirements and assessment processes, while two states mentioned “overlap,” and one state mentioned “fragmentation.” For example:

- One state identified that assessors from different federal agencies generally ask for the same items from the state, requiring state agency officials to reproduce the same response.
- Another state identified that multiple federal agencies have been assessing the same state agencies with different scope, tools, and documentation requests.
- In another example, a state concluded that federal assessors’ interpretation of many technical controls was inconsistent and varied from one federal agency to another and across audit cycles. The state noted that there were opportunities for the federal government to streamline how each agency applied different interpretations.

State CISOs also identified impacts on their time and costs from responding to federal agencies’ assessments. Seventeen respondents reported impacts to their time and six reported cost impacts. Further, in responding to questions in our survey and an in-depth interview, state CISOs provided additional insights regarding impacts. For example:

- One state mentioned that, due to the varying requirements from the selected federal agencies, the state is required to stand up multiple virtual and physical environments. In doing so, the state is required to purchase additional software and hardware to maintain such environments.
- Another state explained that staff manage various state agencies’ data in one central location and spend a considerable amount of time responding to each of the four selected federal agencies’ assessments.
Twenty-four states estimated that the four selected federal agencies conducted at least 188 assessments between calendar years 2016 and 2018 and that the states’ best estimates of the total expenditures associated with those assessments ranged from $43.8 million to $67 million. Of 164 instances where states reported an average time spent on assessments by one of the four selected agencies between calendar years 2016 and 2018, in 97 instances the average time expenditure per assessment was reported to be 301 staff hours or more, and in 67 instances it was less than 301 staff hours. Additionally, there were 34 instances in which the state did not know what its average staff hour expenditure was for a particular agency’s assessment or said that it was not applicable to the state. Figure 3 presents the responses from 50 state CISOs on the average state staff hours expended per assessment across the four selected federal agencies. While state agencies could benefit from additional coordination among federal agencies in conducting their security assessments, increasing coordination may also save the federal government money. For instance, federal agencies may be able to reduce the number of assessments or the scope of the assessment conducted by each agency, the amount of time multiple federal agencies must spend reviewing state systems, and contractor services acquired to assist in performing assessments. The selected federal agencies reported spending close to $45 million in fiscal years 2016 through 2018 on assessments of state agencies’ cybersecurity. Figure 4, an interactive figure, provides the selected federal agencies’ reported spending for fiscal years 2016 through 2018 for assessing state compliance with cybersecurity requirements. (See appendix III for the cost breakdown of selected federal agencies’ reported spending.)
Until FBI’s CJIS and SSA fully develop policies for coordinating with state agencies and all of the selected agencies develop policies for coordinating with other federal agencies when assessing state agencies’ cybersecurity, as appropriate, they run the risk of spending more than necessary to assess the security of state systems and networks. Further, federal agencies may be placing unnecessary burdens on state officials’ time and resources in responding to overlapping or duplicative requests and inquiries, retesting controls that have already been evaluated, or reporting similar findings multiple times throughout a state. In addition, until OMB takes steps to ensure agencies coordinate on assessments of state agencies’ cybersecurity, it will not have reasonable assurance federal agencies are leveraging compatible assessments where practicable.

Conclusions

Given that the federal government exchanges personally identifiable and other sensitive information with state agencies, it is critical to have effective coordination across the federal and state agencies to protect this information. While the selected federal agencies have taken steps to secure information exchanged between federal and state agencies, they have not coordinated with each other in establishing cybersecurity requirements for state agencies. The selected agencies’ insufficient coordination has contributed to variances in the agencies’ control selection, terminology, and technical parameters across hundreds of cybersecurity requirements imposed on states. Further, OMB requires agencies to coordinate to ensure consistency among cybersecurity requirements for state entities, but it has not ensured that agencies have done so. While federal agencies may have legitimate reasons for having variances in their cybersecurity requirements, states’ compliance with multiple federal agencies’ cybersecurity requirements has resulted in increased costs.
Coordinating to address variances in federal agencies’ cybersecurity requirements could help to significantly reduce these costs. The selected agencies will soon have an opportunity to coordinate on any planned updates of their security policies that affect state agencies when reviewing their security policies against expected revisions in NIST guidance. Accordingly, it is important that OMB ensures that selected federal agencies coordinate with state agencies and each other to establish cybersecurity requirements that are consistent to the greatest extent possible. Selected federal agencies had partially established policies to coordinate with state agencies when assessing their cybersecurity, but did not have policies for coordinating with other federal agencies. Federal agencies have not been coordinating with each other on assessments of state agencies’ cybersecurity, in part, because this has not been a priority for them. Further, federal agencies have been less likely to coordinate in their assessments of state agencies’ cybersecurity without additional involvement from OMB. The lack of coordination among federal agencies has been a concern among state CISOs who described instances of duplication and overlap in their cybersecurity assessments. As with the cybersecurity requirements, coordinating with both state and federal agencies when assessing state agencies’ cybersecurity may help to minimize additional cost and time impacts on state agencies, and reduce federal resources associated with implementing state-based cybersecurity assessments. Until OMB takes steps to ensure federal agencies coordinate on assessments of state agencies’ cybersecurity, it will not have reasonable assurance federal agencies are leveraging compatible assessments to the greatest extent possible.

Recommendations for Executive Action

We are making a total of 12 recommendations, including two to OMB, two to CMS, three to FBI, two to IRS, and three to SSA.
The Director of OMB should ensure that CMS, FBI, IRS, and SSA are collaborating on their cybersecurity requirements pertaining to state agencies to the greatest extent possible and direct further coordination where needed. (Recommendation 1)

The Director of OMB should take steps to ensure that CMS, FBI, IRS, and SSA coordinate, where feasible, on assessments of state agencies’ cybersecurity, which may include steps such as leveraging other agencies’ security assessments or conducting assessments jointly. (Recommendation 2)

The Administrator of CMS should, in collaboration with OMB, solicit input from FBI, IRS, SSA, and state agency stakeholders on revisions to its security policy to ensure that cybersecurity requirements for state agencies are consistent with other federal agencies and NIST guidance to the greatest extent possible and document CMS’s rationale for maintaining any requirements variances. (Recommendation 3)

The Administrator of CMS should revise its assessment policies to maximize coordination with other federal agencies to the greatest extent practicable. (Recommendation 4)

The FBI Director should, in collaboration with OMB, solicit input from CMS, IRS, SSA, and state agency stakeholders on revisions to its security policy to ensure that cybersecurity requirements for state agencies are consistent with other federal agencies and NIST guidance to the greatest extent possible. (Recommendation 5)

The FBI Director should fully develop policies for coordinating with state agencies on the use of prior findings from relevant cybersecurity assessments conducted by other organizations. (Recommendation 6)

The FBI Director should revise its assessment policies to maximize coordination with other federal agencies to the greatest extent practicable. (Recommendation 7)

The IRS Commissioner should, in collaboration with OMB, solicit input from CMS, FBI, SSA, and state agency stakeholders on revisions to its security policy to ensure that cybersecurity requirements for state agencies are consistent with other federal agencies and NIST guidance to the greatest extent possible. (Recommendation 8)

The IRS Commissioner should revise its assessment policies to maximize coordination with other federal agencies to the greatest extent practicable. (Recommendation 9)

The Commissioner of SSA should, in collaboration with OMB, solicit input from CMS, FBI, IRS, and state agency stakeholders on revisions to its security policy to ensure that cybersecurity requirements for state agencies are consistent with other federal agencies and NIST guidance to the greatest extent possible and document SSA’s rationale for maintaining any requirements variances. (Recommendation 10)

The Commissioner of SSA should fully develop policies for coordinating with state agencies on the use of prior findings from relevant cybersecurity assessments conducted by other organizations. (Recommendation 11)

The Commissioner of SSA should revise its assessment policies to maximize coordination with other federal agencies to the greatest extent practicable. (Recommendation 12)

Agency Comments and Our Evaluation

We provided a draft of this report to OMB and the four selected federal agencies for their review and comment. In response, three of the agencies (Department of Health and Human Services, FBI, and SSA) stated that they agreed with the recommendations; one agency (IRS) stated that it partially agreed with one recommendation and disagreed with another. OMB did not provide comments on our report. The following three agencies agreed with the recommendations.
The Department of Health and Human Services provided written comments, in which it agreed with our recommendations and identified steps it said CMS had taken or intends to take to address them. For example, the department stated that CMS intends to solicit input from the other federal agencies identified in this report and from state agency stakeholders when making updates to its MARS-E security policy and when updating its assessment guidance to states on how to maximize coordination with other federal entities. The department noted that CMS had developed and implemented its suite of guidance and requirements, known as MARS-E, based on the Patient Protection and Affordable Care Act, FISMA, and NIST. According to the department, variances in security requirements are to be expected because of the flexibility that NIST allows in its guidance. The department added that CMS tailored some of the controls to allow flexibilities for states while keeping the overall intent of the NIST guidance. The department stated that it collaborated with federal agencies, including FBI's CJIS, in developing MARS-E and during subsequent updates of that security policy. However, CMS did not provide us with documentation as evidence of its collaboration with FBI's CJIS on the development of MARS-E. In addition, as noted in this report, CMS had not collaborated with the other agencies included in our review after the development of the most recent version of MARS-E. It is important that federal agencies collaborate to address variances in their cybersecurity requirements; doing so could help to significantly reduce state agencies’ costs in complying with multiple federal agencies’ requirements. The department's comments are reprinted in appendix IV. The department also provided technical comments, which we incorporated as appropriate. In written comments, FBI's CJIS agreed with our three recommendations to the agency. 
Among other things, the agency stated that it would, to the greatest extent possible, collaborate with OMB and solicit input from the other federal agencies identified in this report, as well as from state agency stakeholders, on revisions to its security policy. With regard to our recommendation that FBI’s CJIS develop policies for coordinating with state agencies on the use of prior findings, the agency stated that it had implemented this recommendation and updated its security policy to include coordinating with state agencies on the use of prior findings from relevant cybersecurity assessments conducted by other organizations. However, the agency did not provide documentation showing that it had updated the security policy. As a result, we did not change our assessment of this practice. We will continue to monitor the agency’s progress in implementing the recommendation. The agency's comments are reprinted in appendix V. The agency also provided technical comments, which we incorporated as appropriate. In its written comments, SSA stated that it agreed with our recommendations. SSA's comments are reprinted in appendix VI. The agency also provided technical comments, which we incorporated as appropriate. One agency partially agreed with one recommendation and disagreed with one recommendation. Specifically, IRS partially agreed with our recommendation to, in collaboration with OMB, solicit input from the other federal agencies identified in this report and state agency stakeholders on revisions to its security policy. The agency agreed to participate in collaborative working sessions with OMB and interested stakeholders to discuss the impact of inconsistent standards and the extent to which the standards might be harmonized. However, IRS stated that it must follow Treasury Directives and internal standards for systems that process tax data and, as a result, its ability to harmonize requirements may be limited.
As noted in this report, federal agencies may have legitimate reasons for variances in their cybersecurity requirements, such as applying different information security controls and more stringent technical parameters to protect data for which the agencies are responsible in a manner consistent with various security requirements originating in federal laws, directives, and regulations. Nevertheless, we continue to believe that it is important for all of the agencies in our review to identify opportunities where requirements can be streamlined or made more consistent while still achieving each agency's desired security outcomes because doing so may reduce potential burdens on state agencies, as discussed in this report. Thus, we maintain that our recommendation is still warranted. IRS disagreed with our recommendation to revise its assessment policies to maximize coordination with other federal agencies to the greatest extent practicable. Specifically, IRS stated that it has sole statutory oversight authority and enforces requirements for agencies subject to Internal Revenue Code § 6103. As such, IRS cannot solely rely on an assessment conducted by another agency. However, as noted in this report, OMB encourages federal agencies to coordinate on their assessments whenever practicable. Doing so would not necessarily require IRS to solely rely on another agency’s assessment nor conflict with its authority to conduct statutory oversight because IRS could leverage and share relevant information and artifacts with other federal agencies while continuing to conduct its own required assessments and oversight. Further, as discussed in this report, state chief information officers identified a number of areas where federal agencies requested similar information through documentation requests and interviews, such as network configurations, password policies, and incident response policies.
Leveraging and sharing relevant information that is collected by federal agencies could help those agencies, including IRS, reduce some of their data collection needs while also helping to minimize burdens on state officials’ time and resources. We acknowledge that complete alignment of assessment policies may not be feasible in light of unique statutory responsibilities and requirements; however, agency coordination and simplification of certain assessment logistics may be possible and could result in gained efficiencies from the perspective of the federal government. Thus, we maintain that our recommendation is still warranted. IRS's comments are reprinted in appendix VII. We are sending copies of this report to the appropriate congressional requesters, the Director of OMB, the Administrator of CMS, the Assistant Attorney General for Administration for the Department of Justice, the FBI Director, the IRS Commissioner, and the Commissioner of SSA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6240 or at dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Methodology and Results of GAO’s Survey of State Officials’ Views

We administered a survey to the offices of the Chief Information Officer and Chief Information Security Officer (CISO) in the 50 states, District of Columbia, American Samoa, Guam, Puerto Rico, and the U.S. Virgin Islands. To administer the survey, we emailed each state a fillable PDF questionnaire. We fielded the survey from February 19, 2019, through April 24, 2019. We received usable survey responses from 50 of the 55 states and territories, for a response rate of 91 percent.
In developing, administering, and analyzing the survey, we took steps to minimize the five types of errors that may affect survey results— population coverage, sampling, measurement, nonresponse, and data processing. Our results are not subject to either of the first two types of errors—population coverage (under- or over-coverage) error of the study population or sampling error—because we defined all states and five territories as our study population, and sent each a questionnaire. To minimize the third type of error (measurement error), we pretested the questionnaire with CISOs (or their delegates) in four states that varied over two characteristics related to our questions: whether or not the state took a “federated” or “consolidated” management approach to data center and other information technology (IT) infrastructure, and the relative size of the state’s IT budget. Using cognitive interviewing techniques, such as nondirective probing of answers and asking respondents to think aloud when formulating answers, we determined whether (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on state officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. Based on the pretests and interviews with external subject matter experts on questionnaire subjects, we modified the questionnaire. During the survey, we also followed up by email or phone with some respondents to clarify unclear answers and edit them if necessary. Additionally, after the survey, our in-depth interviews with four responding states confirmed their answers to selected questions, or resulted in edits to those answers. To minimize the potential for the fourth type of error (nonresponse error), we emailed or called states that did not respond to the initial notice multiple times to encourage survey participation or provide technical assistance, as appropriate. 
Also, the follow-up contacts made to clarify answers resulted in obtaining some answers to questions that had been left blank in returned questionnaires. While the four states and one territory that did not return questionnaires may have differed to an unknown extent in what their answers would have been, compared to the aggregate answers of those who did respond, the overall impact on our results from only five missing members of the population is unlikely to be material. To minimize the possibility for the fifth type of error (data processing error), all data entry, edits, and analysis were verified by a second, independent analyst on the engagement team. To further understand the states’ experiences with and views of selected federal agencies’ cybersecurity assessments, we conducted in-depth interviews with four states. In selecting the four states for in-depth interviews, we considered responses from 44 states that had submitted surveys prior to April 11, 2019. From these states, we analyzed responses to survey questions 4, 7, 9, 10, 11, 12, 13, 14, 15, 16, and 17, and identified whether states’ responses reflected a generally favorable opinion or a generally unfavorable opinion of federal cybersecurity requirements and assessments. Based on this information, we selected two states to interview that had a generally favorable opinion and two states that had a generally unfavorable opinion toward federal cybersecurity assessments and requirements. From among these states, we chose to interview states that provided different responses about increases in costs and/or coordination with federal and nonfederal agencies. We sent an email to each of the four states to ask for their participation and conducted follow-up interviews with officials from the offices of the state CIO and state CISO, state audit entities, and mission agencies from the four states.
Our interview questions concerned topics such as challenges states may have faced in complying with federal cybersecurity requirements, the impacts federal requirements and assessments may have had on states, the efficiency and effectiveness of assessments performed by each federal agency, and the nature and extent of any duplication in federal agencies’ cybersecurity requirements. Although the results of these in-depth interviews are not generalizable to all of the states and territories that responded to our survey, they provide richer insight into some of the information we collected through our survey, such as the reasons for certain questionnaire responses or the sources of variation in states’ perspectives.

The following presents the survey questionnaire that we administered; the aggregated results from the responses appear below each question. Not all state CISOs who completed the survey responded to all questions, and some questions were not discussed in the body of our report.

Federal Requirements

These questions ask about the federal agency cybersecurity requirements that set standards in any of the related general security control categories, and your experiences with those applicable to your state.

1. For how long has the current CISO of your state been in that role? (check one box)

2. Please provide some background on your state’s governance model for cybersecurity. Specifically, how is the responsibility for managing the following aspects of cybersecurity primarily assigned within your state? (check the one box in each row which best represents your answer)

3. Is your state currently required to meet any security requirements by any of the following federal agencies in order to obtain and use federal data?
Federal Bureau of Investigation (FBI) (Criminal Justice Information Services (CJIS) Security Policy CJISD-ITS-DOC-08140-5.7, Version 5.7)
Centers for Medicare & Medicaid Services (CMS) (Minimum Acceptable Risk Standards for Exchanges, Version 2.0)
Internal Revenue Service (IRS) (IRS Publication 1075, Tax Information Security Guidelines For Federal, State, and Local Agencies, September 2016)
Social Security Administration (SSA) (Electronic Information Exchange Security Requirements and Procedures for State and Local Agencies Exchanging Electronic Information, Version 8.0)

4. Federal security requirements applicable to states may vary in a number of ways. Considering as a whole all of the federal agencies’ requirements that your state is currently required to meet, how much do you think they vary from each other in each of the following ways?

5. Consider again all the applicable federal cybersecurity requirements required of your state. Do one or more federal agencies have any requirements that most vary from other agencies? Within each of the following families of security controls, check all boxes that apply to tell us in what ways requirements vary, and which agency(s) vary the most from others. (If “Other(s)” varying agencies selected, list in Question 6.)

NIST Control Family
Access Control (AC)
Awareness and Training (AT)
Audit and Accountability (AU)
Security Assessment and Authorization (CA)
Configuration Management (CM)
Contingency Planning (CP)
Identification and Authentication (IA)
Incident Response (IR)
Media Protection (MP)
Physical and Environmental Protection (PE)
Planning (PL)
Personnel Security (PS)
Risk Assessment (RA)
System and Services Acquisition (SA)
System and Communications Protection (SC)
System and Information Integrity (SI)
Program Management (PM)

6. If you indicated above that any other federal agencies have requirements that most vary from others, what are those other agencies and the control categories and way(s) they vary?
(Narrative answers not displayed)

7. If you identified any variation in the requirements of multiple federal agencies in question 5 above, what is your overall estimation of the degree of that variation in each of the following families of controls?

Families of controls (based on NIST 800-53)
Access control (AC)
Awareness and training (AT)
Audit and accountability (AU)
Security assessment and authorization (CA)
Configuration management (CM)
Contingency planning (CP)
Identification and authentication (IA)
Incident response (IR)
Maintenance (MA)
Media protection (MP)
Physical and environmental protection (PE)
Planning (PL)
Personnel security (PS)
Risk assessment (RA)
System and services acquisition (SA)
System and communications protection (SC)
System and information integrity (SI)

8. Do you have any comments on or explanations of your answers to the question above that would help us appropriately interpret those answers? (itemize your comments by the row letters above, to the extent possible, in the box below)

(Narrative answers not displayed)

9. Has your state taken any of the following actions specifically to address variation(s) across agency requirements?
Increased coordination with NASCIO and other non-federal agencies outside your state
Increased coordination with other agencies within your state
Any other action(s)

10. Have the variations increased any of the following types of costs and/or challenges?

The following questions ask about assessments performed by federal agencies on your state on its compliance with the federal cybersecurity requirements covered above. For the purposes of this survey, an “assessment” includes only the activities in the period between the date the state is notified of the assessment and the date the federal agency or entity carrying out the assessment (e.g., contractor) completes its on-site work.

11.
Approximately how many assessments did each of the following federal agencies perform on your state’s efforts to comply with its requirements in calendar years 2016-2018? (When counting assessments performed by one federal agency on more than one state mission agency or operational entity at the same time, please count each assessment individually.)
Any other federal agency(s)

12. Considering up to the last 3 assessments a federal agency performed in 2016-2018, approximately how long in calendar time was taken per assessment, on average, to perform?
Any other federal agency(s)

13. Considering up to the last 3 assessments a federal agency performed in 2016-2018, approximately how many of your state’s staff hours were expended per assessment, on average, to comply?
Any other federal agency(s)

14. And considering up to the last 3 assessments a federal agency performed in 2016-2018, what is your best estimate of the range of cost in dollars (including staff hour labor, travel, materials, and contract costs) your state expended per assessment, on average, to comply?
Estimated lower end of dollar cost (mean value): $77,103 (17 responses)
Estimated upper end of dollar cost (mean value): Don’t know 28 (17 responses)
$623,650 (19 responses) $840,472 (19 responses)
$211,574 (21 responses) $418,238 (21 responses)
$33,822 (16 responses) $61,719 (16 responses)

15. Considering all the federal assessments performed on your state’s implementation of requirements in 2016-2018, how would you rate those assessments, overall, on the following factors?

16. In summary, how would you rate the efficiency of assessments performed by each federal agency on your state’s implementation of requirements?
Any other agency(s)

17. In summary, how would you rate the effectiveness of assessments performed by federal agencies on your state’s implementation of requirements?
Any other agency(s)

18.
Considering the issues covered in this questionnaire, what challenges or impacts, if any, has your state experienced regarding the federal requirements and assessment processes? (list and describe up to 5)

(Narrative answers not displayed)

19. Do you have any additional explanations of your answers or comments on any of the issues in this questionnaire?

(Narrative answers not displayed)

20. Who is the person primarily responsible for completing this questionnaire whom we can contact in case we need to clarify a response? If the state CISO did not complete this questionnaire, we recommend that the CISO review these answers.

Appendix II: Detailed Assessment of Selected Federal Agencies’ Policies

The tables below identify the extent to which each of the four selected federal agencies established policies that addressed individual activities supporting two areas of coordination: (1) coordination with state agencies when assessing states’ cybersecurity and (2) coordination with other federal agencies on the assessment of state agencies’ cybersecurity.

Appendix III: Breakdown of Selected Federal Agencies’ Reported Spending for Fiscal Years 2016 through 2018

The following table provides the breakdown of selected agencies’ reported spending during fiscal years 2016 through 2018 associated with assessing states’ compliance with cybersecurity requirements.

Appendix V: Comments from the Federal Bureau of Investigation

Appendix VI: Comments from the Social Security Administration

Appendix VII: Comments from the Internal Revenue Service

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Josh Leiling (assistant director), Lori Martinez (analyst in charge), Gerard Aflague, Joseph Andrews, David Blanding, Chris Businsky, Rebecca Eyler, Torrey Hardee, Andrea Harvey, Keith Kim, Monica Perez-Nelson, and Carl Ramirez made significant contributions to this report.
Why GAO Did This Study

To protect data that are shared with state government agencies, federal agencies have established cybersecurity requirements and related compliance assessment programs. Specifically, they have numerous cybersecurity requirements for states to follow when accessing, storing, and transmitting federal data. GAO was asked to evaluate federal agencies' cybersecurity requirements and related assessment programs for state agencies. The objectives were to determine the extent to which (1) selected federal agencies' cybersecurity requirements for state agencies varied from each other and from federal guidance, and (2) federal agencies had policies for coordinating their assessments of state agencies' cybersecurity.

GAO reviewed four federal agencies that shared data with states and had assessment programs: CMS, FBI, IRS, and SSA. GAO compared, among other things, each agency's cybersecurity requirements to federal guidance and to other selected agencies' requirements, and reviewed federal agencies' policies for conducting assessments. In addition, GAO examined OMB's efforts to foster coordination among federal agencies. GAO also surveyed and received responses from chief information security officers in 50 out of 55 U.S. states, territories, and the District of Columbia to obtain their perspectives.

What GAO Found

Although the Centers for Medicare & Medicaid Services (CMS), Federal Bureau of Investigation (FBI), Internal Revenue Service (IRS), and Social Security Administration (SSA) each established requirements to secure data that states receive, these requirements often had conflicting parameters. Such parameters involve agencies defining specific values, such as the number of consecutive unsuccessful logon attempts prior to locking out the user. Among the four federal agencies, the percentage of total requirements with conflicting parameters ranged from 49 percent to 79 percent.
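The percentages cited here are simple shares of each agency's requirement set. A minimal sketch of that arithmetic, using invented counts (the report excerpt does not give GAO's actual per-agency tallies):

```python
# Hypothetical illustration: the share of an agency's total requirements
# whose parameters (e.g., allowed unsuccessful logon attempts) conflict
# with another agency's. The counts below are invented, not GAO's data.

def pct_conflicting(conflicting, total):
    """Percent of total requirements with conflicting parameters,
    rounded to the nearest whole percent."""
    return round(100 * conflicting / total)

# An agency with 140 requirements, 69 of which conflict, sits at the low
# end of the reported range; 110 of 140 would sit at the high end.
print(pct_conflicting(69, 140))   # prints 49
print(pct_conflicting(110, 140))  # prints 79
```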
Regarding variance with National Institute of Standards and Technology guidance, GAO found that the extent to which the four agencies did not fully address guidance varied from 9 percent to 53 percent of total requirements. The variances were due in part to the federal agencies' insufficient coordination in establishing requirements. Although the Office of Management and Budget's (OMB) Circular A-130 requires agencies to coordinate, OMB has not ensured that agencies have done so. Further, while federal agencies' variance among requirements may be justified in some cases because of particular agency mission needs, the resulting impact on states is significant, according to state chief information security officers (see figure).

The four federal agencies that GAO reviewed either fully or partially had policies for coordinating assessments with states, but none of them had policies for coordinating assessments with each other. State chief information security officers that GAO surveyed reinforced the need to coordinate assessments by identifying impacts on state agencies' costs, including multiple federal agencies that requested the same documentation. Coordinating with state and federal agencies when assessing state agencies' cybersecurity may help to minimize states' cost and time impacts and reduce associated federal costs. Federal agencies reported spending about $45 million for fiscal years 2016 through 2018 on assessments of state agencies' cybersecurity.

What GAO Recommends

GAO is making 12 recommendations to the four selected agencies and to OMB. Three agencies agreed with the recommendations, and one agency (IRS) partially agreed or disagreed with them. OMB did not provide comments. GAO continues to believe all recommendations are warranted.
gao_GAO-20-145
Background

Types of Economic Sanctions

Sanctions provide a range of tools that Congress and the President may use to seek to alter or deter the behavior of a foreign government, individual, or entity in furtherance of U.S. national security or foreign policy objectives. For example, sanctions may be used in response to human rights abuses, weapons proliferation, or occupation of a foreign country, ultimately seeking to change the behavior of those perpetrating these offenses. Sanctions may include actions such as limiting trade; blocking assets and interests in assets subject to U.S. jurisdiction; limiting access to the U.S. financial system, including limiting or prohibiting transactions involving U.S. individuals and businesses; restricting private and government loans, investments, insurance, and underwriting; and denying foreign assistance and government procurement contracts.

Sanctions can be comprehensive or targeted.

Comprehensive sanctions. Generally, comprehensive sanctions include broad-based trade restrictions and prohibit commercial activity with an entire country. Examples of comprehensive sanctions include U.S. sanctions against Iran and Cuba.

Targeted sanctions. Targeted sanctions restrict transactions of and with specific persons or entities. For example, the U.S. sanctions program related to Somalia targets persons engaging in acts threatening the peace, security, or stability of Somalia. Sectoral sanctions are a form of targeted sanctions directed at a specified sector, or sectors, of a target’s economy. For instance, Executive Order 13662 authorized sanctions targeting certain sectors of the Russian economy as might later be determined by the Secretary of the Treasury in consultation with the Secretary of State, such as the financial services, energy, mining, and defense and related materiel sectors.
The United States also uses supplementary sanctions, known as secondary sanctions, which target third-party actors doing business with, supporting, or facilitating targeted regimes, persons, and organizations. For example, in February 2017, Treasury imposed sanctions against 13 individuals and 12 entities, including persons outside Iran, for their involvement in or support for Iran’s ballistic missile program, as well as for acting for or on behalf of, or providing support to, Iran’s Islamic Revolutionary Guard Corps-Qods Force. There are currently 20 country-based or country-related sanctions programs, according to lists of sanctions programs published by Treasury and State. The sanctions may target the governments of these countries or individuals and entities. Figure 1 shows country-based and country-related U.S. sanctions programs as of July 2019.

Implementing Agencies for U.S. Economic Sanctions

Treasury, State, and Commerce, as well as various other U.S. agencies, play roles in implementing sanctions.

Treasury

Treasury implements sanctions by taking actions such as designating entities for the application of sanctions. These sanctions may include blocking entities’ access to U.S.-based assets, prohibiting them from engaging in financial transactions in the United States, and restricting access to U.S. financial services. Treasury’s Office of Foreign Assets Control (OFAC), which is part of the Office of Terrorism and Financial Intelligence (TFI), has primary responsibility for Treasury’s sanctions implementation, according to Treasury. TFI is charged with safeguarding the U.S. financial system against illicit use and combating rogue nations, terrorist facilitators, weapons of mass destruction proliferators, money launderers, drug kingpins, and other national security threats.
As part of its implementation of sanctions, OFAC publishes a list, known as the Specially Designated Nationals List, of individuals, groups, and entities whose assets in the United States are blocked and with which U.S. persons are prohibited from dealing. The addition of an individual, group, or entity to this list is referred to as a sanctions designation. Entities or groups listed include those owned or controlled by, or acting for or on behalf of, targeted country governments. OFAC also lists individuals, groups, and entities, such as terrorists and narcotics traffickers, designated under targeted sanctions programs that are not country specific. OFAC may also issue licenses, general or specific, to permit activities that would otherwise be prohibited under a sanction. For example, OFAC has issued a general license to allow nongovernmental organizations to engage in not-for-profit activities in Syria in support of humanitarian projects, democracy-building, education, and noncommercial development projects directly benefitting the Syrian people. According to Treasury, OFAC participates in all aspects of sanctions implementation, including targeting, outreach to the public, and compliance. OFAC also enforces sanctions by conducting civil investigations of sanctions violators and working with law enforcement agencies.

State

State implements economic and other sanctions through a variety of actions, such as implementing sanctions-related controls on defense exports, restricting foreign aid, implementing arms embargoes pursuant to United Nations Security Council resolutions, and restricting visas. State’s primary sanctions coordination office is the Office of Economic Sanctions Policy and Implementation (SPI), which is part of the Division for Counter Threat Finance and Sanctions in State’s Bureau of Economic and Business Affairs.
According to State, SPI is responsible for developing and implementing foreign policy–related sanctions adopted to counter threats to national security posed by particular activities and countries. In addition, according to State, SPI builds international support for implementing sanctions, provides foreign policy guidance to Treasury’s OFAC and Commerce’s Bureau of Industry and Security on sanctions implementation, and works with Congress to draft legislation that advances U.S. foreign policy goals in these areas. Further, according to State, SPI works to remove sanctions when appropriate to reward and incentivize improved behavior or demonstrate U.S. support for newly established democratic governments. Although SPI is State’s primary sanctions coordinating office, other State bureaus, offices, and overseas posts may have significant roles in sanctions implementation, depending on the sanctions program. Some functional bureaus interact with OFAC within their areas of expertise. For example, according to State, the Bureau of International Security and Nonproliferation has expertise on missile, chemical, and biological proliferation as well as how to counter proliferation. The bureau assists in developing sanctions programs and designating sanctions targets under nonproliferation law, according to State. Also, the Bureau of Counterterrorism and Countering Violent Extremism takes part in developing and evaluating sanctions policy as well as helping target entities for sanctions under various authorities, including an executive order targeting those that commit or support terrorism and the Foreign Terrorist Organization section of the Immigration and Nationality Act, according to State. 
Additionally, the Bureau of International Narcotics and Law Enforcement Affairs uses its expertise in drug trafficking, corruption, and crime to assist in selecting targets for counternarcotics sanctions, transnational criminal organization sanctions, and corruption-related sanctions under human rights law, according to State. SPI also works with State’s regional bureaus, such as the Bureau of African Affairs; country offices; and overseas posts to develop potential targets for given sanctions programs, such as those in Somalia and Burundi.

Intelligence Community

Both Treasury and State also have intelligence offices that provide the sanctions-implementing offices with information to facilitate sanctions targeting and enforcement efforts and the development of new sanctions policy.

Treasury’s Office of Intelligence and Analysis (OIA). TFI’s OIA is responsible for TFI’s intelligence functions as well as for integrating the Treasury Department into the larger Intelligence Community and providing support to both Treasury leadership and the Intelligence Community.

State’s Bureau of Intelligence and Research (INR). INR’s primary mission is to provide all-source intelligence and analysis to serve U.S. diplomacy. INR provides independent analysis of events to State policymakers as well as officials throughout the U.S. government and coordinates with other intelligence agencies to obtain relevant information to inform State policymakers. For example, INR’s analytical offices and its Sanctions Support Team, when requested, gather and provide information—both classified and open sourced—on sanctions targets to policy officials at State and Treasury.

In addition to OIA and INR, other U.S. intelligence agencies provide support to the sanctions-implementing agencies.
Commerce

Commerce implements sanctions by restricting licenses for exports, reexports, and transfers (in-country) involving U.S.-origin items—commodities, software, and technology—subject to its jurisdiction and destined for sanctioned persons, entities, and destinations. Through its export licensing process, Commerce’s Bureau of Industry and Security (BIS) restricts sanctioned countries’ and persons’ access to U.S. items. BIS also enforces export controls through its Office of Export Enforcement, which conducts criminal and administrative investigations of potential violations of export regulations.

Other Agencies

Other U.S. agencies with roles in sanctions implementation include the Departments of Defense, Energy, Homeland Security, and Justice. The agencies involved and the extent of their involvement depend largely on their areas of expertise. The following are a few examples of how other agencies are involved with sanctions:

The Department of Defense restricts arms sales and other forms of military cooperation and is involved in decisions regarding export licenses.

The Department of Energy assists in implementing nonproliferation sanctions.

The Department of Homeland Security’s Customs and Border Protection helps ensure that shipments to and from sanctioned countries and entities do not leave or enter the United States.

The Department of Justice investigates and prosecutes violations of sanctions and export laws and provides legal reviews of sanctions designations.

Treasury’s, State’s, and Commerce’s Roles in Implementing Sanctions Are Established by Statute, Executive Order, or an Interagency Process

The roles of Treasury, State, and Commerce in implementing sanctions are assigned either directly by the statute or executive order authorizing the sanctions or through an interagency process and agreement.
Some statutes and executive orders designate an agency to serve as the primary agency for sanctions implementation and also designate one or more agencies to support the primary agency through consultation. For example, Executive Order 13570, Prohibiting Certain Transactions With Respect to North Korea, authorizes Treasury, in consultation with State, to carry out actions to employ all powers granted to the President by specified laws to carry out the purposes of the order. While some statutory authorities may designate specific agencies for implementation, most do not make such designations but rather delegate the authority to do so to the Office of the President, according to State officials. Agency officials also noted that they are often involved in drafting new sanctions legislation and, if the statute will designate specific agency roles, are able to advise lawmakers regarding the selection of the primary agency for implementing sanctions. When a statute or executive order authorizing sanctions delegates authority to the Office of the President, specific agency roles are assigned through an interagency process at the National Security Council (NSC). According to State officials, the NSC’s Principals Coordinating Committee discusses and assigns agency roles in a sanctions program. According to State officials, most of the committee’s decisions about agency roles are made at the staff level, and the actual principals become involved only if there is a disagreement among the agencies’ staffs. State officials told us that each agency’s area of expertise and its available resources factor into the selection of an agency to lead implementation of a particular sanctions authority. 
For example, according to a State official, Treasury is often the lead for country-based sanctions, because these programs often focus on international financial transactions, while State usually serves as the lead for sanctions requiring more specialized knowledge, such as those relating to weapons of mass destruction and nuclear nonproliferation. State officials added that there is usually very little, if any, disagreement among the agencies regarding whether they should have primary or consultative roles. Once a decision is made, the President typically issues a delegation memo assigning responsibility for implementation of elements of the sanctions authority to each agency involved, according to Treasury officials.

Treasury, State, and Commerce each provide publicly available information about the sanctions they implement and the authorities underlying those sanctions.

Treasury. OFAC maintains a publicly available list of all sanctions laws and executive orders that Treasury has a role in implementing. The list is organized by sanctioned country and functional program. For each country-based, country-related, or functional program, the entry in the list includes a discussion of statutory authorities, executive orders, and regulations under which the program is implemented. According to Treasury officials, OFAC staff track and update changes in U.S. sanctions policy as needed and post new sanctions information to the agency’s website as soon as a sanction is approved.

State. SPI also maintains publicly available lists of the major sanctions laws and executive orders that State has a role in implementing. These lists are organized by sanctioned country and by functional program. According to State officials, SPI typically updates these lists when authorities are established or rescinded and periodically reviews and updates the web pages where it posts the lists.
According to State officials, the lists are not intended to be comprehensive and are meant only to give the reader a general understanding of some of State’s high-profile sanctions programs and to provide an initial resource for information and recent actions.

Commerce. BIS produces a compilation of legal authorities pertaining to the administration of export controls under the Export Administration Regulations. Unlike Treasury’s and State’s lists, Commerce’s compilation comprises all of Commerce’s legal authorities to control exports, reexports, and transfers (in-country). These include executive orders, laws, and presidential declarations authorizing controls for national security, chemical and biological weapons, and nuclear nonproliferation reasons, as well as controls for foreign policy–related sanctions. According to Commerce officials, the compilation is updated annually to reflect additions to, or deletions of, legal authorities. BIS also issues rules amending the Export Administration Regulations to implement new executive orders and statutory and other legal authorities on a frequent basis, at times within a few days of the announcement or enactment of the underlying authority. According to Commerce officials, publishing rules amending the Export Administration Regulations provides the public with timely notice of changes to Commerce’s sanctions authorities and actions taken pursuant to these authorities.

Agencies Assess Sanctions’ Impacts but Cited Difficulties in Assessing Sanctions’ Effectiveness in Achieving Broader Policy Goals

Treasury, State, and Commerce assess potential and observed impacts of specific sanctions, but officials stated they do not conduct agency assessments of the effectiveness of sanctions in achieving broader U.S. policy goals and cited various difficulties in doing so. Each agency’s sanctions implementation offices rely mainly on assessments performed by the Intelligence Community, including Treasury’s OIA and State’s INR.
These assessments analyze the impacts of specific sanctions on a particular aspect of the sanction’s target—for example, the sanctions’ impact on the target country’s economy or trade, according to agency officials. However, these assessments do not analyze sanctions’ overall effectiveness in achieving broader U.S. policy goals or objectives, such as whether the sanctions are advancing the national security and policy priorities of the United States, according to Treasury officials. Treasury, State, and Commerce have not conducted such broader assessments on their own, and agency officials cited a variety of difficulties related to doing so. However, according to Treasury, State, and Commerce, agency assessments of sanctions’ impacts often contribute to broader interagency discussions, typically coordinated through the NSC, that examine the effectiveness of sanctions in achieving policy goals. According to agency officials, an NSC-led process allows the U.S. government to draw upon multiple agencies’ inputs and perspectives, and to consider these issues in the larger policy context, because sanctions are often only one element of broader government-wide strategies to achieve U.S. policy goals.

Treasury, State, and Commerce Assess Specific Sanctions’ Impacts on Targets and Receive Assessments from Other Intelligence Agencies

Treasury Assessments

Treasury has assessed both the observed and potential impacts of specific sanctions designations on various aspects of targets, such as a target country’s economy. Treasury’s intelligence component, OIA, conducts the majority of these impact assessments and produces analytic papers on sanctions’ impacts, according to officials. OIA officials stated that the type of analysis varies depending on the purpose or nature of the assessment.
For example, some analytic papers focus on the overall economic impact of the sanction on the target country, while others examine the impact on a specific target, such as an entity or group of entities. According to Treasury officials, the office has conducted both short-range and long-range analyses of sanctions’ impacts at both the country-specific and the authority-specific level. Treasury officials said that the frequency of assessments conducted for a particular country or authority varies according to the sanctions program’s size and relative importance to current U.S. policy goals. OIA officials reported that the Under Secretary for TFI requires that impact assessments be conducted prior to an action as part of the targeting process and retrospectively after a designation takes place. According to Treasury officials, TFI, including OIA, considers conducting such impact assessments to be part of OIA’s mission. OIA officials noted that OIA, as well as TFI more broadly, considers understanding sanctions impact to be integral to developing sanctions policy and making targeting decisions. OIA officials stated that their impact assessments are circulated within Treasury and their broader analytic papers are circulated within the Intelligence Community and interagency. In addition, OFAC officials reported that they request impact assessments from OIA whenever new sanctions targets are being considered. OFAC officials stated that OIA’s impact assessments are an integral part of any targeting matrix prepared by OFAC’s Office of Global Targeting. According to OFAC officials, the type of assessment requested depends on the issue, program, and target. 
The requested assessments may include, for example, determining whether a target has assets in the United States to an extent that sanctions would be impactful, identifying the holdings of a given target globally and its interactions with the United States, or analyzing the second- and third-order effects of a potential sanctions designation. OFAC officials said that these assessments are also used in risk-mitigation planning. For example, if an assessment revealed that a particular sanction would lead to an undesirable consequence, such as blocking important medical supplies or other humanitarian items, OFAC might take preemptive measures to mitigate that undesirable consequence through a general license or other available tools.

Treasury's Office of International Affairs also prepares some assessments of sanctions' impacts. According to Treasury officials, the Office of International Affairs occasionally conducts macroeconomic assessments of the impact of specific sanctions to inform TFI policymaking. A senior Office of International Affairs official reported that most of the office's macroeconomic analyses of sanctions' impacts are focused on the potential impact on economic growth and financial stability in the target country. In addition, OFAC officials stated that the Office of International Affairs often participates in agency discussions and may provide verbal or written assessments of sanctions' impact on foreign partners' industries and markets as well as on U.S. companies.

State Assessments

State conducts some assessments of the impact of sanctions on their intended targets. INR produces most reports on sanctions impact, which are based on all sources of information (i.e., classified and open source). According to INR officials, these reports are often produced at the request of State policymakers, and occasionally coordinated with the broader Intelligence Community.
INR facilitates the review of sanctions’ impacts on particular governments or other areas of interest at the request of, or in partnership with, State’s regional and functional bureaus. According to INR, most of INR’s intelligence support responds to specific questions and requests, such as whether a particular company is still operating in a sanctioned country. According to State officials, INR provides responses to requests in written products, such as formal INR or Intelligence Community assessments, or more informally through channels such as oral briefings or email responses. INR officials noted that written products often inform interagency discussions on sanctions at the NSC, since questions asked at State often become relevant to broader policy discussions. Other State entities have also examined the impact of sanctions. In 2016, State’s Office of the Chief Economist, responding to a request from SPI, analyzed the economic impact of targeted sanctions on Russian firms. According to SPI officials, they commissioned the study because they wanted to understand the specific impact of sanctions on a country that was already facing economic challenges, given that sanctions were among several foreign policy tools used to address Russian behavior. According to SPI officials, this was the only analysis of sanctions impact that SPI had requested of the Office of the Chief Economist in the past 5 years. In addition, some embassies have used cables to State headquarters to report on the impact of sanctions. According to State, most such information on a sanction’s impact is captured in a sentence or two as part of a cable focused on other issues. However, embassies in countries where sanctions are imposed on the host government (or nearby governments) often dedicate significant time to reporting on the impact of sanctions and how they affect broader foreign policy, according to State officials. For example, the U.S. 
embassy in Seoul produced a series of cables in 2017 and 2018 detailing observed impacts of sanctions on the North Korean economy.

Commerce Assessments

Commerce has conducted some assessments of the prospective impacts of sanctions, according to Commerce officials. According to Commerce officials, the Under Secretary or Deputy Under Secretary communicates requests for analyses of sanctions that originate with the NSC's Principals Coordinating Committee. According to Commerce officials, these requests are infrequent, with very few received in recent years, and generally related only to Russia and Iran. According to Commerce, the results of these assessments may include two components: (1) a simulation of potential economic impact and (2) background data on trade flows and vulnerabilities. The first component may include a projection of sanctions' impact on gross domestic product (GDP), consumer prices, production in specific industries, jobs, and trade flows. The second component may include background on the amount and nature of any U.S. trade with countries that might be sanctioned. For example, in March 2015, Commerce produced an analysis to determine the areas of greatest interdependence among the United States, Russia, and U.S. partners that were at risk of being affected by prospective sanctions against Russia.

Other Intelligence Agencies' Assessments

Treasury and State officials reported using assessments of sanctions' impacts provided by intelligence agencies outside Treasury or State.

Assessments used by Treasury. OFAC officials reported requesting assessments from other intelligence agencies, in addition to OIA's assessments. According to OFAC, the type of assessment requested—for example, gauging the reaction of a target or government leadership to sanctions or examining a target's assets globally—depends on the issue and the program. OFAC also reported requesting analysis of sanctions' impact on strategic targets and their associates.
OFAC officials reported that these assessments are taken into account as Treasury considers developing additional sanctions policies, targets, or both.

Assessments used by State. INR and SPI officials stated that they use assessments of sanctions' impact conducted by intelligence agencies outside State. According to an INR official, the INR sanctions team will obtain Intelligence Community assessments relevant to State policymakers' concerns. SPI officials stated that most of the assessments they use are focused on the potential impact of proposed sanctions. According to the officials, the assessments help them design sanctions tools and develop targets to maximize impact. For example, according to SPI officials, the Intelligence Community will assess where the largest impact might be by assessing whether actors are likely to cease particular activities if targeted or will identify points where targets interface with the U.S. financial system. SPI officials stated that the number of assessments conducted depends on multiple variables, including current events in the targeted country and the degree of senior policymaker interest. An INR official stated that most Intelligence Community resources (i.e., intelligence collection and analysis) are focused on just a few sanctions regimes, such as North Korea, Iran, and Russia. Moreover, according to State officials, routine, finished analysis—assessing the impact of sanctions either before or after their imposition—is not always available from the Intelligence Community or is slow in delivery. State officials stated that this type of regular intelligence reporting and analysis is critical to informing sanctions policymaking at all stages (e.g., planning, targeting, implementing, enforcing, and revising).
Agencies Have Cited Difficulties in Conducting Assessments of Sanctions' Effectiveness in Meeting Policy Goals

Treasury, State, and Commerce officials identified a range of analytic issues that make it difficult to assess the effectiveness of a sanctions program in meeting broad U.S. foreign policy goals. The difficulties they cited included the following:

Isolating sanctions' effects from other factors is difficult. Agency officials cited the difficulty—or, in some cases, the impossibility—of identifying sanctions as the sole or most significant cause of a target's action relative to U.S. policy goals. For example, a sanctioned country may decide to cease certain behavior for any number of reasons that may be unrelated to the sanctions or other U.S. policy measures. OFAC officials also stated that behavioral change can be subtle, incremental, and lacking clear correlations with specific causes. In addition, Treasury officials noted that sanctions are often used in conjunction with other policy tools, such as diplomatic engagement with the target, export controls, and visa bans. Distinguishing the impact of each policy tool used is exceedingly difficult due to the limited information available via intelligence and law enforcement channels, according to Treasury officials.

Policy goals and objectives often shift. Treasury officials stated that U.S. policy goals and objectives underpinning the sanctions can change over the course of a sanctions program, making it difficult to measure sanctions' effectiveness in achieving any ultimate policy objective. According to OFAC officials, because sanctions programs are ongoing, any assessments of a sanctions program's effectiveness would necessarily be interim, not final, and the metrics used to measure effectiveness might change over the program's duration.

Reliable data are sometimes lacking.
Agency officials stated that a lack of reliable data on certain targets or countries can also make it difficult to assess the effectiveness of sanctions.

According to Treasury, State, and Commerce officials, given these difficulties and limited resources, they do not conduct their own assessments of the overall effectiveness of existing sanctions programs in achieving broad policy goals. Instead, they have directed resources toward the assessments of sanctions' impacts on targets, such as the impact on a target country's economy or trade. Agency officials also noted that there is no policy or requirement for agencies to assess the effectiveness of sanctions programs in achieving broad policy goals. However, Treasury and State officials stated that sanctions policy is continuously evaluated informally by those implementing the sanctions, as new information comes in and as new targets are developed. Moreover, Treasury, State, and Commerce stated that agency assessments of sanctions' impacts often contribute to broader interagency discussions, typically coordinated through the NSC, that examine the effectiveness of sanctions in achieving broad policy goals. According to agency officials, an NSC-led process allows the U.S. government to draw on multiple agencies' inputs and perspectives, and to consider these issues in the larger policy context, given that sanctions are often only one element of broader government-wide strategies to achieve U.S. policy goals.

Studies Suggest Certain Factors Contributed to More-Effective Sanctions, but These Studies May Not Fully Reflect Certain Types of U.S. Sanctions

We found strong evidence—based on studies examining factors that contributed to the effectiveness of sanctions in changing targeted countries' behavior—that sanctions have been more effective when implemented through an international organization, or when targeted countries had some existing dependency on or relationship with the United States.
We also found strong evidence—based on studies examining factors that increased the economic impact of sanctions on targeted countries—that sanctions imposed through an international organization were associated with greater impact. In addition, we found strong evidence that the economic impact of sanctions has generally been greater when they were more comprehensive in scope or severity. Sanctions may also have unintended consequences for targeted countries, such as negative impacts on human rights or public health. In some studies, larger economic impacts were associated with more unintended consequences, suggesting an important policy trade-off. Some aspects of U.S. sanctions policy, such as targeted sanctions, were generally not analyzed separately in the studies we reviewed, which could reduce the studies' applicability to contemporary policymaking.

Studies Suggest Sanctions Have Been More Effective When Implemented through an International Organization or When the Target Was Dependent on the United States

We found strong evidence, based on studies examining factors that contributed to the effectiveness of sanctions in changing behavior, that sanctions have been more effective when they were implemented through an international organization (e.g., the United Nations) or when the target had some existing dependency on or relationship with the United States (e.g., U.S. foreign aid, military support or alliance, or relatively large bilateral trade relationship). Studies using different methods, datasets, and time periods consistently found that the United States was more likely to achieve its sanctions goals when an international organization was involved or when the target had some existing dependency on or relationship with the United States.
We found some evidence, based on a smaller number of studies, that sanctions have been more effective when the target state had low per-capita income, when a country's threat of imposing sanctions was assessed to be credible, or when sanctions imposed relatively high costs on the target state. For example, one study found that the likelihood of the target's acquiescing to all of the sanctioning country's demands increased when sanctions were imposed on a target with low per-capita income. Another study found that targets were more likely to acquiesce in response to threatened sanctions when the United States had not backed down against a resisting target recently. A third study found that more-severe sanctions increased the likelihood that the sanctioning country achieved more of its goals, suggesting that sanctions imposing relatively high costs have been more effective. Our review also suggests that in some circumstances, the risk of sanctions has deterred states from undertaking activities that would likely have resulted in the imposition of sanctions. Factors that have increased the measured effectiveness of sanctions may also increase their deterrent effect. For example, two studies found that the greater the trade flows between the target state and the sanctioning country, the greater the likelihood of sanctions' success. A separate study demonstrated that this same dependency—greater trade between the target and the United States—led to greater deterrence of nuclear proliferation. More generally, states are likely to consider the risks associated with undertaking activities that could lead to the imposition of economic sanctions, among other factors. These risks include the likelihood of sanctions being imposed or removed, the states' vulnerabilities to the different types and amounts of pressure that could result from sanctions, and the consequences that the states would experience if sanctions were imposed.
See the text box for more detail on the potential risks that states that could be the target of sanctions might consider. (The text box is intended to provide a more general framework for understanding how states may anticipate and respond to sanctions; it reflects, but is not limited to, the specific factors included in the studies we reviewed.)

Risk Framework for States That May Be Targets of Economic Sanctions

Likelihood of sanctions' being imposed or removed. States that may be targets of sanctions may assess the credibility of any explicit threats to impose or maintain sanctions and the credibility of any assurances that sanctions will be removed when the activity that motivated the imposition of sanctions ceases.

Vulnerabilities to potential pressure from sanctions. States that may be targets of sanctions may assess whether the benefits of withstanding pressure that could result from the sanctions exceed the costs. For example, states may be concerned that higher economic costs from sanctions could be associated with greater impact on the material wellbeing of individuals and firms. Higher economic costs could also make it more difficult to compensate those affected by the sanctions—and those costs could be especially burdensome in states with low per-capita income. However, states likely consider not only the costs from sanctions but also the extent to which they might over time avoid or adapt to these costs. For example, if potential sanctions are likely to disrupt trade and investments from major commercial partners, states that are potential targets may examine whether developing or expanding relationships with third parties could mitigate the loss of these economic relationships.
Sanctions imposed via an international organization (e.g., a multilateral approach associated with the United Nations) may make it more difficult for targets to avoid or adapt to sanctions—for example, by finding alternative commercial partners—and may signal a more robust international consensus regarding the objectives of the sanctions.

Consequences if sanctions are imposed. States that may be targets of sanctions may assess the direct financial impact as well as future diplomatic, political, or security implications of the potential sanctions. That is, before engaging in activities that could trigger sanctions, states that depend on the United States may consider the possible impact of their actions on their future relationships with the United States in other areas, including military cooperation or the provision of aid. Conversely, states that are less dependent on the United States might anticipate fewer ongoing benefits from acquiescing to U.S. demands.

Research on the Effectiveness of Sanctions May Not Fully Reflect Certain Types of U.S. Sanctions

Two important types of U.S. sanctions—targeted sanctions and secondary sanctions—were present during the time periods covered by the studies we reviewed. However, the studies generally did not account differently for these two sanctions types than for non-targeted and primary sanctions, respectively. As a result, the studies generally did not reflect differences between the effectiveness of these types of sanctions. This limitation of the available studies could reduce the applicability of this research to contemporary policymaking.

Targeted sanctions. Targeted sanctions restrict transactions of and with specific entities and individuals, such as those who may have influence with a state's government. In response to such sanctions, the targeted actors may in turn influence their government to change its behavior.
Targeted sanctions seek to minimize impact on society at large and maintain most trade relationships with non-targeted actors in the country. However, our interpretation of studies of sanctions suggests that the targeted actors may use their influence with their government to extract concessions that compensate them for the impact of sanctions, which could limit the effectiveness of certain targeted sanctions.

Secondary sanctions. Secondary sanctions, also known as supplementary sanctions, target third-party actors doing business with, supporting, or facilitating targeted regimes, persons, and organizations. From the perspective of a third-party actor, secondary sanctions likely increase the risk involved in commercially partnering with primary sanctions targets. Thus, secondary sanctions, especially those implemented by a country as large and interconnected as the United States, may make it more difficult for primary targets to avoid or adapt to sanctions. Our interpretation of studies of sanctions suggests that the effects of secondary sanctions imposed by the United States could be similar to the effects of sanctions imposed with a large or multilateral coalition through an international organization, since sanctions imposed through an international organization also increase the difficulty of finding alternative commercial partners. However, our interpretation of the studies suggests that if secondary sanctions were imposed without an international organization they would be unlikely to signal a robust international consensus regarding the sanctions' objectives, and thus may not as effectively deter their targets, or third parties, from developing alternative commercial arrangements. While the studies we reviewed generally did not separately analyze targeted or secondary sanctions, states remain likely to consider the risks associated with undertaking activities that could lead to the imposition of these sanctions and sanctions in general.
With respect to targeted and secondary sanctions, states—both primary targets and third-country actors—are likely to consider, among other things, the risks associated with undertaking activities that could result in targeted or secondary sanctions and the consequences they would experience if targeted or secondary sanctions were imposed.

Studies Suggest Comprehensive Sanctions Have Had Larger Economic Impacts but Could Also Yield More Unintended Consequences

We found strong evidence, based on studies examining factors that increased the economic impact of sanctions, that sanctions' economic impacts on targets have generally been greater when the sanctions were more comprehensive or were imposed through an international organization. For example, one study found that UN sanctions had an adverse impact on target countries' economic growth and that this impact increased with more-comprehensive sanctions. Another study found that imposing sanctions along with other countries led to reductions in both U.S. and other Group of Seven countries' bilateral trade with targeted countries. Some other studies suggest that sanctions may also have unintended consequences. For example, some studies suggest that sanctions have had a negative impact on human rights, the status of women, public health, or democratic freedoms in target countries. In addition, more frequent and comprehensive use of sanctions could encourage sanctions targets, potential targets, and their commercial partners to develop trade and financial ties that are less dependent on the United States. The extent of these unintended consequences can be proportionate to the comprehensiveness or economic impact of sanctions. As a result, the cost or comprehensiveness of sanctions could entail important policy trade-offs—that is, higher economic costs may be more coercive but may also yield greater unintended consequences.
For example, two academic studies, based on data from sanctions implemented between 1972 and 2000, found that the negative impact of sanctions on democratic and press freedoms was generally greater with more comprehensive sanctions. Two other studies found that the public health effect of sanctions depended on the costliness or economic impact of the sanctions. Targeted sanctions could, in principle, reduce the unintended consequences of sanctions by reducing economic impacts on society at large.

Agency Comments

We provided a draft of this report to Treasury, State, and Commerce for review and comment. We received technical comments from all three agencies, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Secretary of State, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612, or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) describe how the roles of the Departments of the Treasury (Treasury), State (State), and Commerce (Commerce) in implementing U.S. sanctions authorities are identified; (2) examine the extent to which U.S. agencies assess the effectiveness of sanctions; and (3) identify factors that have been shown by publicly available studies to contribute to the effectiveness of economic sanctions. To describe how Treasury's, State's, and Commerce's roles in implementing U.S.
sanctions authorities are identified, we reviewed legal authorities, including statutes and executive orders, that authorize various sanctions programs and interviewed relevant agency officials. We also discussed with Treasury, State, and Commerce officials the interagency process used in determining sanctions roles. To examine the extent to which U.S. agencies assess the effectiveness of sanctions, we interviewed officials and reviewed documents from Treasury, State, Commerce, and the Office of the Director of National Intelligence. We also obtained and reviewed agency assessments for sanctions programs related to Burundi, North Korea, Russia, and Somalia. We selected these country-based sanctions programs to obtain at least one country program with more than 200 current sanctions designations and at least one country program with fewer than 200 but more than 10 current sanctions designations as of September 2018. In addition, we included a mixture of different-size economies, based on annual gross domestic product (GDP). We used the agencies’ assessments of the selected programs to gain insight into the types of analysis conducted. To identify factors that have been shown by publicly available studies to contribute to the effectiveness of economic sanctions, we conducted a literature search for studies that examined: factors that contributed to the effectiveness of economic sanctions in changing behavior, and factors that increased the economic impact of sanctions. To identify existing studies, we used three methods. First, we conducted searches of various databases, which produced 280 studies. Second, we conducted snowball sampling, by identifying additional studies cited in papers we had already identified. Third, we asked several academic experts to validate our list of studies and recommend any additional studies that they felt met our criteria. 
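The country-program selection logic described earlier (at least one program with more than 200 current designations, at least one with more than 10 but fewer than 200, and a mix of economy sizes) can be sketched as a simple filter. This is an illustrative sketch only; the designation counts and GDP figures below are hypothetical placeholders, not data from the report.

```python
def select_country_programs(programs):
    """Pick sample programs: at least one with more than 200 designations and
    at least one with between 10 and 200, spanning small and large economies."""
    large = [p for p in programs if p["designations"] > 200]
    mid = [p for p in programs if 10 < p["designations"] < 200]
    # Sort each group by GDP so the sample spans small and large economies.
    large.sort(key=lambda p: p["gdp_usd"])
    mid.sort(key=lambda p: p["gdp_usd"])
    # Take the smallest- and largest-economy program from each group.
    picks = {large[0]["country"], large[-1]["country"],
             mid[0]["country"], mid[-1]["country"]}
    return sorted(picks)

# Hypothetical data for illustration only.
programs = [
    {"country": "Russia", "designations": 500, "gdp_usd": 1.7e12},
    {"country": "North Korea", "designations": 450, "gdp_usd": 1.8e10},
    {"country": "Iran", "designations": 300, "gdp_usd": 4.5e11},
    {"country": "Somalia", "designations": 120, "gdp_usd": 7.0e9},
    {"country": "Burundi", "designations": 40, "gdp_usd": 3.0e9},
]
```

On this placeholder data, the filter returns a four-country sample mixing large and small designation counts and economy sizes.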
To focus on recent research on the factors that contributed to the effectiveness or economic impact of economic sanctions and to target articles for detailed review, we included studies that met the following criteria:

The study evaluated the factors that contributed to the effectiveness or economic impact of sanctions.

The study included quantitative analysis of research data, which aggregated and identified patterns across many sanctions episodes.

The study was published in a peer-reviewed journal or was an academic working paper.

The study included data on U.S.-imposed bilateral or multilateral sanctions but may also have included sanctions imposed by other countries.

The study was in English.

The study was published from 2004 through October 2018. As an additional date restriction, we only included studies with at least some data from 2000 through October 2018, though the study could have included earlier data as well, in order to improve the likely relevance of the research.

The publication date restriction made it more likely that included studies would be cognizant of an important source of bias in earlier sanctions research. Prior to 2004, researchers tended to examine the impact of implemented sanctions and generally excluded cases where the threat of sanctions might have led a target to change their behavior prior to implementation. More generally, observed outcomes of implemented economic sanctions are not representative of the range of circumstances in which sanctions could be imposed, threatened, or useful for deterrence, and as a result these observed outcomes tend to understate the effectiveness of economic sanctions. Finally, to select the studies to be included in our in-depth review, we evaluated them to determine whether they met additional criteria for methodological soundness.
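The inclusion criteria above amount to a conjunction of simple checks, which can be expressed as a screening function. This is a minimal sketch; the dictionary field names are ours, chosen for illustration, and are not drawn from the report.

```python
def meets_inclusion_criteria(study):
    """Screen one candidate study against the review's selection criteria.
    `study` is a dict whose field names are illustrative, not the report's."""
    return (
        study["evaluates_effectiveness_or_impact"]      # factor analysis present
        and study["quantitative"]                       # patterns across many episodes
        and (study["peer_reviewed"] or study["working_paper"])
        and study["includes_us_sanctions"]              # U.S. bilateral/multilateral data
        and study["language"] == "English"
        and 2004 <= study["publication_year"] <= 2018   # published 2004 through 2018
        and study["latest_data_year"] >= 2000           # at least some data from 2000 on
    )
```

Studies passing this screen would then proceed to the review of additional criteria for methodological soundness.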
We assessed whether the studies used and clearly described appropriate statistical methods to adjust, or control, for factors that could influence the effectiveness or economic impact of sanctions. Additionally, we included only papers that ascribed statistical precision to modeled estimates. To validate the studies we selected for in-depth review, we requested suggestions regarding our list of studies from the following academic experts: Daniel W. Drezner, Bryan R. Early, and T. Clifton Morgan. We identified these researchers on the basis of the relevance of their publications to our objectives, the methodological impact of their contributions to the literature, and the number of citations of any relevant publications since 2009. Applying the selection criteria and the criteria for methodological soundness and incorporating the academic experts’ suggestions resulted in a list of 17 sufficiently rigorous studies, all of which had appeared in peer-reviewed journals. Ten studies were relevant to the factors that contributed to the effectiveness of economic sanctions and seven studies were relevant to the factors that increased the economic impact of sanctions. To obtain relevant context and background, we also examined additional studies related to the factors that contributed to the effectiveness of economic sanctions. These studies did not meet our criteria for inclusion in our in-depth review but provided insight into issues related to the analysis of effectiveness of sanctions and potential unintended consequences of sanctions. All of the studies that met the criteria for our in-depth review, as well as others we cited, are included in appendix II. To review the 17 studies we selected, we used a data collection instrument (DCI) designed to record each study’s research methodology, including its data, outcome measures, control variables, limitations, and analytic techniques and to summarize its major findings. 
Analysts then independently reviewed the studies and the information captured in the DCIs, reconciling any differences in their assessments through discussion. Next, we summarized the findings and categorized and aggregated the factors relevant to the effectiveness or economic impact of sanctions. We also shared a summary of our initial findings with the academic experts, who generally concurred with our findings.

We characterized factors as being supported by "strong evidence" for the purposes of our review only if at least four studies—including more than half of studies that included this factor—found it to have a statistically significant effect and no studies found a statistically significant effect with the opposite sign. We characterized factors as being supported by "some evidence" for the purposes of our review only if at least two studies—including at least half of studies that included this factor—found it to have a statistically significant effect and no studies found a statistically significant effect with the opposite sign. The studies we examined varied in the quality of their methodologies, and as a result, we could not confidently report on precise estimates of the impact of different factors on the effectiveness or economic impact of sanctions. While the statistical models used in the studies we reviewed controlled for factors that could influence the success or failure of sanctions in different circumstances, these models are also subject to some biases and imperfections. For example, studies may not have accounted for all factors that might influence the success of sanctions or may not have recognized or controlled for selection biases that influenced when and how sanctions were imposed. Finally, sanctions datasets include variables for which researchers exercised varying degrees of judgment to code accurately and consistently and which therefore may be measured with imprecision or error.
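The evidence-characterization rule above reduces to a small decision function. The sketch below is ours: the argument names and the "insufficient" label for factors meeting neither threshold are illustrative choices, not terminology from the report.

```python
def classify_evidence(n_significant, n_included, n_opposite_sign):
    """Apply the review's rule for labeling the evidence behind a factor.

    n_significant:   studies finding a statistically significant effect
    n_included:      studies that included the factor at all
    n_opposite_sign: studies finding a significant effect with the opposite sign
    """
    if n_opposite_sign > 0:          # any opposite-sign finding disqualifies
        return "insufficient"
    if n_significant >= 4 and n_significant > n_included / 2:
        return "strong evidence"     # at least 4 studies AND more than half
    if n_significant >= 2 and n_significant >= n_included / 2:
        return "some evidence"       # at least 2 studies AND at least half
    return "insufficient"
```

For example, a factor supported by five of eight studies with no contrary findings would be labeled "strong evidence," while four of eight (exactly half) would qualify only as "some evidence."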
We conducted this performance audit from May 2018 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Publicly Available Studies Reviewed

Studies Included in Literature Review

Bapat, Navin A., Tobias Heinrich, Yoshiharu Kobayashi, and T. Clifton Morgan. “Determinants of Sanctions Effectiveness: Sensitivity Analysis Using New Data.” International Interactions, vol. 39, no. 1 (2013): pp. 79-98.

Bapat, Navin A., and T. Clifton Morgan. “Multilateral Versus Unilateral Sanctions Reconsidered: A Test Using New Data.” International Studies Quarterly, vol. 53, no. 4 (2009): pp. 1075-1094.

Biglaiser, Glen, and David Lektzian. “The Effect of Sanctions on U.S. Foreign Direct Investment.” International Organization, vol. 65, no. 3 (2011): pp. 531-551.

Caruso, Raul. “The Impact of International Economic Sanctions on Trade: Empirical Evidence over the Period 1960-2000.” Rivista Internazionale di Scienze Sociali, vol. 113, no. 1 (2005): pp. 41-66.

Early, Bryan R. “Unmasking the Black Knights: Sanctions Busters and Their Effects on the Success of Economic Sanctions.” Foreign Policy Analysis, vol. 7, no. 4 (2011): pp. 381-402.

Early, Bryan R., and Robert Spice. “Economic Sanctions, International Institutions, and Sanctions Busters: When Does Institutionalized Cooperation Help Sanctioning Efforts?” Foreign Policy Analysis, vol. 11, no. 3 (2015): pp. 339-360.

Hatipoglu, Emre, and Dursun Peksen. “Economic Sanctions and Banking Crises in Target Economies.” Defence and Peace Economics, vol. 29, no. 2 (2018): pp. 171-189.

Krustev, Valentin L., and T. Clifton Morgan. “Ending Economic Coercion: Domestic Politics and International Bargaining.” Conflict Management and Peace Science, vol. 28, no. 4 (2011): pp. 351-376.

Lektzian, David, and Dennis Patterson. “Political Cleavages and Economic Sanctions: The Economic and Political Winners and Losers of Sanctions.” International Studies Quarterly, vol. 59, no. 1 (2015): pp. 46-58.

Major, Solomon. “Timing Is Everything: Economic Sanctions, Regime Type, and Domestic Instability.” International Interactions, vol. 38, no. 1 (2012): pp. 79-110.

Miller, Nicholas L. “The Secret Success of Nonproliferation Sanctions.” International Organization, vol. 68, no. 4 (2014): pp. 913-944.

Neuenkirch, Matthias, and Florian Neumeier. “The Impact of UN and US Economic Sanctions on GDP Growth.” European Journal of Political Economy, vol. 40, part A (2015): pp. 110-125.

Nooruddin, Irfan, and Autumn Lockwood Payton. “Dynamics of Influence in International Politics: The ICC, BIAs, and Economic Sanctions.” Journal of Peace Research, vol. 47, no. 6 (2010): pp. 711-721.

Peksen, Dursun. “Autocracies and Economic Sanctions: The Divergent Impact of Authoritarian Regime Type on Sanctions Success.” Defence and Peace Economics, vol. 30, no. 3 (2017): pp. 253-268.

Peksen, Dursun, and Byunghwan Son. “Economic Coercion and Currency Crises in Target Countries.” Journal of Peace Research, vol. 52, no. 4 (2015): pp. 448-462.

Peterson, Timothy M. “Sending a Message: The Reputation Effect of US Sanction Threat Behavior.” International Studies Quarterly, vol. 57, no. 4 (2013): pp. 672-682.

Shin, Geiguen, Seung-Whan Choi, and Shali Luo. “Do Economic Sanctions Impair Target Economies?” International Political Science Review, vol. 37, no. 4 (2016): pp. 485-499.

Other Studies Cited

Allen, Susan Hannah, and David J. Lektzian. “Economic Sanctions: A Blunt Instrument?” Journal of Peace Research, vol. 50, no. 1 (2013): pp. 121-135.

Drezner, Daniel W. “Sanctions Sometimes Smart: Targeted Sanctions in Theory and Practice.” International Studies Review, vol. 13, no. 1 (2011): pp. 96-108.

Drezner, Daniel W. “The Hidden Hand of Economic Coercion.” International Organization, vol. 57, no. 3 (2003): pp. 643-659.

Drury, A. Cooper, and Dursun Peksen. “Women and Economic Statecraft: The Negative Impact Economic Sanctions Visit on Women.” European Journal of International Relations, vol. 20, no. 2 (2014): pp. 463-490.

Forrer, John. “Economic Sanctions: Sharpening a Vital Foreign Policy Tool.” Atlantic Council Issue Brief. Washington, D.C.: Atlantic Council, June 2017.

Harrell, Peter. “Is the U.S. Using Sanctions Too Aggressively? The Steps Washington Can Take to Guard Against Overuse.” Foreign Affairs. September 11, 2018.

Licht, Amanda A. “Hazards or Hassles: The Effect of Sanctions on Leader Survival.” Political Science Research and Methods, vol. 5, no. 1 (2017): pp. 143-161.

Marinov, Nikolay. “Do Economic Sanctions Destabilize Country Leaders?” American Journal of Political Science, vol. 49, no. 3 (2005): pp. 564-576.

Peksen, Dursun. “Coercive Diplomacy and Press Freedom: An Empirical Assessment of the Impact of Economic Sanctions on Media Openness.” International Political Science Review, vol. 31, no. 4 (2010): pp. 449-469.

Peksen, Dursun. “Economic Sanctions and Human Security: The Public Health Effect of Economic Sanctions.” Foreign Policy Analysis, vol. 7, no. 3 (2011): pp. 237-251.

Peksen, Dursun, and A. Cooper Drury. “Coercive or Corrosive: The Negative Impact of Economic Sanctions on Democracy.” International Interactions, vol. 36, no. 3 (2010): pp. 240-264.

Wood, Reed M. “‘A Hand upon the Throat of the Nation’: Economic Sanctions and State Repression, 1976–2001.” International Studies Quarterly, vol. 52, no. 3 (2008): pp. 489-513.
Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Drew Lindsey (Assistant Director), Michael Maslowski (Analyst in Charge), Eugene Beye, Nisha Rai, Michael Hoffman, Reid Lowe, Christopher Keblitis, Grace Lui, Justin Fisher, Leia Dickerson, Michael Simon, and Julia Robertson made key contributions to this report.
Why GAO Did This Study

The United States maintains dozens of economic sanctions programs to counteract activities that threaten U.S. national interests. There are currently 20 country-based or country-related sanctions programs, according to lists of sanctions programs published by Treasury and State (see map). Additional countries may also be affected by sanctions programs that target entities regardless of their geographic location, such as counter-narcotics sanctions. Treasury, State, and Commerce, among other agencies, coordinate to implement these programs. Sanctions may place restrictions on a country's entire economy, targeted sectors of the economy, or individuals or corporate entities. Reasons for sanctions range widely, including support for terrorism, narcotics trafficking, weapons proliferation, and human rights abuses. Economic restrictions can include, for example, denying a designated entity access to the U.S. financial system, freezing an entity's assets under U.S. jurisdiction, or prohibiting the export of restricted items.

GAO was asked to review issues related to the implementation and effectiveness of economic sanctions. Among other things, this report (1) examines the extent to which U.S. agencies assess the effectiveness of sanctions, and (2) identifies factors that have been shown by publicly available studies to contribute to the effectiveness of economic sanctions. GAO reviewed documents and interviewed officials at Treasury, State, and Commerce and in the U.S. Intelligence Community. GAO also reviewed academic studies that used rigorous statistical methods to analyze the impact and effectiveness of economic sanctions across many sanctions programs.

What GAO Found

The Departments of the Treasury (Treasury), State (State), and Commerce (Commerce) each undertake efforts to assess the impacts of specific sanctions on the targets of those sanctions.
For example, Treasury and State both analyze or compile information on sanctions programs' impacts, such as on a target country's economy. In addition, Commerce assesses prospective impacts of some sanctions on targeted countries and others. According to Treasury and State officials, the agencies also use Intelligence Community assessments to gauge sanctions' impacts. However, agency officials cited several difficulties in assessing sanctions' effectiveness in meeting broader U.S. policy goals, including challenges in isolating the effect of sanctions from other factors as well as evolving foreign policy goals. According to Treasury, State, and Commerce officials, their agencies have not conducted such assessments on their own. However, they stated that agency assessments of sanctions' impacts often contribute to broader interagency discussions that examine the effectiveness of sanctions in achieving policy goals. The academic studies GAO reviewed suggest that several factors have contributed to more-effective sanctions. Studies examining factors that contribute to the effectiveness of sanctions in changing targeted countries' behavior provided evidence that sanctions have been more effective when (1) they were implemented through an international organization (e.g., the United Nations) or (2) the targeted countries had some existing dependency on, or relationship with, the United States, such as a trade or military relationship. In addition, studies examining factors that increased sanctions' economic impact provided evidence that the impact has generally been higher when the sanctions were more comprehensive in scope or severity, or—similar to the findings on effectiveness in changing behavior—were imposed through an international organization. Sanctions may also have unintended consequences for targeted countries, such as negative impacts on human rights or public health. In some studies, larger economic impacts were associated with more unintended consequences.
Background

Federal Programs that May Help College Student Parents Afford Child Care

We have previously reported that multiple federal programs provide or support early learning and child care, but the CCAMPIS program is the only one designed specifically to support the participation of low-income parents in postsecondary education by funding child care services. Education awards CCAMPIS competitive grants for up to 4 years to colleges to either support existing campus-based child care programs or establish new programs. Grant funds are primarily intended to help students who receive or are eligible to receive federal Pell Grants, but grantees may also serve low-income graduate students or low-income foreign students. Education reported that CCAMPIS grantees received about $15 million in fiscal year 2017 and about $33 million in fiscal year 2018. HHS administers other key federally funded programs that subsidize child care and may assist college students: the Child Care and Development Fund (CCDF), Temporary Assistance for Needy Families (TANF), and Head Start. CCDF is the primary source of federal funding dedicated to helping low-income families pay for child care. Parents must generally be working or attending a job training or education program to receive CCDF child care subsidies. States have flexibility to establish program eligibility criteria and other priorities within the program's broad federal requirements. According to the HHS fiscal year 2020 budget justification, the CCDF program provides about $8.2 billion in federal funds per year for child care. In fiscal year 2017, the latest year for which preliminary data were available, CCDF provided child care assistance to about 1.3 million children each month. TANF is a federal block grant to states that supports cash assistance and a variety of other benefits and services to low-income families with children.
States may use their TANF funds to directly fund child care, both for families receiving TANF cash assistance and for other low-income families in the state. In 2017, 9 percent of federal TANF funds used—or $1.5 billion—were spent directly for child care, while states spent $2.3 billion in maintenance of effort funds directly on child care, according to the HHS fiscal year 2020 budget justification. In addition, states transferred $1.3 billion in federal TANF funds to CCDF in fiscal year 2017. Head Start grants are awarded directly to public and private nonprofit and for-profit preschool and child care providers. The purpose of the Head Start program is to promote the school readiness of low-income children through the provision of educational, health, and other services. Most Head Start participants are 3- and 4-year-old children, but through the Early Head Start program, many infants and toddlers also receive early education and child care services. In fiscal year 2017, Head Start provided about $9.6 billion in grants and other services, and the program served over 1 million children.

Federal Student Aid and the Dependent Care Allowance

Under Title IV of the Higher Education Act of 1965, as amended, the federal government offers students financial assistance to help pay for their education. To be eligible for most federal student aid, a student must demonstrate financial need. Students are eligible for federal need-based aid if the cost of attending a school is more than a family's expected financial contribution. A family's expected contribution is an approximation of the financial resources a family has available to help pay for a student's postsecondary education expenses. The cost of attendance is calculated by each school using elements set forth in federal law.
In addition to expenses such as tuition, fees, and room and board, the cost of attendance may include a dependent care allowance for students who incur such costs—including for child care— while in school. Being eligible for a dependent care allowance increases the student’s total cost of attendance, which could make the student eligible for additional financial assistance. Federal student aid is awarded primarily through grants and loans. Grants: Federal Pell Grants are the primary federal grant aid available to low-income undergraduate students with financial need. The maximum allowable Pell Grant was $6,095 for the 2018-2019 school year. A student’s expected family contribution is a key determinant of Pell Grant eligibility. Federal Direct Loans: Education provides loans to undergraduate and graduate students both with and without financial need. The maximum amount an undergraduate student may borrow in federal student loans is based on the student’s year in school and dependency status (see table 1). Students are classified as either financially dependent on their parents or financially independent. Students with dependent children are categorized as independent students for the purpose of calculating federal student aid. In addition, the total amount of grants and scholarships plus the total amount of federal student loans a student receives cannot exceed the total cost of attendance at his or her school. As a result, some students may be eligible for a lower federal loan amount than the maximum allowable amount, after grant and scholarship aid are factored in. For example, if an independent, first-year student’s total cost of attendance is $20,000, and the student receives $12,000 in grant and scholarship aid, the student can take out no more than $8,000 in federal student loans, which is less than the first-year limit of $9,500. 
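The loan-limit example above can be worked through as a short calculation. The following Python sketch is a hypothetical illustration of the arithmetic described in the text, not Education's actual aid-calculation formula.

```python
def max_federal_loan(cost_of_attendance, grants_and_scholarships, statutory_limit):
    # Total grants plus loans cannot exceed the cost of attendance, so the
    # allowable loan is the lesser of the statutory limit and the remaining need.
    return min(statutory_limit, cost_of_attendance - grants_and_scholarships)

# Independent first-year student from the example in the text:
# $20,000 cost of attendance, $12,000 in grants and scholarships,
# $9,500 first-year statutory loan limit.
print(max_federal_loan(20_000, 12_000, 9_500))  # 8000
```

As the text notes, such a student could borrow no more than $8,000, which is below the $9,500 first-year limit.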
Under federal law, schools participating in federal student aid programs are required to disclose certain consumer information, including information about college costs and the availability of federal student aid. We have previously reported that schools are increasingly using their websites to share consumer information, according to Education officials. Schools must also post a tool on their websites to help students estimate their cost of attendance based on their individual circumstances.

Approximately 20 Percent of Undergraduate Students Were Parents and About Half Left School without a Degree

About 22 Percent of Undergraduate Students in 2015-2016 Were Raising Children and Many Were Single Women Working Full-Time

Student parents comprised about 20 percent of undergraduate students, and many had characteristics that Education has reported can affect their likelihood of staying enrolled in school and completing a degree, such as being a single parent and working full time. In 2015-2016, an estimated 22 percent of undergraduate students (4.3 million of 19.5 million) were parents, according to our analysis of Education's nationally representative NPSAS data. This percentage has remained close to one-quarter since 2003-2004, peaking at nearly 26 percent in 2011-2012. In addition, about 55 percent (2.4 million) were single parents and 44 percent (1.9 million) were working full-time while enrolled (see fig. 1). About 23 percent (nearly 1 million) were single parents working full-time while enrolled. In addition, undergraduate student parents in 2015-2016 were older than other students and mostly female, and a higher percentage were African-American compared to students without children. The average age of undergraduate student parents was 33, compared to 24 for all other undergraduates. A relatively small proportion of undergraduate student parents—15 percent—was age 23 or younger. Most student parents were female (71 percent).
An estimated 23 percent of undergraduate student parents were African-American, compared to 13 percent of all other undergraduates (see app. II for additional information on student parent characteristics).

A Lower Percentage of Undergraduate Parents Completed Degrees Compared to Other Students

Education data indicate that a lower percentage of undergraduate student parents earned a degree compared to students without children. According to our analysis of the 2009 BPS data—a 6-year follow-up survey of the cohort of first-time students in the 2003-2004 school year—an estimated 52 percent of undergraduate student parents left school without a degree within 6 years, compared to 32 percent of students without children (see fig. 2). Compared to students without children, a higher percentage of undergraduate student parents were enrolled in private for-profit schools, programs of two years or less, and online programs, according to NPSAS data for 2015-2016. An estimated 25 percent of undergraduate student parents were enrolled in programs taught entirely online, compared to 7 percent of all other undergraduates (see fig. 3).

Undergraduate Parents Had Fewer Financial Resources for Their Education than Students without Children

Undergraduate student parents had fewer financial resources available to fund their education than students without children, according to NPSAS data for 2015-2016. An estimated 67 percent of undergraduate student parents in 2015-2016 had an expected family contribution of zero, compared to 31 percent of students without children. Student parents also had an average expected family contribution of $9,180, compared to $17,506 for students without children. Accordingly, about half of student parents received a federal Pell Grant, compared to 35 percent of all other undergraduates. In addition, a higher percentage of student parents relied on federal student loans compared with other students.
Approximately 62 percent of undergraduate student parents used federal student loans for their education, compared to 50 percent of students without children. About half of student parents had child care expenses, in addition to their education and other living expenses. An estimated 45 percent reported paying for child care in 2015-2016, paying an average of about $490 per month (see fig. 4). An estimated 56 percent of student parents had a child age 5 or younger. However, about 60 percent of undergraduate student parents were enrolled in schools that did not offer on-campus child care for students.

CCAMPIS Grants Helped Some Low-Income Students Pay for Child Care, but Education Reported Unreliable Program Outcome Information

CCAMPIS Grants Helped about 3,300 Students at 85 Schools Pay for Child Care during the 2016-2017 School Year

CCAMPIS grantees reported that about 3,320 student parents received subsidized child care services for at least one academic term during the 2016-2017 school year, the most recent year for which performance data were available. The 85 schools that submitted CCAMPIS program data for this time period were about evenly split between 2-year (42) and 4-year (43) schools. The average amount awarded to each school for the year was approximately $182,000. Grantees reported that there were more children of CCAMPIS-eligible parents on waiting lists to receive child care services (over 4,200 children) than the number of children served by the 85 schools (about 4,000). Many of the children on waiting lists were infants and toddlers (65 percent). Most CCAMPIS participants in 2016-2017 were female and low-income undergraduate students, according to data reported by grantees. Further, most participants were undergraduates who either received or were eligible to receive federal Pell Grants (85 percent). About 10 percent were low-income graduate students. Almost 80 percent of CCAMPIS participants were female.
A majority of female CCAMPIS participants attended 2-year schools (53 percent). In contrast, most male participants were enrolled in 4-year schools (70 percent). Grantee-reported data also indicate that about half of CCAMPIS participants were single parents, although most male students served by the grant were married (78 percent). Just under half of CCAMPIS participants were white, 25 percent Hispanic or Latino, and 15 percent were Black or African-American. Grantees reported using CCAMPIS funds to subsidize a variety of child care services, either provided on-campus or in the community. Almost all grantee schools (84) reported using CCAMPIS funds to subsidize full-time child care, while 72 funded part-time child care (see fig. 5). Fewer schools funded before- or after-care or care during the evening (18 schools) or weekends (5 schools). Many grantees also reported funding parenting classes (e.g., workshops on time management and family nutrition) and meetings (e.g., student parent advisory board meetings). Grantees funded other activities with their CCAMPIS grants, such as student advising, free finals week child care, and child health screenings, according to grantee data. While some schools paid for the entire cost of child care for CCAMPIS participants, most provided partial subsidies using a sliding fee scale. Among the students that grantees reported as receiving a CCAMPIS-funded child care subsidy, over 75 percent had some out-of-pocket child care expenses (2,091 of 2,754). The median amount students paid out-of-pocket each month was about $160, after receiving about $385 per month in grant-funded subsidies. CCAMPIS grants can help schools address the demand for child care that their on-campus child care centers have not been able to accommodate. For example, prior to receiving the CCAMPIS grant, the on-campus child care center at one 2-year school on the West Coast served children age 2.5 to 5 years, according to school officials.
With CCAMPIS grant funding, officials said they were able to expand on-campus child care for school-age children (ages 5-13). They said the grant also allows the school to offer drop-in child care when local elementary schools are closed. In another case, to help meet demand for child care among student parents that its on-campus child care center could not accommodate, an official from a 4-year school in the Rocky Mountain region said the school has established relationships with approximately 20 community-based child care centers and used CCAMPIS funds to help students pay for child care provided by these off-campus centers. These CCAMPIS grantees told us they also used grant funds to offer students supportive services in addition to subsidized child care. For example, the 2-year school on the West Coast runs a family resource center that provides free baby clothes, diapers, wipes, college textbooks, and school supplies for students and their children. The 4-year school has used CCAMPIS funds to pay for a graduate student to provide home visits for student parents who have concerns about their children's development or behavior. These schools also relied on funding from other sources to support student parents. For example, officials from the 2-year school we spoke with said the school uses local funds to host weekly faculty-led playgroups and state funding to increase student parent access to food pantries and housing assistance and to host evening parenting workshops led by a marriage and family counselor.

Education Reported Unreliable Persistence and Graduation Rates among CCAMPIS Participants

In its budget justification to Congress, Education reports on the progress that CCAMPIS grantees make toward meeting the program's performance goals; however, flaws in its calculations prevented Education from reporting reliable results.
Education reports information on three performance measures for CCAMPIS participants: their persistence in school, the federal cost for each persistent student, and their graduation rate. The persistence rate for students participating in the CCAMPIS program is the percentage of program participants receiving child care services who remain in postsecondary education at the end of the academic year, according to Education's published definition. According to Education's explanation of this calculation, the measure includes any student who remained enrolled in school at the end of the school year, transferred from a 2-year to a 4-year school during the school year, or graduated during the school year. However, Education's calculations did not produce results that align with this definition of persistence; specifically, the agency's calculations did not identify students who remained enrolled until the end of the school year. Education counted a student as persisting if the grantee reported the student as enrolled and participating in the CCAMPIS program in either the fall or the winter terms and did not consider whether students were also enrolled in another term during the year. As a result, a student who was enrolled and participating in CCAMPIS during the fall term and withdrew from school during the spring term was counted as having persisted in school. Further, while Education's calculation included students who graduated at some point during the school year, it did not include students who transferred from a 2-year to a 4-year school. Using Education's definition, we recalculated the percentage of CCAMPIS participants who persisted until the end of the 2016-2017 school year. Specifically, we limited our analysis to students whom grantees reported as having participated in CCAMPIS during either the fall or winter term and persisted to the spring term.
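The recalculation approach described above can be sketched as a short routine. The record layout and field names below are illustrative assumptions, not the actual format of CCAMPIS grantee data.

```python
def persistence_rate(students):
    """Estimate the share of fall/winter CCAMPIS participants who persisted.

    students: list of dicts with boolean term flags ("fall", "winter",
    "spring") and optional outcome flags ("graduated",
    "transferred_2yr_to_4yr") -- all hypothetical field names.
    """
    # Limit the analysis to students who participated during fall or winter,
    # as in GAO's recalculation.
    participants = [s for s in students if s["fall"] or s["winter"]]
    # Count a participant as persisting if the student persisted to the
    # spring term, graduated, or transferred from a 2-year to a 4-year
    # school, consistent with Education's published definition.
    persisted = [
        s for s in participants
        if s["spring"] or s.get("graduated") or s.get("transferred_2yr_to_4yr")
    ]
    return len(persisted) / len(participants) if participants else 0.0
```

Under this rule, a student enrolled only in the fall who then withdrew would not count as persisting, which is the distinction the text draws with Education's flawed calculation.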
While Education reported a persistence rate of about 74 percent in its fiscal year 2020 budget justification to Congress, our recalculation indicated that the persistence rate was an estimated 82 percent. The flaws in Education's persistence rate calculation meant the agency also reported unreliable results for the federal cost per CCAMPIS participant who persisted in school. Given our recalculation of the persistence rate for students enrolled in both 2-year and 4-year schools, we calculated that the cost per CCAMPIS participating student who persisted during the 2016-2017 school year was about $7,550. Education reported this cost as $5,625 in its fiscal year 2020 budget justification to Congress. Education defines its graduation rate measure as the percentage of CCAMPIS program participants enrolled in 2-year schools who graduate from postsecondary education within 3 years of enrollment. According to Education's published definition of this measure, it is intended to be consistent with Education's standard graduation rate reported by all 2-year schools that receive federal student aid funds. Education does not calculate or report the graduation rate for CCAMPIS participants enrolled in 4-year schools. However, Education's calculations did not produce results that aligned with its published graduation rate definition. To correctly calculate the graduation rate, based on its definition, Education would need to track the enrollment of a cohort of CCAMPIS participating students in 2-year schools who started school in the same year. This would allow Education to follow these students over 3 years to identify how many of them graduated during this time period. Instead, Education included in its calculation students who participated in CCAMPIS at any point during a 3-year period regardless of when they first enrolled in school.
Education does not currently collect data from CCAMPIS grantees that indicate when students first enrolled in school, which it would need to accurately calculate the percentage of CCAMPIS program participants enrolled in 2-year schools who graduate within 3 years of enrollment. Education officials said that they were concerned that collecting such student enrollment information could be overly burdensome for grantees. Education officials acknowledged that they had not accurately defined this performance measure in the fiscal year 2020 budget justification to Congress. Specifically, Education officials said that although the published definition of the CCAMPIS graduation rate states it is consistent with the agency's standard graduation rate measure, program officials actually calculate something different. Officials said that because they do not collect data on when students first enroll in school, they calculated the percent of CCAMPIS participants who graduated within 3 years of receiving CCAMPIS subsidies instead. While this alternative could be used as a CCAMPIS outcome measure, Education's calculations did not align with this definition because they did not organize students into cohorts based on when they first started receiving CCAMPIS subsidies. Education has the data to do this, but would need to revise its calculations. Without either collecting the student enrollment data needed to calculate a standard 3-year graduation rate or accurately defining and calculating a different metric, Education is unable to report reliable college completion results for CCAMPIS participants. Having accurate performance measures is critical to assessing the effectiveness of the CCAMPIS program. Federal standards for internal controls state that management should ensure that measurements achieve the appropriate level of precision and accuracy for their reporting purposes.
These federal standards also state that when communicating with external parties, managers should share quality information to help the entity achieve its objectives. However, Education has not calculated a persistence rate or graduation rate that accurately reflects the CCAMPIS program's performance measures, as the agency has publicly defined them. As a result, the agency is unable to give a reliable accounting of CCAMPIS performance in its budget justification to Congress. Reporting unreliable performance information about the CCAMPIS program affects Education's ability to manage the program and Congress' ability to make informed funding and program decisions.

Little Is Known about the Extent to Which Students Access Other Key Federal Programs that Help Low-Income Families Pay for Child Care

College students may benefit from other key federal programs that fund child care services for low-income families—CCDF, TANF, and Head Start—but little is known about the extent to which they benefit.

Child Care and Development Fund: HHS does not track how many families use CCDF child care subsidies specifically to pursue postsecondary education, as this is an optional program activity, according to HHS officials. HHS tracks and reports on child care subsidy use for training and education as a broader category. For fiscal year 2016, in about 6 percent of families receiving child care subsidies a parent was enrolled in training or education, and in an additional 7 percent of families a parent was enrolled in training or education while also employed, according to state-reported data. These data also show that states differed in the extent to which parents pursuing training or education received such subsidies. For example, three states provided CCDF subsidies during an average month in 2016 to only a small number of families where a parent was not employed while pursuing education or training (less than one-half of one percent).
In contrast, two states provided CCDF subsidies to about 20 percent of families where a parent was pursuing education or training while not employed. Some states have established policies that restrict postsecondary students' access to CCDF funds, according to our analysis of an HHS report containing information on key state CCDF policies as of 2017. Specifically, four states have policies that restrict students who are pursuing postsecondary education from receiving child care subsidies. Nine additional states do not allow access to child care subsidies for full-time students, unless they also meet work requirements. For example, Arizona, Kentucky, Pennsylvania, and Washington require full-time students to work 20 hours each week in addition to attending school. States have implemented other policies that affect CCDF subsidy access for postsecondary students.

Program length: Eight states limit the length of time students may receive child care subsidies for enrollment in a postsecondary program. For example, Alabama, Kansas, New Hampshire, and Wisconsin limit postsecondary programs to 24 months.

Program type: Ten states place restrictions on the type or nature of the postsecondary program students may pursue. For example, states may limit approved programs to vocational programs. Almost all states exclude graduate-level programs.

Academic achievement: Four states have policies related to the minimum grade point average students must maintain to receive child care subsidies. For example, Illinois requires that students who do not work 20 hours per week maintain a 2.5 grade point average.

In 2016, HHS issued an informational memorandum with examples of policies and practices that could help states support parents who need child care assistance to participate in education programs. Such strategies included limiting the number of hours students were required to work and ensuring student parents are aware of child care services.
See the text box for an example of how one school reported it is using CCDF funds to assist student parents.

Example of using the Child Care and Development Fund (CCDF) to subsidize child care for student parents

New York state uses CCDF funds to offer child care subsidies to students enrolled in its State University of New York (SUNY) and City University of New York (CUNY) schools. These colleges partner with nonprofit child care providers and receive CCDF funds to provide child care subsidies to income-eligible students. Schools receive additional state funds to help pay for child care operating costs, such as staff salaries, supplies, and meals for children. A school official from one of the state's community colleges told GAO that CCDF subsidizes care during time students are in class. School officials submit students' class schedules to the state when applying for benefits on students' behalf. Students pay out of pocket for any time they elect to enroll their children in care that is in addition to scheduled class time.

However, eligible families may not receive a CCDF child care subsidy, as states often do not have sufficient funds to serve all eligible families. HHS officials said that states must prioritize three types of eligible families: families with very low incomes, families with children with special needs, and families who are experiencing homelessness.

Temporary Assistance for Needy Families: Student parents may also be eligible to receive child care subsidies from their state's TANF program, but it is unclear how many students benefit from these subsidies. HHS officials said that although they track the amount of TANF funds states use to help families pay for child care, HHS does not collect information that would allow it to identify how many families are using child care to pursue postsecondary education.
According to NPSAS data, an estimated 4 percent of undergraduate student parents reported that a member of their household received TANF assistance during either 2013 or 2014. For an example of how one school reported it is using TANF funds to assist student parents, see the text box.

Example of using Temporary Assistance for Needy Families (TANF) to subsidize child care for student parents

A TANF-funded program in Arkansas called the Career Pathways Initiative assists student parents with child care costs. This program also offers financial assistance for school-related expenses and a number of other supportive services. To access child care assistance from the Career Pathways Initiative, student parents must have an income at or below 250 percent of the poverty level or receive another state service, such as Medicaid. According to an official from one community college in the state, in order to receive a child care subsidy at that school, students must also work at least one hour per week. Research on the Career Pathways Initiative found that, of the nearly 30,000 low-income participants enrolled in Arkansas community colleges between 2006 and 2013, more than 52 percent graduated with a degree or certificate. This is more than double the 24 percent completion rate of all Arkansas community college students who did not participate in the program. Metis Associates and the Arkansas Research Center, "College Counts Evidence of Impact: A Research Analysis of the Arkansas Career Pathways Initiative," January 2018.

Head Start: Student parents may also enroll their children in Head Start programs, and some colleges have received Head Start grants or partnered with local Head Start programs to connect eligible student parents with services. HHS officials said they do not, however, collect information from Head Start grantees to identify how many grantees partner with colleges or how many Head Start grantees themselves are colleges.
They also said they cannot quantify the number of student parents with children enrolled in Head Start programs because that information is not collected by the Office of Head Start, as this is not a primary purpose of the program. At many Head Start programs—particularly those located in early learning or child care centers—services are only available on a part-day or part-week basis, which may not align with a student's school or work schedule. See the text box for an example of how one school reported it is supporting its student parents with Head Start funds.

Example of using Head Start to subsidize child care for student parents

A community college district in the Northwest that comprises two campuses has received a Head Start grant for approximately the past 25 years. According to school officials, in 2018, the district managed 9 Head Start and Early Head Start centers located across the county, including centers on each of the district's community college campuses. According to school officials, the district used Head Start funding to offer family well-being services for student parents, including helping families find housing, providing referrals for mental health counseling, and providing bus passes. In addition, the program connected families with medical and dental services. For the 2017-2018 school year, the district reported that over 88 percent of children enrolled in its Head Start centers were up-to-date on dental and medical screenings.

Websites at Many Selected Schools Did Not Publicize Information about the Option to Increase Federal Student Aid to Pay for Child Care

Students May Receive Additional Federal Student Aid to Help Pay for Child Care in Certain Circumstances

In certain circumstances, a student parent may be eligible to receive additional federal student aid to help pay for child care.
Students with dependent children in paid child care are allowed to request a dependent care allowance as part of their financial aid calculation, but whether it provides them with additional financial aid depends largely on other school costs. For example, at higher-cost schools, these students may already be eligible for the maximum amount of federal student loans before adding this allowance. In these situations, requesting a dependent care allowance would not increase the amount of federal student loans available to the student because they have already reached the maximum. At lower-cost schools, such as community colleges, costs may be low enough to allow student parents to access additional federal student loans by adding a dependent care allowance. According to our analysis of 2016 NPSAS data, an estimated 2.6 million student parents nationwide were eligible for less than the maximum allowable federal loan amount, meaning that adding a dependent care allowance might make them eligible for a higher loan amount. Figure 6 illustrates how adding a dependent care allowance can affect a student's federal student loan amount at a school with a relatively low cost of attendance. In this example, adding a $3,000 dependent care allowance to a student's cost of attendance increases the amount of federal student loans the student can borrow without exceeding the maximum amount available ($9,500 for a first-year, independent undergraduate student). At a higher-cost school, however, a student may already be eligible for the maximum possible loan amount, so adding a dependent care allowance would not affect how much the student could take out in federal student loans. Officials from seven of the 13 schools we interviewed said that adding a dependent care allowance would more likely increase the amount of federal student loans a student can borrow, rather than increase a student's access to grant or scholarship aid.
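The interaction between cost of attendance, the dependent care allowance, and the loan cap that Figure 6 illustrates can be sketched roughly as follows; all dollar figures except the $9,500 first-year independent undergraduate limit are hypothetical, and the formula is a simplification of the actual need-based aid calculation:

```python
# Simplified sketch of how a dependent care allowance can affect federal
# student loan eligibility. Only the $9,500 statutory limit comes from the
# report; other figures, and the simplified formula, are illustrative.

FIRST_YEAR_INDEPENDENT_LOAN_MAX = 9_500  # statutory annual limit

def loan_eligibility(cost_of_attendance, other_aid, dependent_care_allowance=0):
    """Loans cannot exceed remaining cost: cost of attendance (including
    any dependent care allowance) minus other aid, capped at the max."""
    remaining_cost = cost_of_attendance + dependent_care_allowance - other_aid
    return max(0, min(FIRST_YEAR_INDEPENDENT_LOAN_MAX, remaining_cost))

# Low-cost school: a $3,000 allowance raises the loan ceiling.
print(loan_eligibility(8_000, 0))                                   # 8000
print(loan_eligibility(8_000, 0, dependent_care_allowance=3_000))   # 9500

# Higher-cost school: the student is already at the statutory cap,
# so the allowance does not change the available loan amount.
print(loan_eligibility(20_000, 0))                                  # 9500
print(loan_eligibility(20_000, 0, dependent_care_allowance=3_000))  # 9500
```

The sketch makes the report's point concrete: the allowance only matters when the student's remaining cost of attendance was below the statutory cap to begin with.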
However, school officials we interviewed who recently added dependent care allowances to students' financial aid calculations said that students with a dependent care allowance may, in some cases, receive additional grants from the state or school. Officials at most of the 13 schools we contacted said they receive relatively few requests for a dependent care allowance, generally ranging from zero to 47 in 1 year. Officials at the eight schools that had included this allowance in recent years reported different ways of determining the amount of the allowance. At two of these schools, officials said they allot a fixed amount for the dependent care allowance. Officials at the other six schools said allowance amounts are flexible and based on students' documented child care expenses, and can vary depending on the number of children in child care.

Fixed. One school in the Northwest surveys local child care providers annually to determine the community standard rate each year and bases its dependent care allowance amount on the average market value in the area, according to a school official. This rate is two-tiered: the first tier, for children ages 0-5, is $552 per month, and the second, for children ages 6-12, is $276 per month. At a school in the Midwest, an official said that the school provides a fixed allowance amount to all students who indicate they have a dependent child on the Free Application for Federal Student Aid (FAFSA). The allowance amount is based on the student's enrollment status (e.g., $900 per school year for a student enrolled full-time). The school included a dependent care allowance for 30 percent of students who received financial aid in 2017-2018, according to a school official.

Flexible. At one school in the South, students can request a dependent care allowance based on their actual child care expenses, according to a school official.
Financial aid officials at the school use their judgment to determine if the request is reasonable for the community and may request documentation for requests exceeding $2,500 per semester. For example, students with more than one child may spend more than $2,500 per semester on child care. An official at another school in the West said it does not set limits for the allowance, but financial aid counselors use their judgment to counsel students if the requested amount looks too high for the student’s circumstances. The average allowance amount at this school is between $600 and $1,000 per month. Not all students may want to increase their student loans to finance their child care costs while in school, but access to additional federal student loans could be a useful option for those students who need it. We previously reported that officials at a national association of community colleges said that low-income students often use federal loans to help them pay for basic living expenses. These loans can be a valuable resource for some students who need additional funds to support themselves while in college, but some school officials cautioned that loans may not be the best choice for all students, and may worsen the financial position of already vulnerable students. However, two recent studies of 2-year students examined how federal financial aid improved students’ college outcomes. One study found that federal financial aid helped reduce the drop-out risk for some students, while another study found that students who received federal student loans had completed more college credits and earned higher grades than those who did not. 
Selected Schools Generally Did Not Provide Information on their Websites as of December 2018 about the Option to Increase Federal Student Aid to Help Pay for Child Care

About two-thirds of the college websites we reviewed (40 of 62) did not include information about the option to include a dependent care allowance in financial aid calculations. While schools are required to post certain college cost information on their websites and inform students about the availability of financial aid, they are not required to inform students about the dependent care allowance. At 29 of these 40 schools, the average net price for a low-income student is low enough that some students may qualify for additional loan amounts with the addition of a dependent care allowance. We reviewed the websites of schools that were CCAMPIS grant recipients. As CCAMPIS grant recipients, these schools serve students with a demonstrated need for child care services, and have shown an interest in helping students with their child care needs. Given that most of these schools do not provide information online about the option to include a dependent care allowance, other schools without the same focus on student parents may be even less likely to make information about this option available. If schools are not consistently informing students about the option to access additional federal student aid, student parents who could benefit may not be aware the option exists, and therefore not apply for additional aid that could help them pay for child care. Among the 22 schools that did provide information about the dependent care allowance on their websites, we found that the details they provided varied. They ranged from a general statement on the existence of the allowance to explicit instructions on how to request it, and, in some cases, the specific dependent care allowance amounts the school would provide.
Three of the 22 schools that discussed the dependent care allowance on their websites did not post any instructions on how to add the allowance to financial aid calculations, and the instructions posted on the other 19 school websites varied. Such instructions included directing students to contact the financial aid office or to submit a financial aid award appeal. Among the 13 schools at which we conducted interviews, six schools included information about the dependent care allowance on their websites and seven did not. Officials at two schools that publicized the allowance on their websites said that the schools also took other steps to inform current students about the dependent care allowance. For example, one school official said the school references the allowance in emails to students about the on-campus child care center. Officials at the other schools—including the seven schools that did not include information on their websites—said that they did not use any other method to inform current students about the dependent care allowance. Further, none of the 13 schools made information publicly available to prospective students through anything other than the school's website, according to school officials. Additionally, although not generalizable, schools that used their websites to inform student parents about the option to include a dependent care allowance were more likely to have provided this allowance in recent years. All six schools that provided dependent care allowance information online also reported including this allowance in the financial aid calculations of some students in recent years. Of the seven schools that did not include dependent care information online, just two reported that they had provided any dependent care allowances in recent years.
Education uses its Federal Student Aid (FSA) Handbook—a comprehensive annual guide to regulatory and administrative requirements for federal student aid programs—to instruct school financial aid officials on how to incorporate the dependent care allowance in a student's financial aid calculations. However, the handbook does not encourage schools to make information readily available to students via school websites about the option to increase federal student aid to help pay for child care or what steps they need to take to request it. Posting this information on school websites would make it more easily accessible to students, including prospective students who may not have access to publications located on campus. Education has used its handbook to encourage schools to adopt other suggested practices, such as informing students about how to save money on textbooks by either renting them or purchasing used copies. Moreover, Education officials said that they could include language in the handbook encouraging schools, as a best practice, to include information about the dependent care allowance on school websites along with other college cost information. Federal standards for internal control state that management should consider the availability of information and the extent to which information is readily available to the intended audience. Because the dependent care allowance can affect how much financial aid a student can access, making this information accessible on school websites would help ensure enrolled and prospective students are aware of all of their financial aid options.

Conclusions

Student parents face many obstacles to completing college, including paying for child care, and are less likely to complete school than students without children. The CCAMPIS program offers financial assistance that can provide key support to help student parents complete college.
However, because Education is not accurately calculating its CCAMPIS performance measures, the agency is not reporting reliable information on program outcomes. As a result, it is difficult for Education and Congress to evaluate the effectiveness of the program and make informed funding and program decisions. Federal student aid can be an important resource available to help student parents—who have fewer financial resources than other students—pay for child care while enrolled in school, but only if students are aware of the option to increase aid to help cover child care costs. Without information made widely available on school websites, student parents who could benefit may not know they can obtain additional aid. Moreover, the challenges this population faces in completing college make it especially important that they know about the types of assistance available to them. This information is particularly important for prospective students as they consider costs among different schools. Encouragement from Education for schools to provide information about the dependent care allowance on their websites could offer student parents more complete information about the financial aid resources available to them and how to request additional aid that could ultimately help them remain in school and graduate.

Recommendations for Executive Action

We are making the following three recommendations to Education.

The Assistant Secretary for Postsecondary Education should correctly calculate its CCAMPIS program persistence rate and cost per persisting student measures. (Recommendation 1)

The Assistant Secretary for Postsecondary Education should either collect the CCAMPIS participant enrollment data needed to calculate a standard 3-year graduation rate or accurately define and calculate a different college completion measure.
(Recommendation 2)

The Chief Operating Officer of Federal Student Aid should encourage schools—through appropriate means, such as the FSA Handbook—to inform students via school websites about the availability of the dependent care allowance and how to request the allowance. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS and Education for review and comment. HHS provided technical comments, which we incorporated as appropriate. Education provided written comments, which are reproduced in appendix III. In its comments, Education stated that the report inaccurately characterizes the CCAMPIS performance data as unreliable and disagreed with the recommendation to correctly calculate its CCAMPIS persistence rate and cost per persisting student measures (recommendation 1). Regarding this first recommendation, Education acknowledged one error in its persistence rate calculation that affected the accuracy of both the persistence rate and cost per persisting student measures that it reported in its fiscal year 2020 budget justification to Congress. The agency noted that it plans to correct this error in its fiscal year 2021 budget justification. In addition, Education stated that it would explore a different model for calculating the persistence rate. While we support Education's plans to correct the error it acknowledged and explore another model for calculating the persistence rate, Education's persistence rate calculation has additional errors that the agency needs to correct to accurately calculate the CCAMPIS program's persistence rate. For example, as we stated in the report, Education's calculations did not include students who transferred, which the agency has reported should be included in its persistence rate measure. Moreover, we identified other technical errors in the numerator and denominator of Education's formulas.
For example, when calculating the persistence rate for CCAMPIS participants, Education counted students who declined to participate in the CCAMPIS program. We continue to believe that it is important for Education to report reliable program information to oversee and monitor the program and to provide accurate information to Congress. To do this, Education needs to take additional action to address all of the errors in its persistence calculations. Education disagreed with the recommendation to collect the CCAMPIS participant enrollment data needed to calculate a standard 3-year graduation rate or accurately define and calculate a different college completion measure (recommendation 2). Education stated that it could address our concerns with a modification to the description of the measure published in the agency’s budget justification. Specifically, Education said it plans to clarify that, for graduation rate data published for fiscal year 2020 and prior years, the term “within 3 years of enrollment” means within 3 years of enrolling in the CCAMPIS program. However, as we stated in the report, Education’s calculations do not align with this measure either. As for future years, the agency stated that it will explore transitioning to a new model of tracking CCAMPIS students over time, which, as described, would be consistent with Education’s standard graduation rate. However, Education noted that it must carefully balance the need to collect more informative and reliable data from grantees with the need to avoid adding unnecessary reporting burdens. We recognize that collecting the enrollment data needed to calculate the standard graduation rate could place a burden on grantee schools. Our recommendation therefore gives Education the option to define a different college completion measure and calculate it correctly. 
We continue to believe that Education needs to take steps to either collect the necessary enrollment data to calculate a standard 3-year graduation rate or correctly calculate a modified college completion measure. Education disagreed with the recommendation to encourage schools— through appropriate means, such as the FSA Handbook—to inform students via school websites about the availability of the dependent care allowance and how to request the allowance (recommendation 3). Education stated that it believes it would be inappropriate to indiscriminately encourage all schools to encourage student parents to borrow additional loans without considering a student’s individual financial circumstances. We did not suggest that schools should encourage all student parents to borrow additional loans to pay for child care. Instead, we recommended that Education encourage schools to make students aware of this potential option—which federal law makes available to students—to allow them to make informed financial decisions based on their personal circumstances. We made this recommendation because we found that schools were not consistently sharing information with students about the dependent care allowance or how to request one. We further recognized in the report that not all students may want to increase their student loans to finance their child care costs while in school; however, access to additional federal student loans could be a useful option for those students who may need it, so we believe students should be aware of this potential option. Education also stated that it would be inappropriate for the agency to require schools to take actions that could erode their student loan repayment and default rates. We did not recommend that Education require schools to take any action; rather, we recommended that Education encourage schools to inform students about a potentially available federal resource. 
In addition, Education did not provide any evidence that being aware of or using the dependent care allowance would negatively affect student loan repayment or default rates. Further, access to additional financial resources can help some students succeed in school if it allows them to work less and study more. For example, as cited in the report, recent research suggests that additional federal financial aid, including student loans, can lead to improved academic outcomes for some students. Education also expressed concerns about students borrowing more and noted there are numerous federal, state, local, and private options that offer low-income students affordable or no-cost child care. Education noted that the federal Child Care and Development Fund (CCDF) provides significant resources for student parents. However, as we noted in the report, some states either fund very few families pursuing education or training or have implemented policies that restrict access to CCDF subsidies for college students. Education also noted that many colleges, as well as countless faith-based organizations offer affordable or no-cost child care to low-income students. However, we found that nearly half of student parents reported paying for child care, with costs averaging about $490 per month. Moreover, even colleges that received a CCAMPIS grant had significant waiting lists for assistance and reported more children on waiting lists for CCAMPIS assistance than children receiving subsidized care from the CCAMPIS program. Finally, Education noted that the Federal Student Aid Handbook already contains information about the dependent care allowance and its inclusion in students’ financial aid calculations. While the handbook does include information to help school financial aid administrators implement a dependent care allowance appropriately, it is not a resource directed at student parents. 
For this reason, we recommended that Education encourage schools to take steps to inform students about the dependent care allowance and how to request one. We continue to believe that it is important for Education to encourage schools to inform student parents about the availability of the dependent care allowance and how to request it. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Secretary of Education, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines: 1) What is known about the characteristics and degree completion of undergraduate students with dependent children? 2) What is known about the Child Care Access Means Parents in School grant program and how reliable is Education's reported outcome information? 3) What is known about student parent access to other key federal programs that help low-income families pay for child care? 4) To what extent do selected schools that serve student parents publicize information about the option to increase federal student aid to help pay for child care? This appendix provides details of the data sources used to answer these questions, the analysis we conducted, and any limitations to our analysis.
Student Parent Characteristics and Degree Completion

To examine the characteristics and degree completion of undergraduate student parents, we analyzed data from the Department of Education's (Education) National Postsecondary Student Aid Study (NPSAS) from the 2015-2016 school year, the most recent year available. NPSAS data contain nationally representative, detailed demographic and financial aid data for college students enrolled in postsecondary education programs. These data come from institutional records, government databases, and interviews with students. We also analyzed Beginning Postsecondary Students Longitudinal Study (BPS) data from 2008-2009. BPS tracks students over a 6-year period and collects both survey and transcript data. The most recently completed BPS cohort first enrolled in postsecondary education in the 2003-2004 school year. We assessed the reliability of the NPSAS and BPS data by reviewing existing information about the data and the system that produced them. We also interviewed agency officials knowledgeable about the data. We determined these data to be reliable for our purposes. Because the NPSAS and BPS data are based on probability samples, estimates are calculated using the appropriate sample weights provided, which reflect the sample design. Unless otherwise noted, all percentage estimates from the NPSAS data analysis have 95 percent confidence intervals within plus or minus 3.8 percentage points of the percent estimate, and all number estimates from the NPSAS data analysis have 95 percent confidence intervals within plus or minus 9 percent of the estimate. Similarly, all percentage estimates from the BPS data analysis have 95 percent confidence intervals within plus or minus 3.7 percentage points of the percent estimate. We compared 95 percent confidence intervals for both NPSAS and BPS data to identify statistically significant differences between specific estimates and the comparison groups.
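As a rough illustration of how survey-weighted percentage estimates and 95 percent confidence intervals of this kind are produced, the sketch below uses a simple normal approximation with an effective sample size; NCES studies such as NPSAS actually use replicate weights and specialized variance-estimation software, so every part of this sketch is illustrative rather than the studies' actual method:

```python
# Illustrative sketch only: a survey-weighted percentage estimate with a
# normal-approximation 95 percent margin of error. NPSAS/BPS variance
# estimation really uses replicate weights; the data below are invented.
import math

def weighted_percent(values, weights):
    """values: 1 if the respondent has the characteristic, else 0."""
    total = sum(weights)
    return 100 * sum(v * w for v, w in zip(values, weights)) / total

def margin_of_error(values, weights, z=1.96):
    """Normal-approximation margin using an effective sample size;
    ignores the complex sample design (clustering, stratification)."""
    p = weighted_percent(values, weights) / 100
    n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
    return 100 * z * math.sqrt(p * (1 - p) / n_eff)

values = [1, 0, 1, 1, 0, 0, 1, 0]
weights = [1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.3, 0.7]
est = weighted_percent(values, weights)
moe = margin_of_error(values, weights)
print(f"{est:.1f}% +/- {moe:.1f} points")
```

Comparing two such intervals for overlap is the kind of check the report describes for identifying statistically significant differences between estimates.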
The information collected from the interview portions of the NPSAS and BPS studies, such as the variables measuring whether students have children in paid child care and a student’s monthly child care costs, is self-reported and is not based entirely on federal determinations or cross-verified with outside sources. Students’ monthly child care costs may be prone to more error than simpler yes/no questions. Child Care Access Means Parents in School Grant Program To determine what is known about the Child Care Access Means Parents in School (CCAMPIS) grant program, we reviewed relevant program information and federal laws and regulations, and interviewed Education officials knowledgeable about the program. To provide illustrative examples of how selected colleges and universities use CCAMPIS grant funding to help students pay for child care, we interviewed officials from two schools. We selected these two schools based on expert and agency recommendations and research. We also considered level of degree program (2-year and 4-year) and geographic diversity. We also conducted descriptive analysis of the performance data that CCAMPIS grantees reported to Education for the 2016-2017 school year, the most recently available performance data at the time of our review. Education provided us with performance information from the 85 colleges and universities that received their first year of grant funding in fiscal years 2013 and 2014. At the time of our review, Education had not yet collected performance data for the 2017-2018 school year, which would be the first project year for the 62 schools that were awarded CCAMPIS grants in fiscal year 2017. Education collects annual performance information from CCAMPIS grantees using annual performance reports. Grantees report both summary information for all participating students as well as detailed information—listed separately—for each participating student.
Education officials said that they do not use the summarized participant data for performance calculations because of inconsistencies they identified in grantees’ reported data. Instead, Education uses the detailed information grantees provide for each student. These student-level data include student demographic information, the number of children served, CCAMPIS child care subsidies received, and child care fees paid. Grantees also report each student’s CCAMPIS participation and academic enrollment during four academic terms (fall, winter, spring, and summer). To assess the reliability of the CCAMPIS performance data, we reviewed related program documentation, interviewed knowledgeable agency officials, and conducted electronic data testing for missing data, outliers, and logical errors. When we reviewed the student-level data, we identified instances of incomplete and inconsistent data that affected which students could be identified as participating in the CCAMPIS program. To address these concerns, we excluded from our analysis students whom grantees reported as having 1) declined to participate in CCAMPIS for each of the four academic terms, 2) no enrollment information for any of the four academic terms, and 3) an enrollment code not included in Education’s report instructions. We discussed our methodology for identifying program participants with Education officials, who agreed with our approach. We also omitted from our analysis any student for whom grantees reported duplicate information. After these corrections, we determined that CCAMPIS student-level performance data were sufficiently reliable for the purpose of describing participant characteristics. We determined that selected summary variables reported elsewhere in grantees’ performance reports were similarly reliable for the purpose of describing child care services funded and number of children on waiting lists.
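The exclusion rules described above amount to a simple record filter. The sketch below uses hypothetical field names and enrollment codes for illustration; Education's actual report layout differs:

```python
# Sketch of the three exclusion rules plus the duplicate-record rule used to
# identify CCAMPIS participants. Field names and codes are hypothetical.

TERMS = ("fall", "winter", "spring", "summer")
VALID_CODES = {"FT", "PT", "DECLINED"}  # assumed codes from report instructions

def is_participant(record, all_records):
    codes = [record.get(term) for term in TERMS]
    declined_all = all(code == "DECLINED" for code in codes)
    no_enrollment = all(code is None for code in codes)
    invalid_code = any(code is not None and code not in VALID_CODES
                       for code in codes)
    duplicate = sum(1 for r in all_records
                    if r["student_id"] == record["student_id"]) > 1
    return not (declined_all or no_enrollment or invalid_code or duplicate)

records = [
    {"student_id": 1, "fall": "FT", "winter": "FT", "spring": "PT", "summer": None},
    {"student_id": 2, "fall": "DECLINED", "winter": "DECLINED",
     "spring": "DECLINED", "summer": "DECLINED"},       # declined every term
    {"student_id": 3, "fall": None, "winter": None,
     "spring": None, "summer": None},                   # no enrollment information
    {"student_id": 4, "fall": "XX", "winter": "FT",
     "spring": "PT", "summer": None},                   # unrecognized code
    {"student_id": 4, "fall": "FT", "winter": "FT",
     "spring": "PT", "summer": None},                   # duplicate student_id
]

participants = [r for r in records if is_participant(r, records)]
print([r["student_id"] for r in participants])  # → [1]
```

In this sketch, a duplicated student_id removes every copy of the record, one possible reading of "omitted from our analysis any student for whom grantees reported duplicate information."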
We also examined Education’s calculations underlying the CCAMPIS program’s performance measures that the agency reported in its fiscal year 2020 budget justification to Congress and assessed them against federal internal control standards related to data quality. Because of the flaws we identified in Education’s calculations, we developed our own calculations of Education’s performance measures using the 2016-2017 CCAMPIS program performance data. Student Parents’ Access to Other Key Federal Child Care Programs To examine student parents’ access to other key federal programs that assist low-income families with child care costs, we focused on the Child Care and Development Fund (CCDF), Temporary Assistance for Needy Families (TANF), and Head Start and reviewed relevant federal laws and regulations, agency guidance, and program documents. To describe the extent to which states have established CCDF program policies that limit postsecondary students’ access to child care subsidies, we summarized information published in the CCDF Policies Database Book of Tables: Key Cross-State Variations in CCDF Policies as of October 1, 2017. To provide illustrative examples of how selected schools use these programs to help college students pay for child care, we interviewed school officials from three colleges and universities that also received CCAMPIS grants. We selected schools based on expert and agency recommendations and research. We also considered level of degree program (2-year and 4-year) and geographic diversity. To assess the extent to which selected schools are publicizing information about the option to increase federal student aid to help pay for child care, we reviewed the websites of the 62 schools that first received a CCAMPIS grant in fiscal year 2017. These were the most recently awarded CCAMPIS grants at the time of our review.
In order to review comparable information across all schools, we developed a standardized data collection instrument that we used to examine the availability of information on the option to include a dependent care allowance. We developed the instrument after reviewing the websites of 22 schools and interviewing officials from four schools to learn more about their practices for informing students about the dependent care allowance. We selected these four schools because they did not include information about the dependent care allowance on their websites, students attending these schools borrowed federal student loans, and at least one-third of enrolled students were age 25 or older. We conducted our review from October through December 2018. One analyst recorded information in the data collection instrument and another analyst checked and verified it. We collected complete information for all 62 schools and analyzed the information across schools. We did not, as a part of our review of school websites, assess the schools for compliance with any laws or regulations. Instead, this review was intended to understand what information is made available to students on school websites. To better understand these 62 schools and their practices, we examined additional federal data and interviewed financial aid officials from 13 of the 62 schools to obtain additional information about school practices for incorporating the dependent care allowance into students’ financial aid calculations. The results from our website reviews and school interviews are not generalizable. We selected these schools to achieve a mix of schools that did and did not publicize the availability of the dependent care allowance on their websites, as well as degree levels (2-year and 4-year), and geographic diversity. We also considered the cost of attendance for the average low-income student, after grant or scholarship aid.
We also analyzed the characteristics of all 62 schools using 2016-2017 data, the latest available, from Education’s Integrated Postsecondary Education Data System (IPEDS), and examined the characteristics in the context of our website analysis. We also interviewed federal officials from Education about the information the agency provides to schools about the dependent care allowance. We assessed the reliability of the IPEDS data by reviewing existing information about the data and the system that produced them, and determined they were reliable for our purposes. We assessed Education’s practices against federal internal control standards for communicating with external parties. We conducted this performance audit from April 2018 to August 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Estimated Percentages of Selected Demographics for Student Parents and All Other Undergraduate Students, 2015-2016 Appendix III: Comments from the U.S. Department of Education Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Michelle St. Pierre (Assistant Director), Karissa Robie (Analyst-in-Charge), Jennifer Cook, and Marissa Jones Friedman made key contributions to this report. Also contributing to this report were Deborah Bland, Kevin Daly, Nisha Hazra, Gina Hoover, Michael Kniss, Sheila R. McCoy, Jean McSween, Brittni Milam, John Mingus, Jessica Orr, Joshua Paul, Benjamin Sinoff, and Adam Wendel.
Why GAO Did This Study Student parents face many challenges, including paying for child care, that can make it difficult for them to complete a degree. The federal government supports student parents through Education's CCAMPIS program, which provides colleges funding for child care services, and federal student aid, which can also help students pay for child care. GAO was asked to provide information on student parents and the federal programs that support these students. This report examines, among other objectives, what is known about the characteristics and degree completion of undergraduate students with children; what is known about the CCAMPIS program and how reliable Education's reported outcomes are; and to what extent selected schools publicize the option to increase federal student aid to help pay for child care. GAO analyzed 2009 and 2016 federal student data (the most recent available) and CCAMPIS program performance data, reviewed how the 62 schools that were awarded CCAMPIS grants in 2017 publicized the student aid option to help pay for child care, and reviewed relevant federal laws and regulations and agency documents. GAO interviewed officials from Education and selected schools. What GAO Found More than one in five undergraduate students were raising children, and about half of student parents left school without a degree, according to Department of Education (Education) data. In 2015-2016, an estimated 22 percent of undergraduates (4.3 million of 19.5 million) were parents. An estimated 55 percent of student parents were single parents, 44 percent were working full-time while enrolled, and 64 percent attended school part-time. Undergraduate student parents had fewer financial resources to fund their education than students without children. Nearly half of student parents reported paying for child care, with monthly costs averaging about $490. 
A higher percentage of student parents left school without a degree (52 percent) compared to students without children (32 percent) as of 2009 (the most recent data available). Education's Child Care Access Means Parents in School (CCAMPIS) program helped about 3,300 students pay child care costs for about 4,000 children in 2016-2017. Another 4,200 children were on waiting lists to receive assistance. Most CCAMPIS participants paid some child care fees after receiving subsidies—the median payment each month was about $160. Education measures participants' persistence in school and graduation rate to assess the performance of the CCAMPIS program. However, flaws in its calculations of these two measures prevented Education from reporting reliable results, making it difficult for Education and Congress to evaluate the program's effectiveness. Some student parents could be eligible to increase their federal student loans to help pay for child care by asking their schools to include an allowance for dependent care expenses in their financial aid calculations. However, schools do not always publicize this allowance to current and prospective students. GAO reviewed the websites—where schools post other college cost information—of schools serving student parents and found that about two-thirds of these websites did not mention the allowance. Schools are not required—and Education does not encourage them—to inform student parents about the allowance. As a result, eligible student parents may be unaware of this option to request additional financial support to help them complete their degree. What GAO Recommends GAO is making three recommendations to Education to correct its CCAMPIS persistence and graduation rate calculations and to encourage schools to inform students about the option to increase federal student aid to help pay for child care. Education disagreed with GAO's recommendations, but described plans to improve its performance calculations. 
GAO continues to believe additional actions are warranted.
GAO-19-268
Background DHS’s 2012 Memorandum on Addressing Allegations of Unlawful Profiling In 2012, the Secretary of Homeland Security issued a memorandum directing TSA to take a number of actions in response to allegations of profiling by behavior detection officers. These actions included, among others, working with the DHS Office of Civil Rights and Civil Liberties to (1) review, and revise as necessary, behavior detection officer training policies, training curriculum, and supervisory guidance to ensure they adequately address and train against profiling; (2) enhance data collection to facilitate appropriate supervision and monitoring of behavior detection activities; and (3) ensure passengers are aware of complaint mechanisms and ensure complaints are appropriately handled. TSA has taken some actions to address these directives. For example, TSA has revised its standard operating procedures and training materials to more clearly instruct personnel trained in behavior detection and other TSA personnel on how to avoid unlawful profiling; initiated a study to collect data on the race and national origin of passengers referred for behavior detection screening and examine whether disparities exist in the referral trends, and if so, whether these differences suggest discrimination or bias in the referral process; and issued a Management Directive establishing TSA policy and procedures for receiving, documenting, and referring passenger screening complaints resulting from the application of TSA security screening policies and procedures, including processes for all involved offices in headquarters and the field that handle passenger complaints. TSA’s Use of Behavior Detection The Aviation and Transportation Security Act established TSA as the federal agency with primary responsibility for securing the nation’s civil aviation system, which includes the screening of all passengers and property transported by commercial passenger aircraft. 
At the approximately 440 TSA-regulated airports in the United States, all passengers, their accessible property, and their checked baggage are screened prior to boarding an aircraft or entering the sterile area of an airport pursuant to statutory and regulatory requirements and TSA-established standard operating procedures. TSA began using behavior detection in 2006 as an added layer of security to identify potentially high-risk passengers. Through the end of fiscal year 2016, TSA’s behavior detection screening process was a stand-alone program that used specially trained behavior detection officers to observe passengers at the screening checkpoint and engage them in brief verbal exchanges. During this period, behavior detection officers had brief interactions with passengers in the queue leading up to the screening checkpoint. If the behavior detection officers determined during this interaction that a passenger exhibited a certain number of behavioral indicators, the behavior detection officer was to refer the passenger for additional screening or, if circumstances warranted, contact a law enforcement officer. According to TSA procedures, if a passenger was referred for additional screening, one behavior detection officer conducted a pat-down of the passenger and search of his or her personal property while another checked documents and conversed with the passenger, attempting to understand why the behavioral indicators were being displayed and continuing to look for additional behavioral indicators. If a passenger did not exhibit a certain number of additional indicators, he or she was allowed to proceed to the boarding gate. If the passenger did exhibit a certain number of additional indicators, or other events occurred, such as the discovery of a fraudulent document, the behavior detection officer was to call a law enforcement officer.
The law enforcement officer then would determine next steps, which could include questioning the passenger or conducting a criminal background check. The law enforcement officer then determined whether to release the passenger, refer the passenger to another law enforcement agency, or arrest him or her. In fiscal year 2017, consistent with the Aviation Security Act of 2016, TSA eliminated the stand-alone behavior detection officer position. TSA transferred the former behavior detection officers to serve as part of the screener workforce and began assigning them to the checkpoint to screen passengers. According to TSA officials, when screeners trained in behavior detection are assigned to a position, TSA policies and procedures permit them to use behavior detection when applicable. Furthermore, TSA’s checkpoint standard operating procedures do not currently include the use of behavior detection, as behavior detection’s use continues to be guided by its own policies established in 2016. However, some screeners trained in behavior detection continue to use behavior detection to support passenger screening canine teams as part of expedited screening. As part of this process, screeners trained in behavior detection work in conjunction with canine teams to observe passenger behavior and identify passenger behaviors that may indicate that a passenger poses a higher risk to the aviation system. Overview of Optimized Behavior Detection Training The Training and Development Division (Training Division), within TSA headquarters, oversees the development, delivery, and evaluation of training programs for TSA employees. The National Training Plan, developed annually by the Training Division and Security Operations, contains the core curriculum for screeners to meet their yearly training requirements. 
In addition, Security Operations works with the Traveler Engagement Division to develop and deliver specific training on topics such as disability profiling, racial profiling, and screening transgender persons. In August 2017, TSA began training screeners on its new behavioral indicators. TSA revised the behavioral indicators by eliminating and combining some of the indicators used to observe passenger behavior, which TSA refers to as Optimized Behavior Detection. According to TSA officials, Optimized Behavior Detection includes 36 revised behavioral indicators—which TSA pared down from a list of 96 indicators. As of January 2019, TSA officials told us that, of the approximately 43,000 screeners nationwide, a total of 2,541 screeners at 117 airports had been trained in Optimized Behavior Detection. Screeners must be trained in passenger and accessible property screening before they are eligible to attend Optimized Behavior Detection training. Upon successful completion of Optimized Behavior Detection training, screeners are permitted to utilize behavior detection in accordance with the standard operating procedures, such as when operating in conjunction with canine teams or screening airport and airline workers. In addition, screeners must complete all requirements in the National Training Plan, which includes elements of training on TSA’s mechanisms for preventing unlawful profiling. TSA’s Oversight of Behavior Detection TSA’s Security Operations is responsible for overseeing the use of behavior detection. TSA’s behavior detection policies and procedures prohibit screeners from selecting passengers for additional screening based on race, ethnicity, religion, and other factors, whether through behavior detection or other security measures.
This responsibility includes overseeing officers trained in behavior detection to ensure they conduct behavior detection without regard to race/ethnicity, color, gender/sex, gender identity, religion, national origin, sexual orientation, or disability, in accordance with constitutional, statutory, regulatory, and other legal and DHS policy requirements to protect the civil rights and civil liberties of individuals. Although the stand-alone behavior detection officer position was eliminated and the program ended in 2017, the requirement to conduct oversight and verify compliance with TSA policies still applies when behavior detection is used, such as when behavior detection is used in conjunction with passenger screening canine teams. According to TSA’s policies and procedures, supervisors must conduct oversight observations of behavior detection activities a minimum of 8 hours every 14 days to verify and document compliance with behavior detection policies, standard operating procedures, the handbook, and training, among other things, and submit a compliance checklist documenting the review to TSA Security Operations. Passenger Complaint Review and Referral Process The TSA Contact Center (TCC) is the primary point of contact for collecting, documenting, and responding to public questions, concerns, or complaints regarding passengers’ screening experience; reports and claims of lost, stolen, or damaged items; and complaints submitted by TSA employees. The TCC may refer screening complaints for resolution to other TSA headquarters offices, depending on the specific allegation. For example, complete complaints alleging violations of civil rights and civil liberties, which include allegations implicating color, race, ethnicity, gender, genetic information, national origin, religion, sexual orientation, and parental status, must be referred to the Multicultural Branch. Figure 1 describes the TCC’s complaint review process. 
TSA’s Multicultural Branch is responsible for collecting, monitoring, and adjudicating passenger complaints alleging civil rights and civil liberties violations at the passenger screening checkpoint, including complaints alleging unlawful profiling and discrimination, among other things. The Multicultural Branch receives complaints alleging civil rights and civil liberties violations from several sources within TSA including the TCC. When TCC officials determine a complete complaint involves a potential civil rights or civil liberties violation, they are to forward the complaint to the Multicultural Branch where staff are to input the complaint into a database and track the resolution of each complaint they receive. The Multicultural Branch, in consultation with Security Operations, determines whether a screener followed standard operating procedures while screening the complainant by reviewing available video of an incident or interviewing witnesses who saw the incident. Depending on the nature and severity of the allegation, TSA airport staff may also elevate the complaint and evidence to the airport’s Assistant Federal Security Director (FSD) for Screening. If the investigation finds fault with the screener, the screener’s supervisor or manager is to determine the corrective action to be taken. Corrective actions specified in TSA’s guidelines for disciplinary actions to address misconduct range from mandating that the screener take additional training to correct the behavior to terminating the screener’s employment for multiple repeat offenses or a single egregious action. Following the outcome of the complaint review and any resulting corrective actions, the TSA headquarters unit or the TSA customer support manager at the airport is to communicate the status of the resolution, if any, to the complainant— such as by using a template letter that explains TSA’s policies and procedures or issuing an apology. 
According to Multicultural Branch protocols for reviewing passenger complaints, complaints may be resolved in three ways: Closed-Administratively: If the complainant does not respond within 10 days to the Multicultural Branch’s first request for additional information, such as details on the alleged civil rights and civil liberties violation, the complaint is to be closed. Closed-No Jurisdiction: Complaints that are not within the Multicultural Branch’s jurisdiction, such as complaints involving rude and unprofessional conduct that are not related to allegations of civil rights and civil liberties violations, are to be closed and referred to other TSA offices or the TSA designated point of contact at the airport for further handling. Closed-Resolved: Following the outcome of the investigation, the Multicultural Branch is to send a letter to the complainant summarizing the allegations reviewed, explaining whether TSA procedures were followed, and in some cases, issuing an apology or informing the complainant of the type of training offered to the screener(s). The Multicultural Branch may recommend training and provide refresher training materials for distribution at the airport to the screener(s) involved, if identified, or for all screeners at the airport’s checkpoint at which the complaint originated. According to TSA officials, the Multicultural Branch recommends training when standard operating procedures for screening were not followed or when it determines that the proactive measure of refresher training would be useful. According to TSA, the designated TSA point of contact at the airport is required to verify when the training is completed.
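The three resolution outcomes above can be summarized as a small decision rule. The sketch below is an illustration only; the function and field names are hypothetical and do not reflect TSA's actual case-management system:

```python
# Sketch of the Multicultural Branch's three complaint resolution outcomes.
# All names here are hypothetical illustrations of the protocol described above.
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=10)  # complainant must respond within 10 days

def resolve_complaint(in_jurisdiction, first_contact, response_date, today):
    if not in_jurisdiction:
        # e.g., rude conduct unrelated to civil rights or civil liberties;
        # referred to another TSA office or the airport point of contact
        return "Closed-No Jurisdiction"
    if response_date is None and today - first_contact > RESPONSE_WINDOW:
        # no response to the request for additional information
        return "Closed-Administratively"
    # investigated and closed with a letter to the complainant
    return "Closed-Resolved"

today = date(2018, 3, 1)
print(resolve_complaint(False, date(2018, 2, 1), None, today))   # → Closed-No Jurisdiction
print(resolve_complaint(True, date(2018, 2, 1), None, today))    # → Closed-Administratively
print(resolve_complaint(True, date(2018, 2, 25),
                        date(2018, 2, 28), today))               # → Closed-Resolved
```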
Screeners Using Behavior Detection Receive Basic and Recurrent Training Related to Profiling, and TSA Evaluates Training Effectiveness Using the Kirkpatrick Model Screeners Conducting Behavior Detection Receive Training on TSA’s Policies and Procedures That Prohibit Unlawful Profiling Before screeners are eligible to conduct any behavior detection activities, they must first complete a 5-day Optimized Behavior Detection Basic Training course, and undergo on-the-job training at their local airport. This course includes an overview of DHS and TSA policies that prohibit unlawful profiling, and trains screeners to apply behavioral indicators to passengers without regard to race/ethnicity, color, gender/sex, gender identity, religion, national origin, sexual orientation, or disability. Participants must complete the Optimized Behavior Detection Basic Training course and pass a 40-question job knowledge test at the end of the class, in addition to completing 32 hours of on-the-job training under the supervision of an officer already trained in behavior detection. If a participant fails the job knowledge test, he or she is to receive 1 hour of remedial training before retaking the test. Screeners must pass the test within two attempts to be eligible to conduct behavior detection activities. In the four Optimized Behavior Detection Basic Training courses we attended, the training instructors covered TSA’s policies on prohibiting unlawful profiling on day one of the course, and explained that profiling passengers based on discernible traits was not only illegal, but that such practices are ineffective at identifying potentially high-risk passengers. In addition, the course manual included a copy of DHS’s 2013 memorandum defining racial profiling, which all participants were required to review.
To test their understanding of TSA policy and the Optimized Behavior Detection Standard Operating Procedures, the instructors presented various scenarios to engage participants in practicing how they would apply behavior detection at the checkpoint. The 2018 National Training Plan required behavior detection–trained screeners to complete four recurrent technical training courses related to behavior detection, including two that contain material reinforcing DHS’s and TSA’s policies prohibiting unlawful profiling. Screeners participate in each of the four interactive training courses using a computer and the courses contain knowledge checks that the participant must answer correctly before completing the training. Table 1 describes the training courses screeners trained in behavior detection are required to complete and appendix I includes a list of additional training related to unlawful profiling. TSA Evaluates Training Courses Using the Kirkpatrick Evaluation Model TSA determines the effectiveness of particular training programs using the Kirkpatrick Evaluation Model, a commonly accepted training evaluation model endorsed by the Office of Personnel Management and used throughout the federal government. In May 2018, TSA updated its training standards based on the ADDIE model, a methodology comprising five phases: Analysis, Design, Development, Implementation, and Evaluation (ADDIE). TSA uses the Kirkpatrick model as part of the evaluation stage of ADDIE. The Kirkpatrick model consists of a four-level approach for soliciting feedback from training course participants and evaluating the impact the training had on individual development, among other things. TSA conducts Levels 1 and 2 evaluations on selected training courses. Table 2 provides an overview of the Kirkpatrick model and the evaluation levels for courses related to behavior detection and unlawful profiling. 
TSA officials told us they will continue to evaluate the Optimized Behavior Detection Basic Training course and Level 3 evaluations are under development, as they roll out their training evaluation process. According to TSA’s Training Standards, a review team determines the frequency of curriculum review, which should occur at least once every 5 years. As part of this review, TSA plans to leverage data reported in evaluations at Kirkpatrick Levels 1 through 3. TSA Has Oversight Policies for Behavior Detection and Prohibits Unlawful Profiling but Does Not Specifically Assess Whether Profiling Occurs TSA’s 2016 Optimized Behavior Detection Program Handbook and Operational Oversight Compliance Guidance require supervisors to conduct routine checks of behavior detection operations to monitor compliance with standard operating procedures. TSA’s behavior detection Operational Oversight Compliance Guidance outlines seven specific assessments of behavior detection operations and includes a checklist for each assessment for managers to document completion of these routine oversight tasks. According to TSA officials, these assessments should occur when screeners use behavior detection in conjunction with canine operations and while screening airline and airport workers, among other activities. When conducting these assessments, supervisors are to conduct 1-hour observations and use detailed checklists to document how screeners trained in behavior detection perform the behavior detection in practice. For example, one checklist requires supervisors to observe how screeners trained in behavior detection monitor passenger flow and communicate with passengers while observing for behavioral indicators, such as ensuring screeners using behavior detection do not ask passengers intrusive or offensive questions, among other activities related to the use of behavior detection. 
However, our review of the oversight checklists found that they do not specifically instruct supervisors to monitor for compliance with procedures intended to prohibit unlawful profiling. According to TSA officials, TSA’s guidance and checklists do not include this type of monitoring for unlawful profiling because officials believe that the training screeners receive, adherence to the standard operating procedures, and the general supervisory oversight in place are sufficient to prevent unlawful profiling and could alert supervisors to situations where unlawful profiling occurs. However, the 2013 DHS memorandum on DHS’s policy on unlawful profiling states that each component, including TSA, should both implement specific policy and procedures on racial profiling and ensure all personnel are trained and held accountable for meeting the standards set forth in DHS policy. In addition, Standards for Internal Control in the Federal Government states that management should establish and implement activities to monitor the internal control system and evaluate the results, as well as remediate identified internal control deficiencies. Such a mechanism could be an item added to a checklist for supervisors to document, based on their observations, whether screeners selected individuals for additional scrutiny in a manner consistent with policies and procedures. Another oversight mechanism, as noted in DOJ’s guidance on the use of race and other factors, could be studying the implementation of policies and procedures that prohibit unlawful profiling through targeted, data-driven research projects.

As previously discussed, in 2013, TSA initiated a study and collected data through October 2017 on passengers referred for secondary screening to monitor compliance with policies that prohibit unlawful profiling. TSA discontinued the study and did not analyze the data collected because the stand-alone behavior detection program ended in November 2017.
As a result of not conducting the analysis, TSA does not know what the data would have shown regarding compliance with policies that prohibit unlawful profiling. TSA officials said they plan to update the behavior detection and checkpoint screening policies, procedures, and guidance during fiscal year 2019. As a part of this update, TSA officials told us they plan to include language in the standard operating procedures reinforcing the use of behavior detection simultaneously with other checkpoint duties, such as the document checker position. However, TSA officials told us they are not planning to add an oversight mechanism specific to profiling as part of the updates because, as previously noted, they believe screener training, adherence to the standard operating procedures, and general supervisory oversight are sufficient. Developing a specific oversight mechanism, such as a checklist or a data-driven study, to monitor screeners’ compliance with policies that prohibit unlawful profiling would provide TSA with greater assurance that its personnel are adhering to these policies when using behavior detection, and better position TSA to identify potential incidents of unlawful profiling.

TSA Received About 3,700 Complaints Alleging Violations of Civil Rights and Civil Liberties from October 2015 to February 2018 and Recommended Screener Training to Address Complaints

The TCC Received 3,663 Complaints Related to Passenger Screening and a Majority of the Complaints Alleged Discrimination or Profiling Based on Personal Attributes and Characteristics

The TCC received 3,663 complaints related to passenger screening alleging violations of civil rights and civil liberties from October 2015 through February 2018. These complaints are not specific to behavior detection activities and generally reflect alleged conduct occurring at the screening checkpoint through the application of screening measures.
We analyzed the 3,663 complaints and found that the majority (2,251 of 3,663) of the complaints alleged discrimination or profiling based on personal attributes and characteristics. For example, the TCC received complaints alleging discrimination that involved assertions by passengers that they had been selected for pat-downs based on race and ethnicity, among other reasons, when the passengers believed they did not trigger an alarm prompting the pat-downs. The TCC also received complaints related to passengers’ transgender identity alleging selection for additional screening because of their transgender status. Additionally, the TCC received passenger complaints alleging that screening procedures were aggressive or inappropriate for senior citizens. Table 3 provides a list of complaint types based on our analysis. In addition, appendix II provides additional detail about our content analysis of complaints alleging civil rights and civil liberties violations, and appendix III provides a list of 10 airports most often identified in the complaints. As TSA’s primary point of contact for passenger complaints, the TCC is responsible for the initial review and referral of all complaints that involve allegations of civil rights and civil liberties violations to the Multicultural Branch. According to the TCC standard operating procedures, TCC analysts review the complaints to ensure that they contain the necessary information to be considered complete, including the airport, passenger’s name, date of the incident, and description of the alleged civil rights and civil liberties violation. In addition, complaints reported over the phone or made on behalf of another person without the person’s consent are initially considered incomplete. For complaints that are not complete, the TCC sends the passenger a document request for information when the passenger has provided correct contact information. 
According to TCC officials, passengers often do not provide the correct contact information or do not respond with the necessary information to complete the complaint. TCC officials said that incomplete complaints are typically sent to the Multicultural Branch for informational purposes. Multicultural Branch officials told us that they consider information from incomplete complaints to inform the branch’s policy and training initiatives, and to improve how TSA engages with the public. From October 2015 through February 2018, the TCC referred 51 percent (1,865) of the 3,663 complaints it received to the Multicultural Branch for review. The TCC reported that 48 percent (1,764) of the 3,663 complaints did not have complete information necessary for further review, such as the airport and date of the incident. According to TSA officials, these complaints were sent to the Multicultural Branch for informational purposes. TCC’s passenger complaint data show that the remaining 1 percent (34) of the complaints were from TSA employees and were referred to other TSA offices for review.

TSA’s Multicultural Branch Reviewed More Than 2,000 Complaints and Recommended a Range of Screener Training

TSA’s Multicultural Branch receives and reviews complete complaints related to allegations of violations of civil rights and civil liberties that are referred to it from the TCC, DHS’s Office of Civil Rights and Civil Liberties, TSA’s Disability Branch, and TSA personnel at airports. From October 2015 through February 2018, the Multicultural Branch received 2,059 complaints alleging violations of civil rights and civil liberties, as shown in figure 2. Multicultural Branch officials stated that the majority of these complaints were referred from the TCC.
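As a simple consistency check (not part of GAO's methodology), the TCC disposition figures reported above can be verified with a short calculation; the counts below are taken from the report text:

```python
# TCC disposition of the 3,663 civil rights and civil liberties
# complaints received from October 2015 through February 2018.
total = 3663
referred = 1865    # referred to the Multicultural Branch for review
incomplete = 1764  # lacked information necessary for further review
employee = 34      # TSA employee complaints routed to other offices

# The three categories should account for every complaint.
assert referred + incomplete + employee == total

# Shares, rounded to whole percentages as in the report.
shares = {name: round(count / total * 100)
          for name, count in [("referred", referred),
                              ("incomplete", incomplete),
                              ("employee", employee)]}
print(shares)  # referred ~51%, incomplete ~48%, employee ~1%
```

The three counts sum exactly to 3,663, and the rounded shares match the 51, 48, and 1 percent figures in the text.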
As shown in figure 2, for 1,066 (52 percent) of the complaints, Multicultural Branch staff found indications of potential discrimination, such as instances of rude or unprofessional conduct that included the use of race or other protected characteristics. According to Multicultural Branch staff, to resolve the 1,066 complaints, they recommended a range of refresher training. Multicultural Branch staff explained that when issues are identified, their policy is to address the issues through screener training. Multicultural Branch officials reported that these trainings were provided through National Shift Briefings, which were circulated across TSA, or through training provided at a particular airport. For example:

In one of the complaint cases we reviewed, a passenger alleged profiling based on headwear. Multicultural Branch officials used camera recordings and statements from officers involved in the encounter to substantiate that screening procedure violations had occurred. As a result, Multicultural Branch officials recommended refresher training on headwear screening protocols for all screeners at the airport to review.

In another complaint case we reviewed, a passenger alleged profiling based on the use of a tribal-issued photo identification card. In response, Multicultural Branch officials sent refresher training on verifying tribal identification and the screening of Native American passengers to the TSA designated point of contact at the airport involved for distribution to TSA personnel identified in the complaint.

In a third complaint we reviewed, a passenger alleged being profiled at the screening checkpoint, without including any additional details. According to TSA officials, based on the particular allegations of the complaint and the lack of details, TSA was unable to substantiate the allegations made in the complaint.
As a result, Multicultural Branch sent National Shift Briefings on TSA’s policies and procedures that prohibit unlawful profiling and inappropriate comments to the TSA designated point of contact at the airport involved for distribution to TSA personnel identified in the complaint. As shown in figure 2, there were 993 complaints that the Multicultural Branch reviewed but did not address through training. The Multicultural Branch closed 121 of these complaints because it determined that the complainant did not provide sufficient information about the alleged civil rights and civil liberties violation for Multicultural Branch review and the complainant did not respond with additional information requested by the Multicultural Branch within 10 days. The Multicultural Branch determined that the remaining 872 complaints were not substantiated based on its review of the camera recording of the alleged incident, or were not within its jurisdiction. For the complaints not within its jurisdiction, the Multicultural Branch referred them to other TSA offices, to TSA officials at the airport or airports identified in the complaints for review, or to other federal agencies (e.g., U.S. Customs and Border Protection, Department of Transportation, or the Federal Aviation Administration) as appropriate. These complaints involved allegations of unprofessional conduct and other issues that did not involve allegations of civil rights and civil liberties violations. According to Multicultural Branch guidance, the designated TSA point of contact at the airport along with the Multicultural Branch analyst are to determine appropriate next steps for resolving complaints, such as preparing a briefing for screeners that is tailored to address the concerns raised by the complainant. TSA officials stated that resolutions to the complainant are tailored to reflect the allegation, type of inquiry conducted, and investigation of the facts and evidence underlying the complaint. 
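The Multicultural Branch disposition figures above can likewise be checked for internal consistency; the counts are taken from the report text and the sketch itself is only illustrative:

```python
# Multicultural Branch disposition of the 2,059 complaints it received
# from October 2015 through February 2018.
received = 2059
addressed_with_training = 1066  # indications of potential discrimination
not_addressed_with_training = 993

# Breakdown of the 993 complaints not addressed through training.
closed_insufficient_info = 121   # complainant did not respond in 10 days
unsubstantiated_or_referred = 872  # unsubstantiated or outside jurisdiction

assert addressed_with_training + not_addressed_with_training == received
assert (closed_insufficient_info + unsubstantiated_or_referred
        == not_addressed_with_training)

# About 52 percent of reviewed complaints led to recommended training.
share = addressed_with_training / received
print(f"{share:.0%} of reviewed complaints led to recommended training")
```

Both levels of the breakdown reconcile: 1,066 plus 993 equals 2,059, and 121 plus 872 equals 993, matching figure 2 as described.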
TSA’s responses to the complainant include, but are not limited to, apologizing for the screening experience or informing the complainant about next steps, such as the agency’s plans to address the complaint or the underlying conduct that gave rise to the complaint. For example, in a letter we reviewed, TSA apologized for the “unprofessional and inappropriate personal questions” the passenger experienced during screening, and stated that refresher training would be distributed to screeners at the airport involved. According to documentation we reviewed related to this complaint, the Multicultural Branch sent refresher training materials on avoiding inappropriate comments to the designated TSA point of contact at the airport involved. In addition, TSA’s Office of Human Capital Employee Relations reported that it took a range of disciplinary actions—from letters of reprimand to termination—for 100 screeners from October 2015 through February 2018, in part in response to passenger complaints alleging civil rights and civil liberties violations.

TSA’s Multicultural Branch Analyzes and Shares Passenger Complaint Data to Inform Screener Training

TSA’s Multicultural Branch regularly collects and analyzes data on passenger civil rights and civil liberties and discrimination complaints and their resolution status, and shares this information with TSA executive leadership, TSA airport customer service managers, and screeners in the field, among others. Multicultural Branch officials told us their staff are assigned to specific airports based on geographic region, and they continually analyze passenger complaints referred to their office from the TCC to identify trends. Staff members meet weekly to discuss trends in complaints for their geographic regions, and they review weekly, quarterly, and annual reports on the number and category of complaints referred to their office by the TCC.
In addition, Multicultural Branch officials track the resolution of the cases for which they have jurisdiction and submit this information to their senior leadership each week. Specifically, the Multicultural Branch uses a database to track complaints by type, airport, submission date, and resolution status, such as how many cases are open, closed, or resolved. Multicultural Branch officials share trends in complaints throughout TSA in several ways, including conference calls, monthly briefings, reporting metrics to TSA executive leadership, and on-site training events at airports each year. For example, Multicultural Branch officials hold monthly conference calls with customer service managers at airports to review complaint trends, upcoming on-site airport trainings, and job aids they have developed to help screeners understand issues, such as screening passengers wearing religious headwear. Multicultural Branch officials stated they also share information with screeners and supervisors through National Shift Briefings that are distributed at all airports and focus on making screeners aware of events relevant to passenger screening, such as religious observances occurring that month. According to TSA officials, the Multicultural Branch uses its analysis of passenger complaints and the results of complaint investigations to develop training aids and materials on areas where they determine screeners need more training, such as multicultural awareness or screening of transgender passengers. For example, the Multicultural Branch has developed briefings focusing on unlawful profiling and unconscious bias, which reiterated that unlawful profiling is against TSA policy, defined unconscious bias, and provided scenario-based examples. Additionally, members from the Multicultural Branch hold on-site training for screeners at selected airports each year based on complaint data analysis and other factors.
These training sessions last three days, include topics stemming from complaint data TSA has analyzed, and can include webinars, role-playing, and other forms of instruction.

Conclusions

DHS and TSA have policies prohibiting unlawful profiling—using race, ethnicity, gender, or other protected characteristics to identify passengers for additional screening—when using behavior detection, as well as other screening measures. While TSA has oversight guidance and checklists to monitor screeners’ use of behavior detection, these policies and procedures do not include a specific mechanism to monitor whether screeners may be using behavior detection to unlawfully profile passengers. Although TSA officials report that they are working to update the standard operating procedures in 2019, they currently have no plan to add a specific mechanism to monitor compliance with policies that prohibit unlawful profiling. Developing a specific oversight mechanism would provide TSA with greater assurance that screeners are adhering to such policies and help TSA identify any potential incidents of unlawful profiling.

Recommendation for Executive Action

We are making the following recommendation to TSA: The TSA Administrator should direct Security Operations to develop a specific oversight mechanism to monitor the use of behavior detection activities for compliance with DHS and TSA policies that prohibit unlawful profiling. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reproduced in appendix IV. In its comments, DHS concurred with our recommendation and described actions planned to address it. Security Operations, TCC, and the Multicultural Branch also provided technical comments, which we incorporated as appropriate.
DHS correctly noted in its letter that GAO’s analysis of civil rights and civil liberties complaints relates to every aspect of TSA’s passenger and baggage screening and is not specific to behavior detection. We agree with DHS’s observation, as this analysis provides information on what passengers alleged in their complaints and how TSA addressed them. It is important to note that the complaint data provided by TSA did not preclude behavior detection activities as a potential contributing factor to any number of the complaints submitted. With regard to our recommendation that the TSA Administrator should direct Security Operations to develop a specific oversight mechanism to monitor the use of behavior detection activities for compliance with DHS and TSA policies that prohibit unlawful profiling, DHS stated that TSA plans to take additional steps to continue to ensure behavior detection activities adhere to policies that prohibit unlawful profiling. In fiscal year 2019, TSA plans to modify existing oversight checklists used by managers and supervisors to include specific terminology for monitoring unlawful profiling. DHS estimated that this effort would be completed by September 30, 2019. This action, if fully implemented, should address the intent of the recommendation.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact William Russell at (202) 512-8777 or RussellW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Additional Training Related to Unlawful Profiling

The Transportation Security Administration (TSA) provided examples of refresher training materials that are provided to screeners on TSA’s prohibition on the use of unlawful profiling at the passenger screening checkpoint. Table 4 provides information on these materials, including the methods used to distribute the materials to screeners.

Appendix II: GAO Analysis of Complaints Submitted to the Transportation Security Administration Contact Center

From October 2015 through February 2018, the Transportation Security Administration (TSA) Contact Center (TCC) received 3,663 complaints that it classified as alleging violations of civil rights and civil liberties. Of the 3,663 complaints, the TCC received 707 complaints, or about 19 percent, by phone. Table 5 summarizes our analysis of the complaints the TCC received.

Appendix III: GAO Analysis of Complaints Submitted to the Transportation Security Administration Contact Center by Airport

From October 2015 through February 2018, the Transportation Security Administration (TSA) Contact Center (TCC) received 3,663 complaints that it classified as alleging violations of civil rights and civil liberties. The TCC received 707 of these complaints, or about 19 percent, by phone. Of the 3,663 complaints, Los Angeles International Airport was identified most often in the complaint data. Table 6 lists the 10 airports most often identified in these complaints.

Appendix IV: Agency Comments

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

William Russell, (202) 512-8777 or RussellW@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Ellen Wolfe (Assistant Director), Natalie Maddox (Analyst in Charge), Saida Hussain, and Brendan Kretzschmar made key contributions to this report.
Also contributing to the report were Alyssa Bertoni, David Dornisch, Ben Emmel, Eric Hauswirth, Susan Hsu, Tom Lombardi, Amanda Miller, Sam Portnow, Rachel Stoiko, and Adam Vogt.
Why GAO Did This Study

In 2016, TSA began using behavior detection in a more limited way to identify potentially high-risk passengers who exhibit certain behaviors it asserts are indicative of stress, fear, or deception, and refer them for additional screening or, when warranted, to law enforcement. TSA's policies and procedures prohibit unlawful profiling, i.e., screeners are prohibited from selecting passengers for additional screening based on race, ethnicity, or other factors. Allegations of racial profiling have raised questions about TSA's use of behavior detection. GAO was asked to review TSA's measures to prevent behavior detection activities from resulting in unlawful profiling. This report examines, among other things, (1) TSA's oversight of behavior detection activities and (2) the number of complaints alleging violations of civil rights and civil liberties related to passenger screening and actions taken by TSA to address them. GAO reviewed TSA policies and procedures; analyzed passenger complaint data received by TSA from October 2015 through February 2018 and actions taken to address them; and interviewed TSA officials. Complaint data we analyzed alleged conduct that occurred at the screening checkpoint and was not specific to behavior detection activities.

What GAO Found

Transportation Security Administration (TSA) policy requires managers to ensure behavior detection is conducted without regard to race or ethnicity, among other factors. TSA uses seven oversight checklists to assess whether behavior detection activities are conducted in accordance with TSA policy, such as monitoring whether screeners trained in behavior detection observe and engage passengers correctly. However, these checklists do not instruct supervisors to monitor for indications of profiling.
TSA officials stated that the training screeners receive, adherence to operating procedures, and general supervisory oversight are sufficient to alert supervisors to situations when unlawful profiling may occur. However, developing a specific mechanism to monitor behavior detection activities for compliance with policies prohibiting unlawful profiling would provide TSA with greater assurance that screeners are adhering to such policies. From October 2015 through February 2018, TSA received about 3,700 complaints alleging civil rights and civil liberties violations related to passenger screening. These complaints are not specific to behavior detection activities. The TSA Contact Center (TCC), the office that initially receives these complaints, reported that about half of the complaints did not have complete information from passengers necessary for further review, such as the airport and date of the incident. According to TCC officials, they attempt to obtain the additional information from passengers, but often the complaint does not include the correct contact information or the passenger does not respond to the TCC's request for additional information. The TCC complaint data show that the remaining 51 percent (about 1,900) of complaints were referred to the TSA Multicultural Branch, the office responsible for reviewing complaints alleging civil rights and civil liberties violations. The Multicultural Branch reported reviewing 2,059 complaints, including approximately 1,900 complaints from TCC, as well as complaints referred from other TSA offices. For about half of the complaints (1,066) the Multicultural Branch reviewed, it found indications of potential discrimination and unprofessional conduct that involved race or other factors and recommended a range of refresher training across airports or for screeners at individual airports identified in the complaints. 
TSA's Multicultural Branch Reviewed 2,059 Complaints Alleging Violations of Civil Rights and Civil Liberties from October 2015 through February 2018

What GAO Recommends

TSA should develop a specific oversight mechanism to monitor behavior detection activities for compliance with policies that prohibit unlawful profiling. DHS concurred with GAO's recommendation.
Background

Treasury borrows money by issuing Treasury securities to finance the federal deficit (i.e., the difference between current spending and revenues), which includes paying interest on outstanding debt, and refinancing maturing debt. According to Treasury’s Strategic Plan, the primary objective of its debt management strategy is to finance the government’s borrowing needs at the lowest cost over time. Treasury reports that it achieves this objective by:

issuing marketable debt with a regular and predictable framework—meaning Treasury debt managers provide the market clear and transparent information about planned issuance, and set a standard calendar of auctions of each security type;

managing its debt portfolio to mitigate “rollover risk”—the risk that it may have to refinance its debt at higher interest rates;

fostering a healthy and liquid secondary market—the marketplace in which Treasury securities are traded; and

promoting a broad and diverse investor base.

To this end, Treasury issues securities in a wide range of maturities to appeal to a broad range of investors, and in sufficient amounts to promote liquid markets so investors can easily buy and sell Treasury securities. Treasury’s regular and predictable auction framework also provides investors greater certainty and better information to plan their investments. Treasury regularly issues nominal securities that range in maturity from 4 weeks to 30 years, inflation-protected securities with 5-, 10-, and 30-year maturities, and floating rate notes (see table 1). A nominal security returns the face value of the security at maturity; an inflation-indexed security repays the principal adjusted for inflation. Floating rate notes pay interest quarterly at a rate that varies with changes in the indexed rate, such as the discount rate on the 13-week Treasury bill.
The interest rates associated with the range of maturities of the nominal securities issued by Treasury create a “yield curve,” which represents the relationship between the maturity of an asset and its yield (the interest rate paid by Treasury, or cost of borrowing). Each security has different cost and risk features for Treasury. Generally, Treasury must pay a higher interest rate for longer-dated securities to compensate buyers for waiting longer for principal to be repaid and accepting increased risk due to uncertainty about future market conditions. But longer-dated securities offer more certainty for budget planning because they lock in interest rates for the duration of the security. Similarly, as Treasury offers more of any given security, it may have to pay more interest to attract investors. However, if Treasury offers too little of a specific security given changing market demand, it could reduce the security’s liquidity in the secondary market, which would increase the interest cost Treasury must pay to compensate investors for less liquidity. The mix of securities changes regularly as Treasury issues new debt and funding needs change. Figure 1 shows the outstanding marketable debt held by the public by security type between 2005 and 2019.

Treasury typically responds to long-term increases in borrowing needs by taking the following steps:

Increasing the amount of securities offered at scheduled auctions. In 2018, Treasury increased auction sizes for securities at all maturities as borrowing needs increased. For example, Treasury increased the average size of auctions for floating rate notes by 15 percent (from about $16.2 billion in 2017 to $18.6 billion in 2018) and 3-year notes by 32 percent (from about $25.9 billion to $34.1 billion).

Increasing the frequency of scheduled auctions. For example, in 2003 and 2008, Treasury adjusted the auction calendar to include additional reopenings of 10-year notes.
More recently, Treasury added an October 5-year TIPS issue, with the first auction held on October 17, 2019.

Introducing new types of securities to offer at its auctions. For example, in 2014, Treasury introduced a 2-year floating rate note. In October 2018, Treasury began auctioning a 2-month bill. According to Treasury officials, the addition of the 2-month bill allowed Treasury to issue more bills without increasing auction sizes for existing bills beyond maximum sizes recommended by market participants.

In taking these steps, Treasury announces expected auction sizes each quarter and publicly discusses the changes well in advance.

The Treasury Market Has a Diverse Investor Base

Treasury securities are held by a wide range of investors for a variety of different reasons, including cash and liquidity management, collateral, hedging, speculation, arbitrage, and as long-term “buy and hold” investments. As shown in figure 2, these investors can be grouped into three categories:

The Federal Reserve System (Federal Reserve), the U.S. central bank, conducts monetary policy to promote maximum employment, stable prices, and moderate long-term interest rates. As part of this role, the Federal Reserve banks may buy and sell Treasury and other securities in the secondary market and roll over holdings of Treasury securities at auction as a noncompetitive bidder. The Federal Reserve is the largest individual holder of Treasury securities, and as of June 2019, held approximately $2.3 trillion in Treasury securities—or 14 percent of marketable debt held by the public.

International investors include both private investors and foreign official institutions, including central banks and government-owned investment funds. As of June 2019, foreign holdings represented 41 percent of marketable debt held by the public; about $6.6 trillion. Most foreign holdings are from official sources (63 percent according to available data), such as foreign central banks.
Domestic investors include banks, investment funds, pension funds, insurance companies, state and local governments, and individuals. As of June 2019, domestic investors held 45 percent of marketable debt held by the public; more than $7 trillion. Figure 2 shows the sectors that represent the domestic investor category.

Key Characteristics of Treasury Securities Support Reliable Demand but Changes in Policies or Market Conditions Pose Risks

Low Risk and the Ability to Easily Buy and Sell Large Volumes of Treasury Securities Support Reliable, Broad-Based Demand

The combination of the liquidity, depth, and safety of the Treasury market is unmatched in global markets. These characteristics make Treasury securities a unique and critical asset for a broad range of investors. Market participants and subject matter experts we interviewed and surveyed identified liquidity, depth, and safety as the most important characteristics of Treasury securities. As shown in figure 3, 63 of 67 market participants we surveyed from across 10 domestic sectors reported that liquidity is one of the most important characteristics, followed by depth and safety. Moreover, 55 of the 67 survey respondents cited at least two of these characteristics as the most important. Liquidity, depth, and safety are interrelated characteristics of Treasury securities (see fig. 4). For example, liquidity and depth are both related to the size of the market and the willingness of market participants to buy and sell securities at low cost. In addition, liquidity is enhanced by safety, for example by minimizing the risk that trading could be disrupted by default. Treasury securities are considered one of the safest assets in the world because they are backed by the full faith and credit of the U.S. government. The importance of these characteristics was consistent across sectors, as liquidity, depth, and safety support a variety of business practices and needs.
For example, Treasury securities serve as a close substitute to cash for financial institutions and corporate treasurers, are one of the cheapest and one of the most widely used forms of collateral for financial transactions, and are a benchmark for pricing many other financial products, such as corporate bonds, derivatives, and mortgages. In addition, international investors and experts we interviewed said that both foreign official sector and foreign private sector investors value the liquidity, depth, and safety of the Treasury market. For example, foreign central banks value the ability to buy and sell large quantities of securities to assist in managing their exchange rates and, in times of economic stress, provide foreign currency credit to their country’s businesses that borrow or trade in U.S. dollars. Officials from a foreign central bank we spoke with told us that Treasury securities are well suited for their investment needs because of the combination of the large and deep market—which accommodates high-volume transactions—and their safety and liquidity. The combination of liquidity, depth, and safety supports reliable demand for Treasury securities through changing market conditions. A diverse investor base helps to protect Treasury from large swings in interest costs due to shifts in demand from particular sectors. After liquidity, depth, and safety, the fourth most cited characteristic of Treasury securities (25 of 67 survey respondents) was the ability to purchase across the yield curve—that is, purchasing securities of various maturities to match investment needs. In addition to issuing securities at various maturities, Treasury’s strategic plan includes a goal to develop new products to increase the investor base. As previously noted, Treasury began issuing 2-month bills in October 2018. 
Market participants we surveyed said there is potential demand for (1) a new nominal security; (2) expansion of the floating rate note offerings; and (3) a zero-coupon bond. (For more information on the survey results, see appendix II.)

“An increase in global risk (political or economic) will determine flight to quality and higher allocation to Treasuries.”

Many investors are willing to accept a lower yield on Treasury securities in exchange for the liquidity, depth, and safety they provide. For example, only 14 of the 67 market participants we surveyed cited the yield of Treasury securities as one of the top three characteristics. Market participants we surveyed and interviewed emphasized that there is no true substitute for Treasury securities because other assets come with additional risks or do not have the liquidity and depth of the Treasury market. As a result, in times of economic uncertainty or stress, investors often move quickly into Treasury securities—known as a “flight to quality”—which increases demand and drives down yields.

Changes in U.S. Monetary Policy Operations, Financial Regulation, and Foreign Central Bank Needs Have Affected the Composition of Demand

While a broad and diverse investor base helps promote stability for the Treasury market as a whole, demand for Treasury securities by different types of investors fluctuates over time, reflecting changes in the investment needs of particular sectors. Since the 2007-2009 financial crisis, changes in monetary policy operations, financial regulation, and foreign central bank needs have changed the composition of demand for Treasury securities across different sectors. Figure 5 shows the overall changes in holdings of Treasury securities by the three primary investor groups—domestic investors, international investors, and the Federal Reserve. As part of its response to the 2007-2009 financial crisis, the Federal Reserve substantially increased its purchases of longer-term Treasury securities.
In turn, these purchases substantially increased the overall size and duration of the Federal Reserve’s holdings of Treasury securities (see fig. 6). From 2008 to 2014, its holdings of Treasury securities increased by 475 percent; from roughly $480 billion in 2008 to $2.7 trillion in 2014. The average duration of the holdings also increased from 2.7 years in 2007 to a high of 7.8 years in 2013. This substantial shift in the size and composition of the Federal Reserve’s holdings began in late 2008 when the Federal Reserve undertook the first of a series of large-scale asset purchase programs, often referred to as quantitative easing, to reduce long-term interest rates and improve economic conditions. The Federal Reserve’s purchases of long-dated Treasury securities, and other assets, substantially increased the size of its balance sheet and meaningfully reduced interest rates on long-term Treasury securities. One study estimated that quantitative easing reduced interest rates on 10-year Treasury securities by as much as 160 basis points (or 1.6 percentage points) (see sidebar).

Federal Funds Rate
A market-determined interest rate that banks charge each other to borrow reserves overnight.

The Federal Reserve needed a new approach to managing short-term interest rates while maintaining a large balance sheet. Therefore, in 2014, the Federal Reserve outlined a new framework it intended to adopt for implementing monetary policy when it began to increase interest rates for the first time since the financial crisis. The new operating framework entails setting two short-term interest rates to manage the federal funds rate (see sidebar). Changes in these rates are intended to influence other short-term interest rates (including rates on Treasury securities), the availability of credit, and the economy as a whole to assist the Federal Reserve in achieving its monetary policy objectives.
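The basis-point conversion cited above is simple but worth making explicit. As an illustrative sketch (the 3.5 percent starting yield is hypothetical, not a figure from the study):

```python
def basis_points_to_percentage_points(bps: float) -> float:
    """One basis point is one one-hundredth of a percentage point."""
    return bps / 100.0

# Hypothetical 10-year yield of 3.5 percent; the study's estimated
# maximum effect of quantitative easing was a 160-basis-point reduction.
yield_before = 3.5
yield_after = yield_before - basis_points_to_percentage_points(160)
print(f"{yield_before:.1f}% -> {yield_after:.1f}%")  # 3.5% -> 1.9%
```
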
In response to the improving economy, the Federal Reserve, in October 2017, began a process to slowly shrink its balance sheet by limiting the reinvestment of proceeds from maturing securities, intending to return to a smaller balance sheet and lower holdings of Treasury securities. In January 2019, however, the Federal Reserve announced that it intended to continue to operate with its post-crisis framework and would therefore evaluate the appropriate time to stop shrinking its balance sheet. In October 2019, the Federal Reserve announced that it would expand its balance sheet, through purchases of Treasury bills, to satisfy increases in the market’s demand for cash and keep the federal funds rate in its target range. As a result of these announcements, the Federal Reserve will continue to hold a much larger portfolio of Treasury securities and will therefore continue to purchase much larger quantities of Treasury securities on an ongoing basis. If economic and financial conditions warrant, the Federal Reserve has stated that it may again buy specific maturities of Treasury securities in significant amounts to influence prevailing long-term interest rates to improve economic conditions and thereby aid in achieving its monetary policy objectives. The possibility of these purchases during future periods of economic stress could increase current demand for Treasury securities among market participants, even during normal times. This could keep interest rates on Treasury securities somewhat lower than they would be otherwise.

Some Financial Institutions Changed Their Holdings of Treasury Securities in Response to Regulations Issued after the 2007-2009 Financial Crisis

The implementation of recent financial regulations and reforms in the wake of the 2007-2009 financial crisis resulted in changes in certain domestic sectors’ holdings of Treasury securities, including money market funds and banking institutions.
Money Market Fund
A money market fund is a type of mutual fund that is required by law to invest in low-risk securities. Money market funds act as intermediaries between investors seeking highly liquid, safe investments and corporate and government entities that issue short-term debt to fund operations. Money market funds typically invest in short-term, highly liquid securities, such as Treasury bills, and pay dividends that generally reflect short-term interest rates.

Money market fund reforms that took effect in 2016 resulted in a significant increase in this sector’s holdings of Treasury securities (see sidebar). This sector experienced significant volatility during the 2007-2009 financial crisis as large numbers of investors rapidly withdrew from these funds. To address this risk, the Securities and Exchange Commission (SEC) placed a number of restrictions on prime money market funds. Prime funds invest primarily in taxable short-term corporate and bank debt. The SEC regulations exempted government money market funds—which invest only in cash and U.S. government securities, including Treasury securities—from certain requirements because these assets are less risky and more liquid than other investments. Since these exemptions make government funds particularly attractive, many investors replaced prime money market fund investments with government money market fund investments (see fig. 7). Money market funds now represent one of the largest shares of Treasury securities holdings among domestic investors, holding approximately 8 percent (around $743 billion) of the domestic total as of June 2019 (excluding the Federal Reserve). The five money market funds we surveyed all reported that one of the top three ways they use Treasury securities is to comply with regulations. Following the financial crisis, U.S. and international regulators implemented reforms intended to promote a more resilient financial sector, including reforms aimed at the banking sector.
Overall, these reforms increased demand from large banking institutions for Treasury securities. The reforms strengthened global capital and liquidity standards to make banking institutions more resilient and better able to lend in the event of an economic shock. For example, through the “Liquidity Coverage Ratio,” large banking institutions are now required to ensure they can cover short-term cash needs by holding a proportionate amount of high-quality liquid assets—cash reserves, Treasury securities, or Ginnie Mae securities. Since Treasury securities are classified as part of the group of most liquid assets, they are attractive for banks looking to meet these requirements.

“Changes in bank liquidity regulations steered us to use more Treasuries in recent years.”

Overall, bank holdings of Treasury securities increased from less than 1 percent of the sector’s total assets in 2008 (just over $100 billion) to more than 3 percent (over $800 billion) as of June 2019. The five banks we surveyed all reported that one of the top three ways they use Treasury securities is to comply with regulations.

Foreign Central Bank Holdings of Treasury Securities Have Changed over Time Based on the Need to Manage Their Exchange Rates

Foreign official demand for Treasury securities—which includes foreign governments and central banks as well as government-owned investment funds—has fluctuated based on economic conditions, especially the need for foreign central banks to manage their exchange rates. After the 2007-2009 financial crisis, foreign governments increased holdings of Treasury securities from $1.5 trillion in 2007 to $4.1 trillion in 2015. In recent years, foreign governments’ accumulation of Treasury securities has slowed substantially. As of December 2018, they held about $4 trillion, or about 25 percent of all marketable Treasury securities.
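The Liquidity Coverage Ratio described above can be sketched as a simple ratio of high-quality liquid assets to projected short-term outflows. The figures below are hypothetical, and the actual rule applies detailed asset haircuts and outflow assumptions that this sketch omits:

```python
def liquidity_coverage_ratio(hqla: float, net_cash_outflows_30d: float) -> float:
    """Simplified Liquidity Coverage Ratio: high-quality liquid assets
    (such as cash reserves and Treasury securities) divided by projected
    net cash outflows over a 30-day stress period. Covered banks must
    generally keep this ratio at or above 1.0 (100 percent)."""
    return hqla / net_cash_outflows_30d

# Hypothetical bank: $120 billion in HQLA against $100 billion of
# projected 30-day net outflows.
ratio = liquidity_coverage_ratio(hqla=120e9, net_cash_outflows_30d=100e9)
print(f"LCR = {ratio:.0%}")  # LCR = 120%
```

Because Treasury securities count toward the numerator with no haircut, adding them is one of the most direct ways a bank can raise this ratio, which is why the requirement increased banks' demand for Treasuries.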
According to market participants and subject matter experts we interviewed, this slowdown does not imply a change in the nature of foreign demand for Treasury securities, but rather is a consequence of foreign central banks’ changing need for foreign reserves—many of which are held in the form of Treasury securities—to assist in managing their currencies. The U.S. dollar is the dominant currency used by foreign central banks in their official foreign exchange reserves, referred to as a reserve currency (see sidebar). As the reserve currency, foreign central banks buy and sell U.S. dollars to influence the value of their currencies to help manage their exchange rates, among other uses. To this end, foreign central banks hold Treasury securities in part because they can be converted to U.S. dollars quickly and in great quantity. Foreign central banks often act to limit the impact of exchange rate fluctuations and maintain the stability of their own currency. For example, a fall in U.S. interest rates tends to reduce the demand for dollars as private investors seek higher yielding assets abroad. In response, foreign central banks buy dollars—often investing those dollars in Treasury securities—and sell their own currency on foreign exchange markets, which reduces the demand for—and hence the value of—their own currency relative to the dollar (see fig. 8). Conversely, when U.S. interest rates began increasing in 2015, dollar-denominated assets became more attractive to private investors seeking higher yields, which increased the value of the dollar relative to other currencies. In response to this and other events, experts we spoke with highlighted the role of China in particular—the largest foreign official holder of Treasury securities—in selling Treasury securities during that time period to help stabilize its exchange rate. Because U.S.
interest rates are cyclical, foreign central bank interventions will also be cyclical, which implies their demand for Treasury securities will continue, to some extent, to vary over time so long as the U.S. dollar is a dominant reserve currency.

Treasury Market Faces Risks from Debt Limit Impasses, Rising Debt, and Changing Market Conditions That Could Compromise the Safety or Liquidity of Treasury Securities

Future changes in market conditions or policies—especially to the extent those changes significantly affect the combination of liquidity, depth, and safety of Treasury securities—could raise new and important risks to the Treasury market. Market participants we interviewed and surveyed across various sectors have raised concerns about risks that could affect demand for Treasury securities: risks from a future debt limit impasse, the sustainability of the federal debt, the dollar’s status as the primary reserve currency, and changes in the structure of the market which might affect liquidity, all of which could degrade the unique advantages of the Treasury market.

Debt Limit Impasses

Debt Limit
The debt limit is a legal limit on the total amount of federal debt that can be outstanding at one time. (31 U.S.C. §§ 3101, 3101A.) It is not a control on debt but rather an after-the-fact measure that restricts the Department of the Treasury’s authority to borrow to finance the decisions already enacted by Congress and the President.

Many market participants from all 10 sectors we surveyed and interviewed identified delays in raising (or suspending) the debt limit as potentially undermining the perceived safety of Treasury securities (see sidebar). During these times, Treasury departs from normal cash and debt management operations and takes extraordinary actions to avoid breaching the limit.
Once all of the extraordinary actions are exhausted, Treasury may not issue debt without further action from Congress and could be forced to delay payments until sufficient funds become available. Treasury could eventually be forced to default on legal debt obligations. We previously reported that delays in raising the debt limit can lead to increased borrowing costs and significant disruptions in the Treasury market. For example, there were lengthy impasses over the debt limit in 2011 and 2013. During the 2013 impasse, investors reported taking the unprecedented action of systematically avoiding certain Treasury securities (i.e., those that would mature around the dates when Treasury projected it would exhaust the extraordinary actions available). Consequently, interest rates for these securities increased dramatically and liquidity declined in the secondary market where securities are traded among investors.

“Treasury securities are held for liquidity management. It is critical that we have confidence in the timely payment of principal and interest on U.S. Treasury securities. Gamesmanship by political parties that impacts the confidence in timely payment on U.S. Treasury securities simply is not acceptable. We therefore are forced to invest in other forms of liquid securities, or to modify our participation in T-bills to avoid key dates around debt limits.”

Overall, 48 of the 67 (72 percent) investors we surveyed reported that they anticipated they would take similar action—such as avoiding purchases of securities that would mature around the affected dates and requiring higher yields for purchasing those securities—to manage potential market disruptions caused by any future debt limit impasses. A default would have devastating effects on U.S. and global economies and the public.
It is generally recognized that a default would prevent the government from honoring all of its obligations to pay for such things as program benefits; contractual services and supplies; employees’ salaries and wages and retirement benefits; and principal on maturing securities. Any disruption of these payments would have cascading effects on the economy. A default would call into question the full faith and credit of the U.S. government, and therefore immediately and significantly decrease demand for Treasury securities. Those investors who did purchase Treasury securities would demand a premium in the form of higher interest rates, to compensate for this increased risk. We have reported numerous times that the full faith and credit of the United States must be preserved. We have recommended that Congress consider alternative approaches to the current debt limit to avoid seriously disrupting the Treasury market and increasing borrowing costs. Experts have suggested replacing the debt limit with a fiscal rule imposed on spending and revenue decisions. As previously reported, Congress could consider this change as part of a broader plan to put the government on a more sustainable fiscal path.

Sustainability of the Federal Debt

Some market participants we interviewed and surveyed expressed concern that continued deterioration of the federal government’s fiscal position could negatively affect the safety of Treasury securities. We have reported that the federal government is on an unsustainable fiscal path. Over the last 10 years, debt held by the public has more than doubled; increasing from about $7 trillion in 2009 to $16 trillion in 2019. We, the Office of Management and Budget, and the Congressional Budget Office estimate that federal debt will continue to grow, surpassing its historical high of 106 percent of gross domestic product within 13 to 20 years.
Congress and the administration face serious economic, security, and social challenges that require difficult policy choices in the near term in setting national priorities and charting a path forward for economic growth. We have reported that a broad plan is also needed to put the federal government on a sustainable long-term fiscal path and ensure that the United States remains in a strong economic position to meet its security and social needs, as well as to preserve the flexibility to address unforeseen events. In August 2011, one of the major credit rating agencies, Standard & Poor’s, lowered its long-term sovereign credit rating on the U.S. from AAA to AA+, citing the United States’ rising public debt burden and greater policymaking uncertainty. The other major rating agencies have not lowered their rating of U.S. debt but continually monitor fiscal conditions and the political climate. If market participants perceive that the deteriorating fiscal outlook of the federal government could undermine the credit quality of Treasury securities, some investors could seek out alternative investments or demand a risk premium. This could further increase yields and therefore costs to Treasury. In general, larger deficits are likely to increase the yields on Treasury securities that are required by market participants, all else equal.

U.S. Dollar’s Status as Reserve Currency

Market participants and subject matter experts we interviewed emphasized the importance of the U.S. dollar’s status as the dominant global reserve currency in supporting demand for Treasury securities. So long as the U.S. dollar remains the dominant reserve currency worldwide, Treasury securities are likely to remain in high demand by foreign central banks and other investors. However, events that undermine the liquidity, safety, or depth of the Treasury market—such as debt limit impasses or concerns about fiscal sustainability—could reduce the share of U.S.
dollar assets in foreign central bank reserves. Furthermore, reduced openness of the U.S. economy in global trade or financial markets would reduce the advantages of holding U.S. dollar reserves and could similarly precipitate a shift away from the U.S. dollar toward other currencies. Such a shift would likely reduce foreign official holdings of Treasury securities and could potentially reduce demand from other sectors that use U.S. dollars for global trade and other transactions. Consequently, Treasury’s cost to borrow would likely increase.

Changing Market Structure

Secondary market trading in Treasury securities is increasingly conducted on electronic platforms. The resulting changes and innovations have led to a number of benefits for market participants, but could also introduce new risks. For example, the Treasury Market Practices Group reported in 2015 that electronic trading had arguably improved overall liquidity through enhanced order flow and competition, reducing trading costs and allowing market participants to more effectively manage risk. Many market participants we surveyed agreed. For example, a market participant we surveyed reported that increased electronification of the Treasury market made it easier to price, trade, and settle holdings. However, market participants we surveyed and interviewed also told us that there is a potential risk of reduced liquidity and increased volatility in the Treasury secondary market. Market participants attributed these potential risks to a number of different factors related to the changing structure of the market: (1) increased use of automated trading; (2) increased role of principal trading firms; and (3) post-crisis financial reforms.

Automated Trading
A subset of electronic trading that relies on computer algorithms—advanced mathematical models—to make decisions about the timing, price, and quantity of the market order.
High-frequency Trading
A subset of automated or algorithmic trading in which the trading opportunities are identified and acted upon algorithmically and executed through technology at high speeds.

Market participants we surveyed and interviewed said that automated trading—particularly high-frequency algorithmic trading (see sidebar)—may introduce operational risks that could interfere with market functioning. Automated trading relies on speeds that are beyond manual detection and intervention. Consequently, the Treasury Market Practices Group pointed out that internal controls may not be sufficient to counteract malfunctioning algorithms or algorithms reacting to inaccurate or unexpected data. For example, a malfunctioning algorithm could interfere with market functioning by creating sharp, short-lived spikes in prices as a result of other algorithms responding to an initial incorrect order.

“Our Treasury trading desk is about 50 percent smaller than it was a decade ago, and we now have nearly as many traders devoted to algorithmic and electronic market-making as traditional market-making activity.”

Market participants also noted that this type of trading may lead to more frequent episodes of volatility, making it more difficult to buy or sell Treasury securities at predictable or stable prices, particularly during periods of market stress. In one notable example, on October 15, 2014—in what has been referred to as a “flash rally”—the Treasury secondary market experienced record-high trading volumes and significant intraday volatility that could not be explained by external policy announcements or other factors. A 2015 interagency report examining the events of that day observed that as the speed of market activity increases, the Treasury market could continue to experience more frequent variations in market liquidity than in the past.
Increased Role of Principal Trading Firms

Advancements in technology, and the associated growth in high-speed electronic trading, have contributed to a shift in the composition of the types of firms actively trading and making markets in Treasury securities. Market-makers serve a crucial role in financial markets by providing liquidity to facilitate market efficiency and functioning (see sidebar). The 2015 interagency report examining the “flash rally” found that principal trading firms—proprietary trading firms that almost exclusively use automated trading strategies—conducted more than half of the trading activity on certain electronic platforms on the days reviewed. Market participants we spoke with expressed concern that some of the principal trading firms might not continue to provide liquidity in times of stress. According to the 2015 interagency report, principal trading firms tend to buy and sell frequently in small amounts, rarely holding Treasury securities beyond a day, and generally not trading on behalf of clients. Additionally, the extent of these firms’ presence in the Treasury market and the role they play is less well understood in part because they are not required to report their Treasury holdings and other financial information to the SEC that other financial institutions, such as broker-dealers and investment companies, are required to report. These firms’ holdings of Treasury securities are reflected in the Federal Reserve’s “household” category; the largest category of Treasury securities holdings among all domestic investors (excluding the Federal Reserve). As of June 2019, “households” held roughly $2 trillion in Treasury securities, up from $565 billion at the beginning of 2009—a 249 percent increase. According to Treasury, its 2018 market outreach revealed that data on the size of trades (market volume) are not transparent, which may hinder liquidity for certain securities.
In September 2019, Treasury announced that the Financial Industry Regulatory Authority, Inc. (FINRA) expects to publicly release aggregate trading volume data for the Treasury secondary market in 2020. At the same time that the number of principal trading firms increased, market participants we surveyed and interviewed told us that broker-dealers are holding a smaller inventory of Treasury securities, which they attributed to certain post-crisis financial reforms that increased the cost of holding a large inventory of securities, including Treasury securities, for broker-dealers that are part of the larger banking institutions. As discussed above, these reforms were introduced to promote a more resilient financial sector. One set of reforms requires that large banking institutions hold a certain amount of high-quality liquid assets, including Treasury securities, to cover short-term cash needs. Another bank capital regulation—the supplementary leverage ratio—requires an institution to hold a supply of capital proportionate to total assets, which includes both low-risk assets (e.g., Treasury securities) and higher-risk assets. Because there are costs for holding capital, these institutions may prefer to reduce the size of their Treasury securities portfolio for the purpose of making markets and instead expand other lines of business that offer higher returns for the same amount of capital under the supplementary leverage ratio. Broker-dealers have traditionally been the predominant market makers for customers, including foreign central banks, mutual funds, hedge funds, pension funds, and insurance companies; buying and selling Treasury securities to meet customer trading needs, which could involve maintaining a large balance sheet to be able to buy and sell in large amounts and across days.
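The supplementary leverage ratio's effect on dealer incentives can be illustrated with hypothetical numbers. The sketch below is simplified: the actual denominator, total leverage exposure, also includes off-balance-sheet items, and the figures are invented for illustration:

```python
def supplementary_leverage_ratio(tier1_capital: float, total_exposure: float) -> float:
    """Simplified supplementary leverage ratio: tier 1 capital divided by
    total leverage exposure. Unlike risk-weighted capital measures, the
    denominator counts low-risk assets such as Treasury securities at
    full value, so holding more Treasuries still requires more capital."""
    return tier1_capital / total_exposure

# Hypothetical dealer balance sheet: adding $50 billion of Treasury
# inventory raises total exposure, lowering the ratio unless the firm
# also raises more capital.
capital, exposure = 60e9, 1_000e9
before = supplementary_leverage_ratio(capital, exposure)
after = supplementary_leverage_ratio(capital, exposure + 50e9)
print(f"{before:.2%} -> {after:.2%}")  # 6.00% -> 5.71%
```

This is why, as the report notes, a dealer may prefer businesses with higher returns per unit of leverage exposure over warehousing large Treasury inventories for market-making.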
According to market participants, broker-dealers’ smaller balance sheets have resulted in reduced liquidity for certain securities and could lead to additional risks during periods of secondary market stress or volatility. A well-functioning secondary market is important to Treasury in part because rates in the secondary market ultimately affect Treasury’s borrowing costs, as investors generally demand similar rates at auction to those in the secondary market.

Market Outreach and Analysis Inform Treasury Debt Issuance Decisions but Policies Governing Key Inputs Could Be Strengthened

Treasury must regularly make important debt issuance decisions—such as what type of Treasury security to issue and in what quantities—to maintain broad-based demand and support its goal of borrowing at the lowest cost over time. Treasury officials described the steps the Office of Debt Management takes to make decisions about Treasury’s debt issuance strategy (see fig. 9). Treasury officials told us that they rely on three key inputs to help analyze financing options and inform these decisions: (1) market outreach, (2) auction and market metrics, and (3) analytical models. This is consistent with World Bank-IMF guidelines for public debt management. These guidelines highlight the importance of communicating regularly with investors, monitoring market activity, and having a strong analytical framework to inform decisions about the timing and amount of each type of security to issue. However, we found Treasury lacks policies governing some of these key inputs. Specifically, Treasury’s draft policy for bilateral market outreach does not include guidance on systematically selecting and documenting these interactions. Furthermore, Treasury does not have a policy governing important aspects of its analytical modeling, including requiring that analyses are documented and that Treasury staff follow and document appropriate quality assurance steps.
Treasury Conducts Market Outreach but Does Not Have a Policy for Bilateral Outreach

Primary Dealers
A group of banks and broker-dealers designated by the Federal Reserve Bank of New York (FRBNY) to serve as trading counterparties to the FRBNY in the implementation of monetary policy. They are also required to participate in all Treasury auctions.

Primary dealer survey and meetings. Each quarter, Treasury surveys the primary dealers and meets with half of them in person on a rotating basis to obtain estimates on borrowing, issuance, and the federal budget deficit (see sidebar). Treasury also uses the survey and meetings to obtain input on a variety of debt management discussion topics, posed in advance. For example, in April 2018 Treasury officials asked the primary dealers to comment on foreign private and official demand for Treasury securities over the short to intermediate term.

Treasury Borrowing Advisory Committee
An advisory committee composed of 15 senior officials from broker-dealers, asset managers, banks, and hedge funds.

Treasury Borrowing Advisory Committee (TBAC). Treasury and TBAC meet quarterly as part of Treasury’s quarterly refunding process (see sidebar). At these meetings, Treasury officials and the committee members discuss economic forecasts, federal borrowing needs, debt management issues, and market dynamics. For example, in January 2019, Treasury asked TBAC to examine any products or debt management practices that might expand the investor base for Treasury securities, among other things. TBAC also provides Treasury with technical assistance intended to complement Treasury’s internal analyses. For example, in 2016, TBAC members began work to develop a debt issuance model to help guide the committee’s recommendations to Treasury about how to finance the government’s borrowing needs. In November 2017, based on the modeling framework as well as other factors, TBAC recommended that Treasury increase issuance of 2-, 3-, and 5-year notes to meet higher funding needs.

Bilateral market outreach.
To reach a broader range of investors, Treasury officials and staff also communicate directly—via email, telephone, conferences, and in-person meetings—with other market participants, such as foreign central banks, asset managers, investment banks, life insurance companies, pension funds, hedge funds, principal trading firms, and trading platforms. According to Treasury, staff use this bilateral outreach to discuss new products or distribution channels; assess investor needs; determine the drivers of market demand; and guide market perception about Treasury policy. Treasury officials said they select individuals for bilateral outreach using a combination of qualitative and quantitative information, such as data on specific investors’ participation in the Treasury market. According to Treasury, the bilateral market outreach helps mitigate an over-reliance on a subset of market participants that might not represent the full spectrum of views of Treasury market investors. However, we found that Treasury does not have an official policy to ensure that its bilateral market outreach is conducted or documented in a systematic manner. This is consistent with our reporting from 2010. In May 2010, Treasury officials told us that one of Treasury’s priorities was to improve investor outreach and collect information more systematically. Treasury acquired a customer relationship management tool, but Treasury officials said they only use it to store contact information. Treasury also drafted a policy document in November 2017 for Office of Debt Management staff that specifies the nature, restrictions on, and expectations for bilateral discussions with market contacts, but the policy is not final. While Treasury’s 2017 draft policy includes some guidance on documenting the bilateral outreach, Treasury officials told us they did not systematically produce formal documentation of these meetings. 
Treasury officials said that one reason Treasury did not have formal documentation of market outreach is because the staff who conduct the outreach also make the policy recommendations. Treasury officials also said direct outreach can sometimes cover market-sensitive information and that confidentiality is important to ensure candid exchange of information. However, the discreet nature of the outreach does not preclude Treasury staff from taking steps to document summary level information that would meet their needs and still maintain confidentiality. For example, Treasury officials and staff are experienced at managing market sensitive information for TBAC and primary dealers and communicating appropriate information to the public. While the level and nature of documentation can vary based on the materiality to decision-making, documentation is a necessary part of an effective internal control system. Documentation provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. In 2017, Treasury conducted market outreach—through the primary dealers, TBAC, and bilateral discussions with market participants—about demand for a potential Treasury ultra-long bond (50- or 100-year bonds). At that time, Treasury decided not to proceed with introducing ultra-long bonds in part because its analysis indicated that the bond would be too costly to issue relative to other Treasury securities, such as the 30-year bond. In August 2019, Treasury announced that it was conducting broad market outreach to update its understanding of market demand for an ultra-long bond. Federal standards for internal control direct agencies to design and implement control activities—policies, procedures, and mechanisms—to achieve program objectives and respond to risks. 
A policy governing the selection of individuals for bilateral outreach could help Treasury ensure it is systematically obtaining market views from investors across various sectors. A policy for documenting bilateral outreach would also ensure that the information that Treasury staff obtains is available to help inform future deliberations. Treasury officials said that they are considering updating and finalizing the 2017 draft outreach guidance based on our review.

Treasury Uses Auction and Market Metrics to Analyze Issuance Decisions and Is Working to Develop Improved Data on the Secondary Market

In addition to market outreach, Treasury calculates and monitors metrics that summarize important aspects of the debt portfolio, Treasury auctions, and the secondary market. Treasury officials stated they monitor these metrics to understand changing market dynamics and highlighted some of the key metrics they use to inform decisions (see table 2). According to Treasury officials, the percent of debt maturing in a given period is among the better indicators of rollover risk, which has two components: (1) interest rate risk, the risk that maturing debt must be refinanced at higher prevailing interest rates, and (2) market access risk, the operational risks inherent in coming back to the market to refinance the debt. As of September 2019, more than half of the $16.3 trillion in marketable debt held by the public will mature in the next 3 years; about 27 percent will mature in the next 12 months (see fig. 10). A significant share of that maturing debt will need to be refinanced at prevailing interest rates.

Treasury publishes a number of key auction metrics that provide insight into auction demand for Treasury securities as well as which sectors purchase securities at auction (see table 3). Treasury also analyzes more granular data on bidders that are not publicly available. According to Treasury officials, one indicator of demand for Treasury securities at auction is the bid-to-cover ratio, the total amount of bids received at an auction divided by the amount of securities offered. When the ratio is greater than one, buyers submitted bids for more securities than were offered.
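As a rough illustration of the metric described above, the sketch below computes bid-to-cover ratios for individual auctions and an average across auctions. The auction amounts are hypothetical, and weighting the average by offering size is an illustrative assumption, not Treasury's published method.

```python
# Sketch of the bid-to-cover metric: total bids received at an auction
# divided by the amount of securities offered. All figures below are
# hypothetical, for illustration only.

def bid_to_cover(total_bids, amount_offered):
    """Ratio > 1 means buyers bid for more securities than were offered."""
    return total_bids / amount_offered

def weighted_avg_bid_to_cover(auctions):
    """Average bid-to-cover across auctions, weighted by offering size
    (one plausible weighting; an assumption for this sketch)."""
    total_offered = sum(offered for _, offered in auctions)
    return sum(bid_to_cover(bids, offered) * offered
               for bids, offered in auctions) / total_offered

# (total bids, amount offered), in $ billions -- hypothetical auctions
auctions = [(96.0, 38.0), (80.5, 35.0), (88.0, 40.0)]
for bids, offered in auctions:
    print(f"bid-to-cover: {bid_to_cover(bids, offered):.2f}")
print(f"weighted average: {weighted_avg_bid_to_cover(auctions):.2f}")
```

Note that with offering-size weights, the weighted average collapses to total bids across auctions divided by total amount offered.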
Figure 11 shows weighted average bid-to-cover ratios for the 4-week bill, 2-year note, and 10-year note from 2000 to 2019. Treasury regularly engages with the Federal Reserve, SEC, and the U.S. Commodity Futures Trading Commission regarding secondary market activity, including significant price movements and their causes, trends in market structure (such as changes in venues, participants, and trade protocols), liquidity conditions, and market functioning. Treasury officials reported that they routinely review data relevant to secondary market activity (see table 4). Figure 12 shows the average daily trading volumes between primary dealers for Treasury bills; this is a measure of liquidity of the market. In the past, Treasury has had limited data on transactions in the secondary market. As a result, it has had limited real-time information on secondary market trading activity, which, as discussed earlier, has changed significantly in recent years, and has experienced abrupt changes in liquidity conditions, such as the October 2014 “flash rally” event. In July 2017, Treasury and other agencies gained access to more granular data on secondary market transactions as reported to the Financial Industry Regulatory Authority, Inc. (FINRA) by its broker-dealer members through the Trade Reporting and Compliance Engine (TRACE). Currently, the TRACE data are available to Treasury, the SEC, the Federal Reserve, and other official entities. According to Treasury officials, analyzing the raw TRACE data can provide insight into pricing in the market, patterns of trading activity, and the timing of trades. Treasury officials stated no other data source offered such detailed and reasonably comprehensive information on secondary market transactions in Treasury securities. However, there are limitations to the TRACE data, and Treasury is continuing to work with FINRA and the SEC to improve the quality of the data. 
Treasury has made policy recommendations supportive of expanding the scope of TRACE data reporting. Treasury reported that in April 2019, FINRA made enhancements to the Treasury transaction data that are reported through TRACE. For example, FINRA now requires more detailed transaction reporting to better understand the firms that are trading with each other. These identifying data will be available only to Treasury and regulators, such as the SEC and the Federal Reserve. According to Treasury, this will provide them with a better understanding of principal trading firm activity in the Treasury secondary market.

Treasury Uses Analytical Models to Illustrate Costs and Risks of Issuance Strategies, but Does Not Have a Quality Assurance Policy

Treasury’s analytical models are another source of information for the department’s financing decisions, but Treasury lacks a policy governing important aspects of these activities. According to Treasury officials, they use a number of analytical approaches, from fully specified models to simple illustrative analyses. Some models are more complex, combining information on the debt portfolio along with assumptions about future financing needs, economic conditions, and interest rates. Other models perform relatively simple calculations based on market data. Treasury officials told us they use these analyses to illustrate trade-offs, test potential financing options, and understand long-term dynamics of the Treasury market. These kinds of analytical tools can play an important role in good debt management decisions. According to Treasury officials, the bulk of modeling is completed by the Office of Debt Management’s Quantitative Strategies Group. Treasury officials told us that the group, which was formed in 2011, has two full-time-equivalent employees. Treasury officials provided examples of some internal analysis and modeling they have used in the last few years.

Portfolio simulation models of the Treasury debt portfolio.
These simulations produce estimates of future costs and risks—among other potential outputs—arising from the debt portfolio and potential issuance strategies. For example, the simulation can produce a cost metric that represents Treasury’s interest cost for a particular issuance strategy. In addition, the simulation can produce a risk metric that represents the amount of debt maturing over various periods (e.g., in 1 year, 3 years, 5 years) given a specific issuance strategy. One use of such a model is to represent an issuance strategy as one cost-risk choice among a range of options associated with alternative issuance strategies (see fig. 13). As assumptions about the economy or financial markets change, or as issue sizes or maturities are adjusted, the cost and risk outcomes change. In August 2018, Treasury officials stated that model output, along with market outreach and analysis of historical auction data, supported Treasury’s decision to increase issuance at all maturities with a focus on the intermediate range of 2, 3, and 5 years.

Stress testing to examine how the debt portfolio might perform in challenging environments. For example, Treasury staff examined projections of future borrowing needs and interest rates and analyzed how a strategy might perform under different interest-rate assumptions.

Calculations to estimate the yields on potential new securities. For example, in 2017, Treasury used several analytical approaches to create a range of potential prices for an ultra-long bond. One approach estimated the additional yield for an ultra-long bond, assuming it would be proportionate to the difference between 30-year and 10-year bond yields.

Analytical models can improve decisions, but they also come with risks, including possible adverse consequences of decisions based on models that are incorrect or misused. These risks can be managed through appropriate documentation and quality assurance.
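One simple reading of the proportionality approach described above can be sketched as follows. The yields are hypothetical placeholders, and extrapolating the per-year 10-to-30-year spread linearly out to 50 years is an illustrative assumption, not Treasury's actual pricing model.

```python
# Sketch of one way to estimate the extra yield of a hypothetical 50-year
# bond from the observed 30-year/10-year spread, as described in the text.
# The yields and the linear-in-maturity reading of "proportionate" are
# assumptions for illustration, not Treasury data or methodology.

def ultra_long_yield(y10, y30, maturity=50):
    # Extra yield earned per year of maturity between the 10- and 30-year points
    spread_per_year = (y30 - y10) / (30 - 10)
    # Extend that per-year spread linearly out to the ultra-long maturity
    return y30 + spread_per_year * (maturity - 30)

# Hypothetical yields, in percent
y10, y30 = 2.10, 2.60
print(f"estimated 50-year yield: {ultra_long_yield(y10, y30):.2f}%")
```

Approaches like this produce a range of estimates as the assumed scaling and input yields vary, which is consistent with the range of potential prices the text describes.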
In our previous work, we identified the elements of economic analyses that are relevant for federal agency decision-making, including transparency and documentation of the analyses for internal stakeholders. Analyses should be transparent by describing and justifying the analytical choices, assumptions, and data used. Transparency allows internal stakeholders to understand the implications of these analytical choices and their associated risks. Sufficient documentation ensures that analytical choices, data, assumptions, limitations, and uncertainties are clear and available to future model developers and users. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Documentation of quantitative analyses and models should be clearly written, with a plain language summary and clearly labeled tables that describe the data used and results, and a conclusion that is consistent with these results. Documentation should also indicate that analyses comply with a robust quality assurance process. The Federal Reserve outlines a quality assurance process intended to verify that models are performing in line with their design objectives and business uses and also identifies potential limitations and assesses their possible impact. The degree of quality assurance required should be commensurate with the level of complexity, risk, and materiality to decision-making. Federal standards for internal control also direct agencies to design and implement control activities—such as documentation and quality assurance—through policies to achieve program objectives and respond to risks. Treasury provided information on its analytical models, which included some key elements relevant to the documentation and transparency of Treasury’s analyses, including:

Internal Treasury presentations that described the purpose, rationale, and certain analytical choices and results for a portfolio simulation model.
Internal presentations detailing results and some analytical choices related to pricing estimates for an ultra-long bond.

A code repository that can facilitate replication of some models and examples of code used to operate models.

While Treasury’s documentation of its analytical models contained useful information for internal stakeholders, the documentation did not fully characterize the analytical choices, data, assumptions, limitations, and uncertainties associated with the analyses. For example:

Treasury’s internal presentations on its portfolio simulation models did not fully justify analytical choices or describe the limitations of the models.

Treasury’s internal presentations on pricing estimates for an ultra-long bond contain estimates from six different analytical approaches developed by Treasury but detail only a subset of the assumptions needed to arrive at the estimates. For example, there is no description of the precise structure of the approaches or the sources of uncertainty that would lead to the range of estimates that Treasury presents for each approach.

Treasury officials did not have documentation indicating that analytical models had been subject to quality assurance or that quality assurance activities had been commensurate with the level of complexity, risk, and materiality to decision-making.

These issues arise in part because Treasury does not have a policy governing important aspects of the Office of Debt Management’s analytical modeling activities, including requiring that analyses are documented and that Treasury staff follow and document appropriate quality assurance steps. Treasury officials told us that they take steps to ensure that analytical work is appropriately reviewed. They stated that the review process is based on the nature of the work, and according to Treasury officials, quality assurance generally entails cross checks among staff and review by office leadership.
One model was also shared with external contacts for feedback. Treasury officials emphasized that models are only one input of many into Treasury’s decision-making and explained that their practices are sufficient for the more straightforward analyses that typically inform decisions. However, the analyses that Treasury relies on—both relatively straightforward and more complex—to inform important decisions should be documented and subject to quality assurance to ensure that decision makers receive quality information based on appropriate analytical approaches. Treasury relies on a range of analytical methods, all of which require some degree of technical expertise to develop, implement, and evaluate, despite varying degrees of complexity. A policy requiring appropriate documentation and quality assurance would help Treasury ensure that analytical methods, data, assumptions, limitations, and uncertainties are transparent, appropriate, and available to future model developers and users.

Conclusions

U.S. Treasury securities play a vital role in U.S. and global financial markets because of their deep and liquid market and because investors are confident that debt backed by the full faith and credit of the U.S. government will be honored. This combination of characteristics has helped support reliable demand for Treasury securities through ever-changing market conditions, which, in turn, has helped minimize Treasury’s borrowing costs. Changing investment needs across different sectors and fluctuations in demand for Treasury securities are a normal part of economic cycles. Treasury and Congress need to be alert to risks that could compromise these key characteristics to preserve Treasury securities’ unique advantages. These risks include changing dynamics of the secondary market, including new participants using high-frequency trading strategies that could reduce liquidity, particularly in times of market stress.
Treasury’s recent efforts to coordinate with the SEC and FINRA to obtain detailed information on the secondary Treasury market are an important step. In addition, as we have previously reported, Congress needs to consider taking action to address the unsustainable long-term fiscal path as well as alternative approaches to managing the debt limit that would ensure the continued safety of U.S. Treasury securities. Treasury has a critical role to play through its management of the federal debt portfolio to support its goal to borrow at the lowest cost over time. Treasury must promote strong demand for its securities from a diverse group of investors while making debt issuance decisions that appropriately balance risks and interest costs. Therefore, it is important that Treasury make these decisions based on the best information possible. Consistent with good debt management practices, Treasury uses a range of qualitative and quantitative inputs to inform its decision-making. It does not, however, have policies governing important aspects of two of these inputs: bilateral market outreach and analytical modeling. Until Treasury has designed and implemented policies around these key activities, it cannot be certain that needed information for debt issuance decisions is available, complete, and appropriately reviewed. Moreover, without appropriate documentation of important market outreach or analytical models, Treasury risks losing critical organizational information as staff leave the agency. Given the size and importance of the Treasury market, ensuring the quality of information available to decision-makers is essential to Treasury’s efforts to reduce risk and cost to taxpayers.

Recommendations for Executive Action

We are making the following two recommendations to Treasury.
The Secretary of the Treasury should finalize the Office of Debt Management’s policy for conducting bilateral market outreach and ensure it includes guidance on selecting market participants and documenting and sharing relevant information throughout the office while safeguarding the confidentiality of discussions. (Recommendation 1)

The Secretary of the Treasury should establish a policy for the documentation and quality assurance of the Office of Debt Management’s analytical models. At a minimum, this policy should require (1) appropriate and sufficient documentation of analytical models, and (2) documented quality assurance of analytical models commensurate with the level of complexity, risk, and materiality to decision-making. (Recommendation 2)

Agency Comments

We provided a draft of this report to Treasury and the Federal Reserve for review and comment. In its comments, reproduced in appendix III, Treasury agreed with our recommendations and said it would work to implement them over the coming months. Treasury and the Federal Reserve also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Federal Reserve, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. For questions about this report, please contact Tranchau (Kris) T. Nguyen at (202) 512-6806 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Survey Population and Sample Design

To address both of our objectives, we surveyed market participants regarding (1) factors that affect demand for Treasury securities, (2) experiences interacting with the Department of the Treasury (Treasury), and (3) evolution of the Treasury market. In March 2019, we administered an online survey to 109 institutions. We selected the 10 largest institutions by total assets (or other equivalent financial indicator) in nine sectors that hold Treasury securities and the 15 largest mutual funds and exchange-traded funds by total assets under management (see table 5). We also sent the survey to four market participants we interviewed in September that did not meet our top 10 criterion for their sectors. The survey results are not generalizable to all investors in Treasury securities. To define the sectors for our sample, we reviewed data from the Federal Reserve’s Financial Accounts of the United States (tables L.100 to L.133, first quarter 2018) to identify sectors holding Treasury securities. We excluded some sectors due to challenges in contacting certain entities, such as foreign monetary authorities, other foreign investors, and the household sector. According to the Federal Reserve, the household sector is a residual category and includes individuals holding Treasury securities, hedge funds, and other institutions not required to report to regulatory bodies. We excluded this sector due to the difficulty of identifying, ranking, and contacting individual household investors and other entities. We excluded Government Sponsored Enterprises because these entities are unlikely to provide additional insights into the Treasury market beyond our sample, which includes commercial banks. We excluded federal government retirement funds because the Thrift Savings Plan does not invest in marketable Treasury securities.
To identify the organizations within each sector that would receive our web-based survey, we used rankings of the largest organizations in each sector based on total assets or an equivalent financial indicator, such as assets under management or direct premiums written, and selected the 10 largest in each sector. In the case of mutual funds and exchange-traded funds, we used information from the Investment Company Institute on total assets under management in Treasury- and government-focused funds to identify the largest 15 in that sector. For the broker-dealer sector, we selected the 10 largest primary dealers.

Appendix II: Selected Results from Survey of Market Participants

As part of our survey of market participants, we asked respondents to identify products or debt management practices that, if the Department of the Treasury (Treasury) introduced them, would increase the respondent’s overall demand for Treasury securities. Results from our related survey questions are presented below.

Survey Question: If Treasury were to make the following changes to its offerings, would your overall demand for Treasury securities increase? (see fig. 14).

Survey Question: If Treasury were to change its debt management practices in the following ways, would your overall demand for Treasury securities increase? (see fig. 15).

Appendix III: Comments from the Department of the Treasury

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Thomas J. McCabe (Assistant Director), Margaret M. Adams (Analyst-in-Charge), Abigail Brown, Michael Hoffman, Loren Lipsey, Daniel Mahoney, Anna Beth Smith, Andrew J. Stephens, Farrah Stone, and Wade Tanner made significant contributions to this report. Robert Gebhart, Jerome Sandau, Peter Verchinski, and Alicia White also contributed to this report.
Why GAO Did This Study

The Congressional Budget Office projects that federal deficits will reach $1 trillion in 2020 and average $1.2 trillion per year through 2029, further adding to the more than $16 trillion in current debt held by the public. As a result, Treasury will need to issue a substantial amount of debt to finance government operations and refinance maturing debt. To support its goal to borrow at the lowest cost over time, Treasury must maintain strong demand from a diverse group of investors for Treasury securities. GAO prepared this report as part of continuing efforts to assist Congress in identifying and addressing debt management challenges. This report (1) identifies factors that affect demand for Treasury securities and (2) examines how Treasury monitors and analyzes information about the Treasury market to inform its debt issuance strategy. GAO analyzed data on investor holdings of Treasury securities; surveyed a non-generalizable sample of 109 large domestic institutional investors across 10 sectors (67 responded); reviewed Treasury analysis and market research; and interviewed market participants across sectors, experts on foreign investors, and Treasury officials.

What GAO Found

The large institutional investors GAO surveyed across multiple sectors identified liquidity, depth, and safety as the most important characteristics of Treasury securities. This combination supports reliable demand from different types of investors through changing market conditions. Many investors accept low yields because of these characteristics, keeping the Department of the Treasury's (Treasury) borrowing costs low. Market participants GAO interviewed and surveyed identified risks that could degrade these key characteristics and reduce future demand:

Debt limit impasses could force Treasury to delay payments on maturing securities and interest until sufficient funds are available, compromising the safety of Treasury securities.
Unsustainable levels of federal debt could cause investors to demand a risk premium and seek out alternatives to Treasury securities.

A reduced role for the U.S. dollar as the dominant reserve currency could diminish the advantages of holding Treasury securities for foreign investors, particularly foreign government investors who hold large amounts of dollar-denominated assets to assist in managing their exchange rates.

Changes in the Treasury secondary market where securities are traded—including high-frequency trading and a reduced role for broker-dealers who buy and sell for customers—could increase volatility and reduce liquidity.

Treasury regularly makes important issuance decisions—such as what types of securities to issue and in what quantities—to maintain broad-based demand and support its goal of borrowing at the lowest cost over time. Treasury officials said three key inputs support these decisions: market outreach; auction and market metrics (e.g., trading volumes); and analytical models. However, Treasury has not finalized its policy for systematically conducting bilateral market outreach to ensure a thorough understanding of market demand. Treasury also does not have a policy governing important aspects of its analytical modeling, including following and documenting quality assurance steps to ensure that analytical methods are appropriate and available to future model developers and users. Codifying policies governing key information sources would help ensure that Treasury's decisions are based on the best possible information.

What GAO Recommends

GAO recommends that Treasury (1) finalize its policy for conducting bilateral market outreach and (2) establish a policy for the documentation and quality assurance of analytical models. Treasury agreed with these recommendations.
Background

HUD created REAC in 1997 to obtain consistent information on, among other things, the physical condition of its public and multifamily properties. REAC generally inspects properties every 1 to 3 years, using a risk-based schedule (discussed in detail below). REAC developed a standardized protocol to inspect properties, referred to as the Uniform Physical Condition Standards. As part of the protocol, REAC also inspects properties to identify health and safety deficiencies, including exigent health and safety deficiencies, which are life-threatening and require immediate action or remedy (such as exposed electrical wires or blocked access to windows or doors in case of a fire). REAC’s data system automatically generates an overall inspection score for the property from 0 to 100 based on the information an inspector records. At the end of each day of an inspection, an inspector is required to inform a property manager or other representative if the inspection identified exigent health and safety issues. Before releasing the inspection score, REAC reviews the inspection through a quality assurance process to ensure it is accurate. Following verification of the inspection score, REAC releases an inspection report to the property owner or PHA and the relevant HUD program office. The inspection report contains the overall inspection score, as well as more detailed information on physical deficiencies identified during the inspection. REAC primarily uses contractors—who are trained and certified in REAC’s Uniform Physical Condition Standards protocol—to conduct inspections of multifamily and public housing properties. In addition to these contract inspectors, REAC uses quality assurance inspectors, who are HUD employees, to oversee and monitor contract inspectors, as well as to ensure that REAC provides accurate and reliable inspections.
Both contract and quality assurance inspectors complete several phases of training on the inspection protocol, including online, classroom, and field-based training. To procure inspections of HUD-assisted properties, REAC primarily uses an auction process to award contracts either to eligible contract inspectors or to companies that employ contract inspectors. This process, called a reverse auction program, occurs at least once a quarter. Contract inspectors or companies bid to inspect properties across the United States and its territories in a web-based auction. At the close of the auction, REAC awards the inspection to whoever bids the lowest price and is eligible to conduct inspections. The contract inspector then schedules and performs the property inspections in accordance with Uniform Physical Condition Standards protocol. According to REAC officials, this process is designed to increase cost savings and small business participation.

REAC Roles and Responsibilities

REAC is situated within PIH. Several departments within REAC are involved in facilitating the physical inspection process:

Physical Assessment Subsystem (PASS): PASS has three primary divisions that are responsible for different aspects of the inspection process. The PASS Physical Inspection Operations division coordinates the procurement of inspections. The PASS Quality Assurance division evaluates and monitors REAC’s inspection program to ensure reliable, replicable, and reasonable inspections; trains contract and quality assurance inspectors; and provides technical assistance to HUD-assisted properties and other relevant stakeholders. The PASS Inspector Administration division monitors the performance of inspectors and takes administrative actions, such as decertifying inspectors who do not meet REAC’s standards for inspectors.
Research and Development: REAC’s Research and Development division produces data analysis and statistical reports on REAC’s information products (e.g., physical inspection reports and Public Housing Assessment System scores) and assesses these products to ensure they are accurate and valid.

REAC is also responsible for evaluating additional conditions, beyond physical conditions, of multifamily and public housing properties. Specifically, REAC evaluates the financial conditions of multifamily properties and assesses the financial and management performance of public housing properties. This performance assessment is conducted through the Public Housing Assessment System. REAC uses several data systems to collect, score, and report on the financial and management conditions of public housing properties, along with evaluating the utilization of property modernization and development funds (capital funds). We describe this process in more detail later in the report.

HUD Offices Involved in Monitoring and Enforcement of Physical Condition Standards

HUD’s PIH, Multifamily Housing, and Departmental Enforcement Center are responsible for ensuring that the owners of REAC-inspected properties (including PHAs) correct the identified physical deficiencies.

PIH: This office helps low-income families by providing rental assistance through three programs; our review focuses on physical inspections of the public housing program. In 2018, HUD’s public housing program provided low-rent housing units to over 1 million eligible households. Public housing consists of reduced-rent developments owned and operated by local PHAs and subsidized by the federal government. About 3,300 PHAs own and manage public housing properties. These properties can include high-rise and low-rise buildings and scattered single-family properties, or they can be part of mixed-income housing developments, and they can range in size from fewer than 100 units to more than 30,000 units.
PHAs typically have an executive director to manage their operations, as well as a governing board—called a Board of Commissioners—to approve policy, clarify goals, and ensure compliance with federal regulations. PHAs have contracts, called Annual Contributions Contracts, with the federal government. Under the terms of their contracts, PHAs agree to administer their properties according to federal regulations, in exchange for federal funding in the form of operating and capital grants. PIH is organized into six geographic networks, each with several field offices.

Multifamily Housing: This office manages HUD’s portfolio of multifamily properties and provides rental assistance through several programs, including Section 8 project-based rental assistance, in which HUD contracts with private property owners to rent housing units to eligible low-income tenants for an income-based rent. Multifamily Housing also oversees the Federal Housing Administration’s multifamily mortgage insurance on loan originations and administers supportive housing for the elderly and programs for persons with disabilities. Collectively, the properties that Multifamily Housing oversees provided affordable rental housing to more than 1.2 million low-income households in 2017. Property owners or management agents of multifamily properties sign business agreements with HUD. Under these agreements, owners or agents agree to administer their properties according to federal rules and regulations, and in exchange, among other benefits, they receive federal assistance through mortgage insurance or housing assistance payments. Multifamily Housing has 12 field offices across five geographic regions.

Departmental Enforcement Center: The Departmental Enforcement Center is located within HUD’s Office of General Counsel and works with several of HUD’s program offices, including PIH and Multifamily Housing, to ensure that program funds are used according to federal regulations.
These program offices make referrals for the Departmental Enforcement Center to review the financial and other conditions of properties receiving rental assistance from HUD. Based on these reviews, the Departmental Enforcement Center can take various enforcement actions, such as imposing administrative sanctions to bar individuals from participating in HUD programs or civil money penalties for violations.

Inspection Frequency

REAC conducts inspections on multifamily and public housing properties using a risk-based schedule defined in federal regulations. According to our analysis of REAC inspection data, REAC conducted 44,486 inspections of multifamily properties and 15,156 inspections of public housing developments from fiscal years 2013 through 2017. For multifamily properties, REAC inspects properties every 1 to 3 years. Generally, properties that receive an inspection score below 80 are inspected within 1 year of the previous inspection; those scoring 80 to 89, within 2 years; and those scoring 90 to 100, every 3 years. The inspection frequency for public housing developments varies depending on the overall size of the PHA (that is, the number of units and properties that they manage), an individual housing development’s inspection score, and the PHA’s overall performance on the Public Housing Assessment System. For PHAs with 250 housing units or more, REAC inspects developments every 1 to 3 years, using the same risk-based thresholds as Multifamily Housing. For small PHAs with fewer than 250 units, their score on the Public Housing Assessment System determines the inspection frequency, with higher scores associated with less frequent inspections. However, all developments—regardless of the number of units—that receive an overall performance assessment score (as part of the Public Housing Assessment System) of less than 60 out of 100 are designated to have a physical inspection every year.
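The multifamily frequency rules above amount to a simple threshold mapping. A minimal sketch follows; this is illustrative only, not HUD's software, and the function name is ours:

```python
# Illustrative sketch only (not REAC's actual system): map a multifamily
# inspection score (0-100) to the inspection interval described in
# federal regulations.

def multifamily_inspection_interval(score: float) -> int:
    """Return the number of years until the next required inspection."""
    if not 0 <= score <= 100:
        raise ValueError("REAC scores range from 0 to 100")
    if score >= 90:
        return 3  # inspected once every 3 years
    if score >= 80:
        return 2  # once every 2 years
    return 1      # scores below 80 trigger an annual inspection

print(multifamily_inspection_interval(85))  # prints 2
```

The same thresholds apply to PHAs with 250 or more units; smaller PHAs are scheduled by their Public Housing Assessment System score instead.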
REAC’s Inspection Process Has Some Weaknesses That May Hinder Its Ability to Identify Physical Deficiencies

REAC Has a Standardized Inspection Process for Identifying Physical Deficiencies at HUD-Assisted Properties

REAC’s Uniform Physical Condition Standards inspection protocol is designed to help provide assurance that physical deficiencies will be identified at HUD-assisted properties. Under the protocol, contract inspectors inspect five areas of a property using a handheld data collection device to help identify and record deficiencies (see fig. 1). The devices have embedded software that provides step-by-step instructions on conducting the inspection. The software helps to ensure consistency between inspectors and consistency with the protocol, according to REAC staff. The software includes a decision-tree model to guide the inspectors on recording and classifying the severity of deficiencies they identify. For example, if an inspector identifies a deficiency with a door in a dwelling unit, the software will ask the inspector to identify which door has the deficiency and the nature of the deficiency (e.g., door lock does not work). The software then assigns a severity level to the deficiency and, if it is severe enough, requires the inspector to take a photo (see fig. 2).

REAC has a number of quality assurance processes intended to ensure that contract inspectors identify deficiencies and conduct quality inspections:

Collaborative quality assurance (CQA) review. In CQA reviews, REAC quality assurance inspectors observe contract inspectors to help ensure their inspections are accurate and consistent with protocol. REAC uses CQA reviews to coach contract inspectors to help improve their performance.

Post-inspection review process.
Completed inspections receive two levels of review by REAC quality assurance staff, who use software that compares certain aspects of the current and previous inspections—such as inspection scores, property profiles (for example, number of units), site measurements, and time taken to complete the inspection—and highlights large variances.

Quality control inspection (QCI). If REAC reviewers find large variances in current and previous inspection scores and other aspects, they may reject the inspection and schedule a QCI. The QCI is a review of a previously inspected property to evaluate an inspector’s performance and identify potential weaknesses in the quality of the inspection. This review process requires a REAC quality assurance inspector to conduct a second inspection of the same property, including selecting the same sample of buildings and units of the original inspection. Once the QCI is completed, REAC’s reviewers, in collaboration with REAC’s Research and Development division, identify any deficiencies missed and determine whether the contract inspector was complying with REAC’s physical inspection standards.

Property owners may appeal deficiencies REAC has identified during the physical inspection. For example, an owner might appeal a deficiency resulting from a window air conditioner blocking egress by providing evidence that this is permitted by local building code. If the appeal is successful, REAC removes the deficiency and the inspection software updates the score.

Contract inspectors, REAC quality assurance inspectors, and representatives of property owner associations with whom we spoke had mixed views on REAC’s inspection process. Participants in three of the five discussion groups we held with contract and quality assurance inspectors said that the inspection process provides a comprehensive review of a property and that the inspection software helps promote consistency in inspections.
Likewise, representatives from one property owner association we met with said that the inspection process was more standardized and less subjective than in the past. Representatives from another association said that the inspection process effectively identified deficiencies. However, participants in three of the same five discussion groups with contract and quality assurance inspectors noted inconsistent application of protocols and standards, noting that some cases were unclear and required judgment in identifying deficiencies. REAC’s inspection process has features similar to those of home inspection organizations such as the American Society of Home Inspectors (ASHI) and the International Association of Certified Home Inspectors (InterNACHI). For example, ASHI and InterNACHI have developed standards of practice that guide their inspectors on conducting inspections, similar to the role of REAC’s Uniform Physical Condition Standards inspection protocol. In addition, ASHI and InterNACHI require their inspectors to inspect the same five areas of a property that REAC does. Finally, ASHI and InterNACHI have codes of conduct that specify what constitutes ethical conduct for their inspectors; similarly, REAC has developed business rules that define ethical conduct for contract inspectors. REAC has made two major changes to the inspection process over the past 6 years. First, in 2012, REAC updated its inspection software to include the decision-tree model previously discussed and established a point-loss cap to limit the amount by which a single deficiency in an inspectable area could reduce the overall property score. For example, if an inspector found numerous tripping hazards within the same inspectable area, the inspector would record all instances of this hazard, but the software would only deduct from the inspection score once rather than multiple times, according to REAC staff. 
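The point-loss cap can be sketched as follows. This is a hypothetical illustration: the deduction weights are invented for the example and are not REAC's actual scoring values.

```python
# Hypothetical illustration of the 2012 point-loss cap: every instance
# of a deficiency is recorded, but a given deficiency type deducts from
# the score only once per inspectable area. Deduction values are
# invented; they are not REAC's weights.
from collections import defaultdict

def capped_score(findings, base=100.0):
    """findings: iterable of (inspectable_area, deficiency_type, deduction)."""
    worst = defaultdict(float)
    for area, dtype, deduction in findings:
        # Count each deficiency type once per inspectable area,
        # at its largest recorded deduction.
        worst[(area, dtype)] = max(worst[(area, dtype)], deduction)
    return max(0.0, base - sum(worst.values()))

# Three tripping hazards on the site are all recorded but deduct once:
findings = [
    ("site", "tripping_hazard", 2.5),
    ("site", "tripping_hazard", 2.5),
    ("site", "tripping_hazard", 2.5),
    ("unit", "inoperable_lock", 1.0),
]
print(capped_score(findings))  # 96.5 under these assumed weights
```

All instances still appear in the inspection record; the cap affects only the score calculation.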
Second, in 2017, REAC updated its compilation bulletin to address concerns that property owners were making cheap, non-industry-standard repairs to disguise deficiencies during a REAC physical inspection. REAC now requires its inspectors to determine if deficiencies have been corrected consistent with industry standards. For example, property owners cannot use materials such as asphalt, caulking, spray foam, or screws to cover or fill a crack or opening in an electrical panel because that repair would not be consistent with industry standards (see fig. 3). As shown in table 1, from fiscal years 2013 through 2017, the median inspection scores for multifamily and public housing properties were in the mid- to high-80s, with scores trending downward toward the end of that time frame. (See apps. II and III for additional data on REAC scores.) However, a small percentage of multifamily properties scored below 60, which for multifamily properties is defined as a failure and triggers enforcement actions that Multifamily Housing or the Departmental Enforcement Center can take to require the correction of physical deficiencies. Of 27,486 multifamily properties that were inspected during fiscal years 2013 through 2017, 1,760 properties (6 percent) failed at least one inspection, and 272 properties (1 percent of the total) failed two or more inspections. Staff in Multifamily Housing field offices said multiple failed inspections are a sign of serious owner noncompliance, such as an owner who plans to sell and thus lacks motivation to make needed repairs. Multifamily Housing staff said that they take enforcement action in these cases. A higher percentage of public housing properties scored below 60 during this same period. Of the 7,699 public housing properties that were inspected during this period, 887 (11 percent) scored below 60 for at least one inspection, and 291 (4 percent) scored below 60 for two or more inspections. 
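Tallies like those above can be reproduced from per-inspection records with a simple count. The sketch below uses an assumed record layout, not REAC's actual data schema, and invented data:

```python
# Sketch of the failing-score tallies described above: given one record
# per inspection, count properties that scored below the failure
# threshold of 60 at least once, and two or more times.
from collections import Counter

FAIL_THRESHOLD = 60

def failure_counts(inspections):
    """inspections: iterable of (property_id, score) tuples."""
    fails = Counter(pid for pid, score in inspections if score < FAIL_THRESHOLD)
    at_least_once = len(fails)
    two_or_more = sum(1 for n in fails.values() if n >= 2)
    return at_least_once, two_or_more

sample = [("A", 55), ("A", 58), ("B", 72), ("C", 59), ("C", 65)]
print(failure_counts(sample))  # (2, 1): two properties failed at least once, one failed twice
```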
REAC Has Not Conducted a Comprehensive Review of Its Inspection Process since 2001

REAC has not conducted a comprehensive review of its inspection process since 2001, even though new risks to its process have emerged since then. A concern of REAC staff is that some property owners have taken advantage of the scoring system and others have misrepresented the conditions of their properties. Specifically, because more points are deducted for deficiencies on the property site than for deficiencies in a dwelling unit, some property owners prioritize site repairs over unit repairs. Additionally, some property owners attempt to cover up, rather than address, deficiencies—such as by using mulch on a building exterior to hide erosion. REAC staff have also raised concerns about property owners employing current or former REAC contract inspectors to help prepare for an inspection, sometimes by guiding owners to repair just enough to pass inspection rather than comprehensively addressing deficiencies. REAC also continues to find that some contract inspectors are conducting inspections that do not meet REAC’s quality standards (discussed later in the report). Property owner associations we met with also raised concerns about the fairness of the inspection process. Specifically, representatives of two property owner associations said that REAC’s inspection process penalizes properties for items that do not affect the livability of a unit (e.g., property receives severe deficiency for chips on exterior bricks even though the dwelling units are in good condition). Representatives from one property owner association said that some properties’ scores have fluctuated even though the condition of the property has not changed. HUD’s Office of Inspector General (OIG) also identified some weaknesses in the inspection process. Specifically, the OIG found that REAC did not verify the accuracy of sampled units for public housing agencies.
Further, REAC fundamentally changed the entities that conduct inspections. In 1998, REAC employed a few large inspection companies to conduct the inspections. However, in 2005, REAC introduced the reverse auction program and opened up the inspection process to a larger number of small businesses, which resulted in a change in the composition of inspectors conducting the inspections. One of the subgoals of REAC’s strategic plan for 2011–2015 was for REAC to produce inspections of HUD-assisted properties that are reliable, replicable, and reasonable. To meet this subgoal, the plan states that REAC should assess its inspection process and apply lessons learned over the last 10 years in order to improve the process. The plan also states that REAC should conduct independent, internal audits and reviews of the inspection process to identify strengths and weaknesses and develop recommendations for improvement. Further, federal internal control standards state that management should implement control activities through policies, such as by periodically reviewing policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. REAC officials stated that they understand the importance of conducting a comprehensive review of the inspection process similar to what they did in 2001 but that they have focused their staff and resources on other priorities—for example, upgrading their technology and quality assurance processes, hiring and training quality assurance inspectors, and conducting targeted assessments of their inspection process in reaction to specific events or risks. For example, REAC staff worked on an intra-agency team to develop recommendations to address weaknesses in the inspection process that were identified as part of the assessment of Eureka Gardens. (We describe this effort later in the report.)
REAC staff also said that they updated the compilation bulletin in reaction to property owners who were making cheap, non-industry-standard repairs to disguise deficiencies during a REAC physical inspection. In addition, REAC staff noted that they meet biweekly to address certain parts of the inspection process, such as the appeals and quality assurance processes. However, these efforts help identify weaknesses in the inspection process related to specific risks and were not comprehensive enough to identify and address broader risks. For example, REAC has not assessed how changes to one part of its inspection process (for example, changing how many points are deducted for a particular inspectable area) can affect other parts of the process or result in unintended consequences. Without a comprehensive review to assess its inspection process, REAC cannot determine if it is meeting the goal of producing inspections that are reliable, replicable, and reasonable. REAC May Not Be Identifying All Properties in Need of More Frequent Inspections or Enforcement Actions REAC may not be identifying all properties that need more frequent inspections or enforcement actions because it does not consider sampling errors of the inspection scores. REAC’s inspection process does not require the inspection of all units and buildings within large properties due to REAC’s limited inspection resources. For these properties, the inspection process provides for inspecting statistical samples of units and buildings. The results for the sample are then used to estimate a score that represents the condition of the entire property. Sampling introduces a degree of uncertainty, called sampling error, which statisticians commonly express as a range associated with numerical results. For example, for a property that scored 62 on its physical inspection, REAC would consider this a passing score that requires an annual inspection and no enforcement action. 
However, due to sampling error, the range associated with this score could be between 56 on the lower bound and 68 on the upper bound. HUD takes enforcement action for multifamily properties with a score below 60. Federal internal control standards state that management should use quality information to achieve the entity’s objectives. In particular, internal control standards note the importance of using the entity’s objectives and related risks to identify the information requirements needed to achieve the objectives and address the risks. REAC’s property inspection scores are currently presented as numerical results without any information on the range associated with the score. REAC’s prior version of its scoring software automatically calculated the sampling errors in the inspection scores, and this information was available for inspection scores from fiscal years 2002 through 2013. However, according to REAC staff, the current version of its scoring software does not automatically calculate the sampling errors, in part because of a lack of resources and also because they believe there is no need to calculate them. Yet, in a review we conducted of REAC in 2000, officials told us that they planned to adjust the score downward and take appropriate actions for inspection scores with a lower bound that fell under an administrative cutoff, such as 60 points. During our current review, REAC staff told us that they did not implement this plan because they would need to coordinate with other HUD offices, such as the Office of Housing, and issue a notice in the Federal Register for public comment. Based on our analysis of REAC inspection data, HUD potentially could have taken enforcement actions against more properties if REAC had taken sampling errors in inspection scores into account. 
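The lower-bound check described above can be sketched with a standard normal-approximation confidence interval. This is illustrative only: the sample standard deviation and sample size below are invented to reproduce the 56-68 range in the example, and REAC's actual calculation may differ.

```python
# Illustrative only: a normal-approximation margin of error around a
# score estimated from a sample of units, and a check of whether the
# interval's lower bound falls under the enforcement cutoff of 60.
import math

def score_interval(point_score, sample_sd, n_sampled, z=1.96):
    """Approximate 95% interval for a score estimated from n_sampled units."""
    margin = z * sample_sd / math.sqrt(n_sampled)
    return point_score - margin, point_score + margin

def lower_bound_below_cutoff(point_score, sample_sd, n_sampled, cutoff=60):
    lower, _ = score_interval(point_score, sample_sd, n_sampled)
    return lower < cutoff

# A property scoring 62 whose sampling variability spans roughly 56-68:
lower, upper = score_interval(62, sample_sd=15.3, n_sampled=25)
print(round(lower), round(upper))              # 56 68
print(lower_bound_below_cutoff(62, 15.3, 25))  # True
```

The same lower-bound test could be applied to the 80 and 90 thresholds that set inspection frequency.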
For example, from fiscal years 2002 through 2013, about 4.3 percent of inspections of multifamily and public housing properties had an inspection score of 60 or slightly above 60 but had a lower bound score under 60. In addition, some multifamily and public housing properties might have been inspected more frequently if the sampling errors were taken into account. For example, federal regulations require inspections of multifamily properties scoring 90 or greater once every 3 years; scoring 80 to 89 once every 2 years; and scoring less than 80 every year. Taking sampling errors into account, about 7.1 percent of multifamily properties inspected from fiscal years 2002 through 2013 might have been inspected 1 year after the most recent inspection rather than 2 years. Likewise, about 7.2 percent of inspections of multifamily properties might have occurred 2 years after the most recent inspection, rather than 3 years. Without reporting on sampling errors and considering the results, REAC will not identify some properties that could require more frequent inspections or enforcement actions.

REAC Lacks Comprehensive or Organized Documentation of Sampling Methodology

REAC lacks comprehensive or organized documentation of the sampling methodology it uses to make generalizable estimates about the condition of properties with its scoring system. REAC’s documentation supporting its sampling methodology is contained in five documents, none of which provides a comprehensive description of the methodology with all changes incorporated. The main document that describes the sampling methodology is a paper presented to the American Statistical Association in 2002. This document provides a very short summary of the sampling methodology, but some key assumptions, calculations, and details are not included. For example, this document does not show how REAC derived one of the variables used to calculate the number of units to sample.
When we asked REAC staff to provide us with documentation on how they derived this variable, they could only provide us with an email from 2005 from a former REAC statistician that discussed some of the statistical considerations that went into the derivation of the sample-size formula. The other four documents related to the sampling methodology are dated prior to 2002 and include the initial methodology developed and subsequent changes, but these also do not provide complete information on why key assumptions were used, or the documents lack certain formulas. Further, REAC has not updated any of its documents related to the sampling methodology since 2002 to reflect current practices. Federal internal control standards state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. In particular, the standards note the importance of developing and maintaining documentation of the internal control system. This documentation provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties, such as external auditors. Further, this documentation of controls, including changes to controls, is evidence that controls are identified, capable of being communicated to those responsible for their performance, and capable of being monitored and evaluated by the entity. However, REAC does not have a process to ensure comprehensive and organized documentation of the sampling methodology of its inspection process. Instead, REAC relies on the institutional knowledge of individual staff members. For example, when we requested documentation of its sampling methodology, REAC relied on a statistician who had been with the organization for many years to locate and provide us with the documents we requested. 
In addition, we had to interview this individual to better understand the methodology because key pieces of information were missing from these documents. REAC staff told us that since the inspection process has remained relatively consistent over time, they have not seen the need to ensure that documentation of the sampling methodology is comprehensive and organized. By interviewing multiple individuals, reviewing multiple documents, and conducting our own calculations, we were able to determine that REAC’s sampling methodology is suitable for making generalizable estimates about the condition of a property with the scoring system. However, the lack of comprehensive and organized documentation could affect REAC’s ability to preserve institutional knowledge and make changes or improvements to its inspection process if key staff leave the agency.

REAC Does Not Always Meet Its Schedule for Inspecting Multifamily Properties or Track Progress toward Meeting Scheduling Requirements

REAC schedules inspections of multifamily properties based on the prior REAC inspection score, but it did not meet its schedule for about 20 percent of inspections from calendar years 2013 through 2017. As discussed earlier, federal regulations require inspections of multifamily properties scoring 90 or greater once every 3 years, those scoring 80 to 89 once every 2 years, and those scoring less than 80 every year. Our analysis of REAC inspection data showed that about 20 percent of the properties were not inspected within 3 months before or after what HUD has identified as the ideal date to conduct the inspection, called an ideal future date. On average, REAC conducted inspections for these properties about 6 months past the ideal future date. REAC staff told us that there may be legitimate reasons for not conducting an inspection according to the ideal future date.
For example, Multifamily Housing can delay an inspection because of natural disasters or major rehabilitations to the property, among other reasons. However, REAC maintains limited data on the reasons why inspections have been rescheduled or cancelled. In addition, these data are not readily available to understand retrospectively why an inspection did not occur on schedule. REAC also does not track its progress toward meeting its requirement for inspecting multifamily properties within prescribed time frames. Federal internal control standards state that management should use quality information to achieve the entity’s objectives. In particular, the standards note the importance of designing a process that uses the entity’s objectives and related risks to identify the information requirements needed to achieve the objectives and address the risks. Further, management should obtain relevant data from reliable internal and external sources in a timely manner and process these data into quality information that supports the internal control system. Multifamily Housing depends on REAC inspections to provide assessments of the physical condition of properties under its jurisdiction (discussed in greater detail later in this report). REAC’s inability to adhere to the inspection schedule for multifamily properties could hinder Multifamily Housing’s ability to monitor the physical condition of properties on a timely basis and take enforcement actions when warranted. The lack of a mechanism to track REAC’s progress toward meeting its requirement for inspecting multifamily properties also hinders REAC’s ability to determine what factors are contributing to delays in conducting the inspections. As a result, REAC lacks the information needed to determine the scope of the problem and what actions it can take to ensure multifamily properties are inspected on a timely basis. 
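The timeliness test described above is straightforward to automate once inspection dates are captured. A minimal sketch, with invented dates and an assumed 91-day approximation of the 3-month window:

```python
# Sketch of the on-schedule test described above: an inspection is
# timely if it occurs within 3 months (about 91 days) before or after
# the ideal future date. Dates below are invented for illustration.
from datetime import date, timedelta

WINDOW = timedelta(days=91)  # roughly 3 months

def on_schedule(ideal_future_date: date, actual_date: date) -> bool:
    return abs(actual_date - ideal_future_date) <= WINDOW

print(on_schedule(date(2017, 6, 1), date(2017, 8, 15)))  # True: 75 days past the ideal date
print(on_schedule(date(2017, 6, 1), date(2017, 12, 1)))  # False: about 6 months late
```

Pairing a check like this with recorded rescheduling reasons would give REAC the tracking mechanism the report finds lacking.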
REAC Is Piloting a Process for Hard-to-Staff Inspections but Lacks Plans to Evaluate Pilot Results

REAC has started a pilot program to staff inspections that contractors typically do not bid on, but it has not developed a formal plan to evaluate the results of this pilot. Since 2005, REAC has used the reverse auction program to save money on inspections and increase small business participation. However, under the reverse auction program, REAC has faced challenges in obtaining bids for inspections in some urban areas, such as Chicago, and some remote areas. To address this challenge, for a select number of properties, REAC has implemented a pilot program as an alternative to the reverse auction program. Under this alternative process, REAC has awarded multiple Indefinite Delivery/Indefinite Quantity (IDIQ) contracts to four companies to conduct these inspections. The IDIQ contracts are intended to ensure that REAC obtains physical inspections of all HUD-assisted properties on one task order rather than allowing contractors to selectively choose properties under the current program. The “all or none” approach, a key feature of these IDIQ contracts, eliminates the need to re-auction the same properties multiple times at higher prices to incentivize contractors to bid on the property. The pilot differs from REAC’s current physical inspection process in a number of ways. The pilot requires the companies that have been awarded the IDIQ contract to inspect all properties in a geographic region rather than to select which individual properties they want to bid on. Another difference is that the companies conduct quality assurance functions normally conducted by REAC staff, such as ensuring that inspectors are certified and identifying and addressing any gaps in inspectors’ performance. As of November 2018, REAC had focused its efforts on implementing the pilot but had not developed a formal plan to evaluate its results.
GAO’s guide for designing evaluations states that a program evaluation is a systematic study using research methods to collect and analyze data to assess how well a program is working and why. Some key attributes of effective program evaluation design include the following: identification of data sources and collection procedures to obtain relevant, credible information; clear criteria for making comparisons that will lead to strong, defensible evaluation conclusions; and an established evaluation scope that will ensure the evaluation is tied to research questions. Federal internal control standards also state that management should use quality information to achieve the entity’s objectives. In particular, the standards note the importance of management designing a process that uses the entity’s objectives and related risks to identify information requirements needed to achieve the objectives and address the risks. Further, the standards stress the importance of management obtaining relevant data from reliable internal and external sources in a timely manner based on the identified information requirements. REAC staff told us that they plan to measure the success of the pilot program by determining whether companies are completing quality inspections in a timely manner. However, REAC staff did not provide details about how the results of the pilot would be compared to the existing process and how the quality of inspections and the performance of inspectors would be measured and assessed. Absent a formal process that incorporates key attributes for effectively evaluating the results of the pilot program, REAC may lack the information needed to determine if the pilot is a success or whether changes are needed before moving from a pilot to a permanent process. 
HUD Has Made Limited Progress in Implementing Recommendations from an Internal Review of REAC

HUD has made limited progress in implementing recommendations from an internal review of REAC that was conducted in 2016. HUD created the Rapid Response and Resolution team—which consisted of staff from REAC and other units within HUD, including Multifamily Housing—in response to, among other things, problems associated with Eureka Gardens, a multifamily property in Jacksonville, Florida.

In 2015, REAC conducted a physical inspection of Eureka Gardens, and the contract inspector gave the property a score of 85. However, REAC later declared that the inspection was out of standard when it learned that the contract inspector had only inspected one of the two properties associated with Eureka Gardens (the better of the two properties). REAC officials told us that property management engaged in some deceptive practices (such as making quick, cheap repairs) in an attempt to influence the inspection score. According to these officials, the inspector did not conduct the inspection consistent with REAC’s standards and was subsequently decertified. REAC then reinspected the entire Eureka Gardens property with its own quality assurance staff and found numerous deficiencies, which resulted in the property receiving an inspection score of 62.

The Rapid Response and Resolution team developed 31 recommendations, 8 of which were specific to REAC, in January 2017. As of December 2018, nearly 2 years after the recommendations were developed and 3 years after the initial inspection of Eureka Gardens, REAC had reached concurrence with Multifamily Housing on 3 of these recommendations and asked for Multifamily Housing’s consideration of the funding and rulemaking requirements for the remaining 5. HUD had also not yet implemented the 3 recommendations on which it reached concurrence. Some of these recommendations address REAC’s management of the inspection process.
They include the following:

- Weighting of dwelling units in inspection score. The review team recommended that REAC consider increasing the weight of dwelling-unit deficiencies in the physical condition score. This recommendation attempts to address the issue, discussed earlier, of property owners who focus their repairs on common areas of the property over dwelling units.
- Notice provided to property owners of impending inspection. This recommendation reduces the time that REAC can take to notify property owners of an upcoming inspection from 15 days to 3 days for properties that have failed their previous REAC inspection. REAC staff said that this change would provide a more accurate picture of the condition of properties, since property owners generally address the maintenance of the property just before an inspection. In addition, this recommendation could address the concern discussed earlier of property owners hiring current or former REAC contract inspectors to help them prepare for an inspection. This recommendation should also encourage property owners to maintain properties in good condition at all times, according to REAC staff.
- Exigent health and safety risks. Another recommendation was that REAC work with Multifamily Housing and PIH to implement a risk-based exigent health and safety abatement verification policy. According to REAC staff, some properties certify that they have corrected exigent health and safety deficiencies when they have not done so.

We found that many inspections conducted from fiscal years 2013 through 2017 had at least one exigent health and safety deficiency, and the percentage has been higher in recent years (see table 2). Field office staff from PIH and Multifamily Housing may check to ensure that these repairs have been made when they are onsite. However, neither of these offices has a formal program to ensure that property owners are actually addressing the exigent health and safety issues.
As a result, property owners may choose to correct only those deficiencies that they believe will be checked by HUD field office staff, according to REAC staff.

Federal internal control standards state that management should identify, analyze, and respond to risks related to achieving the defined objectives. By establishing the Rapid Response and Resolution team, HUD took the steps of identifying the risks to its inspection process and designing responses to these risks. However, the standards also call for remediating identified internal control deficiencies on a timely basis. HUD officials we met with attributed the delay in implementing the recommendations to prior vacancies in some senior leadership positions, including positions in Multifamily Housing. HUD’s delay in implementing most of the recommendations from the Rapid Response and Resolution team affects REAC’s ability to respond to weaknesses it has identified in the inspection process in a timely manner.

REAC’s Processes for Selecting, Training, and Developing Inspectors Have Weaknesses

REAC Sets but Does Not Verify Qualification Requirements for Contract Inspector Candidates

Contract inspector candidates certify through an application that they meet REAC’s qualification requirements, but REAC does not currently verify that candidates have met these requirements before REAC selects them for training and determines them to be eligible to inspect HUD-assisted properties. Before inviting candidates to participate in inspector training, REAC requires them to certify that they meet three main qualifications:

- Inspections. Candidates must have conducted a minimum of 250 residential or commercial inspections.
- Building trades knowledge. Candidates must have building trades knowledge, such as knowledge of construction methods or electrical systems.
- Computer literacy. Candidates must be able to use email, the internet, and Microsoft Windows.
However, REAC does not require documentation from contract inspector candidates demonstrating that they successfully conducted 250 inspections. REAC officials told us that they intend to verify a sample of the 250 inspections for each inspector, but as of November 2018 they had not yet developed a process for doing so, such as by developing a methodology for sampling and a timeline for contacting references. In contrast, one of the home inspection associations we met with, ASHI, requires certified inspector candidates to submit a list of 250 fee-paid home inspections that meet or exceed the ASHI standards and to provide a notarized affidavit validating those inspections.

In addition, REAC staff told us that some contract inspector candidates have inspection experience based on inspections that are not as rigorous as those conducted using the Uniform Physical Condition Standards protocol. Participants in three of the four discussion groups we held with REAC quality assurance inspectors and supervisors told us that they had trained candidates who had included information on their applications about previous inspection experience that was not well matched to REAC’s inspection process. For instance, some inspector candidates submitted Federal Emergency Management Agency inspections and U.S. Army Office of Housing inspections as evidence of having completed 250 inspections, but REAC officials said these inspections are not as comprehensive as REAC inspections because they do not assess building systems, such as electrical or heating, ventilation, and air conditioning systems.

Federal internal control standards call for management to recruit competent individuals so that they are able to accomplish their assigned responsibilities. In addition, key principles for workforce planning state that agencies need to determine the critical skills and competencies necessary to achieve their goals.
REAC officials told us that the inspector training program should weed out inspector candidates who may not have the appropriate qualifications. However, although REAC officials told us that inspector candidates have been removed from training for not having the requisite skills, the officials were not able to determine how many candidates had misrepresented their qualifications on their applications or had failed training for other reasons. REAC does not verify the inspections submitted by inspector candidates—relying instead on training to screen out unqualified candidates—and does not specify what types of inspections count as qualifying inspections. As a result, REAC may be allowing candidates with insufficient experience to proceed in the training process, which may waste resources by training candidates who are unlikely to become successful inspectors.

Training for Contract Inspectors Is Not Consistent with Key Attributes of Effective Training and Development Programs

Contract inspector candidates must complete several phases of REAC training—online, in-class, and field—and pass associated examinations, as well as a background check.

- Online training. Inspector candidates first complete a 6-week online training that includes web-based modules on the Uniform Physical Condition Standards protocol and the use of the software system for the handheld data collection device. Candidates must pass a pre-certification examination to progress to the next phase of training.
- In-class training. After passing a background check, inspector candidates then begin in-class training. This phase consists of 3 to 4 days of in-class training led by REAC quality assurance inspectors and covers the compilation bulletin, Uniform Physical Condition Standards protocol, best practices, simulations of the inspection software, and hands-on practice exercises using the software.
To proceed to field training, inspector candidates must pass a certification examination with a minimum score of 75 percent that covers material from both the compilation bulletin and the Uniform Physical Condition Standards protocol.

- Field training. The last phase is a 5-day field training course that culminates in a field examination. REAC quality assurance inspectors lead and provide instruction for the first 4 days of field training. Inspector candidates independently conduct a mock inspection using the Uniform Physical Condition Standards protocol on the fifth day, and a quality assurance inspector evaluates the candidate’s performance.

REAC has made changes to training in recent years. For example, REAC began using actual HUD-assisted properties, rather than simulated properties, for the mock inspection. Some quality assurance staff and property owner associations told us they regarded the changes made in recent years to be beneficial. Participants in three of the four discussion groups we held with quality assurance supervisors and inspectors, as well as two representatives of property owner advocacy organizations, said that, in addition to classroom training, field training on a physical property helped to assess the competency of inspector candidates. In addition, stakeholders—including property managers, contract inspectors, and REAC staff—told us the mock inspections have been effective at providing training to new inspectors, and that the professionalism of contract inspectors has improved.

REAC contracts with a private vendor to provide the contract inspector online training, and the vendor provides data and reports that REAC staff use to track inspector candidates’ progress through the online training modules. REAC officials told us they use this information to identify areas of the training where candidates struggle and to help revise the training material. In addition, REAC solicits feedback from contract inspector candidates on the online training.
REAC also uses key performance indicators to track the number of inspector candidates who enroll and whether they pass or fail training. However, REAC does not currently have formal metrics or use data to track the effectiveness of its three phases of training. For instance, REAC does not track key measures of performance that could provide management with information to improve the training process, such as how individuals score in each section of the in-class training examination and their field examinations. REAC also does not track the resources spent on training, either in terms of funds spent or the number of quality assurance inspectors who participate.

According to key practices we have identified for training and development, agencies should have processes to systematically track the cost and delivery of training and measure the effectiveness of those efforts. REAC officials said that they would like to have such mechanisms and have developed a proposal to consolidate training functions and better align training to REAC’s strategic goals. However, the proposal does not include performance measures for evaluating the effectiveness and efficiency of training. Use of cost-tracking and performance measures tied to its strategic goal of improving the inspection process could improve REAC’s ability to manage scarce resources, evaluate the effectiveness of its training program, and plan for future training.

Quality Assurance Inspector Training Requirements May Not Cover All Job Duties and Are Not Documented

REAC’s quality assurance inspectors—who train and oversee contract inspectors—must be able to conduct physical inspections of properties as well as assess contract inspectors’ performance. To assess contract inspector performance, quality assurance inspectors oversee and mentor contract inspectors during CQA reviews and provide them feedback in a collaborative manner, an approach REAC management implemented in 2017.
Some senior quality assurance inspectors are also responsible for leading classroom and field training for contract inspectors. According to REAC officials, REAC’s quality assurance inspectors receive the same training as contract inspectors on the Uniform Physical Condition Standards inspection protocol. However, REAC’s training for quality assurance inspectors does not include formal instruction on how to coach or provide feedback during CQA reviews. Instead, new quality assurance inspectors are provided with on-the-job training, and they can conduct CQA reviews independently only when quality assurance supervisors are satisfied that they are sufficiently competent. Beyond the on-the-job training, quality assurance inspectors are encouraged to undergo additional online training on coaching, but there are no specific training requirements related to conducting CQA reviews. Participants in two of our three discussion groups with quality assurance inspectors told us they were not sure how to provide the collaborative coaching and mentorship REAC officials said they wanted. REAC also does not specifically train quality assurance inspectors in how to provide classroom and field training to contract inspectors, and participants in all three discussion groups with quality assurance inspectors told us that instructors do not seem to take a consistent approach to classroom and field training.

In addition, REAC’s training requirements for quality assurance inspectors are not documented in the quality assurance standard operating procedures or other documents we reviewed. For example, REAC’s new, more collaborative approach to CQA reviews was communicated to quality assurance inspectors during staff meetings, but REAC staff have not documented the approach or developed any specific training. Some contract inspectors told us that quality assurance inspectors often have less experience conducting inspections than they do.
They suggested that this gap may affect quality assurance inspectors’ ability to competently oversee CQA reviews and conduct QCIs. REAC officials told us they are considering changing quality assurance inspector training requirements to be more rigorous than contract inspector training. For example, staff from REAC’s Quality Control group, created in 2017, told us that they are considering expanding training on the five inspectable areas and assessing quality assurance inspectors to see if they need additional support in any of these areas. They would also like to require quality assurance inspectors to pass the training examinations with minimum scores of 90 percent, instead of the score of 75 percent that currently applies to both contract and quality assurance inspectors. However, the Quality Control group has not implemented these changes, officials said, because its staff resources are limited, and staff have been reallocated to support other projects within REAC.

In comparison, one of the home inspector associations we met with, InterNACHI, has specific requirements for its instructors. According to the association, its instructors are certified master inspectors, have completed a minimum of 1,000 paid inspections or hours of education or some combination thereof, and have conducted inspections for a minimum of 3 years. The instructors also assist in developing the educational material for training courses.

Federal internal control standards state that management should demonstrate a commitment to recruit, develop, and retain competent individuals. In particular, the standards note that agency personnel need to possess and maintain a level of competence that allows them to accomplish their assigned responsibilities. The standards also note the importance of management enabling individuals to develop competencies appropriate for key roles and tailoring training based on the needs of the role.
Without assessing whether training for quality assurance inspectors is sufficient and requiring additional training as needed, REAC may not have reasonable assurance that these inspectors have the skills required to oversee contract inspectors.

Federal internal control standards also state that management should design control activities to achieve objectives and respond to risks, and they note the importance of documenting internal control—for example, in management directives, administrative policies, or operating manuals. These standards also state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. For example, the standards note that effective documentation assists in management’s design of internal controls by establishing and communicating the who, what, when, where, and why of internal control execution to personnel. Without documenting training requirements that encompass all job responsibilities, REAC may not have reasonable assurance that the required skills and competencies are clearly communicated to and understood by quality assurance inspectors and aligned with job duties.

REAC Does Not Require Continuing Education for Contract and Quality Assurance Inspectors

REAC has ongoing requirements for contract inspectors to maintain their eligibility, but these requirements do not include continuing education. Contract inspectors must conduct at least 25 successful inspections per year—that is, inspections found to be within REAC’s inspection standards—and pass a background check every 5 years to remain certified. REAC also offers optional training through online refresher modules. However, REAC does not know how many contract inspectors use these resources or how effective they are. In comparison, ASHI and InterNACHI have continuing education requirements for their certified inspectors. ASHI requires inspectors to earn 20 continuing education credits annually.
The qualifying training courses must be ASHI-approved, the inspector must submit a signed affidavit attesting to having attended the training, and ASHI conducts spot checks to monitor compliance. InterNACHI requires inspectors to earn 24 continuing education credits annually and to pass the InterNACHI Online Inspector Examination with a score of 80 percent or better every 3 years.

REAC encourages quality assurance inspectors to take additional training but does not require continuing education. REAC offers optional “dine-and-learn” events to update both contract and quality assurance inspectors on policy and procedure changes and point out errors they commonly observe. In 2017 REAC also began offering limited coaching to contract inspectors by quality assurance inspection reviewers, a process separate from CQA reviews. The reviewers compare physical defects identified by contract inspectors to those that were identified by quality assurance inspectors during QCIs of the same property. Reviewers then provide one-on-one feedback to contract inspectors to address any discrepancies in inspection scores.

REAC officials said that for continuing education they prefer self-paced learning to formal instruction because it more appropriately matches the varying education needs of inspectors. REAC’s strategic plan proposes developing REAC-wide policies for staff training and skill development, but it does not include any requirements for continuing education. Key practices we have previously identified for training and development suggest that agencies should encourage employees to take an active role in their professional development, which can include requiring employees to complete a specific level of continuing education. Ongoing training requirements for contract and quality assurance inspectors could help REAC ensure that inspectors are up-to-date on REAC policies and industry standards.
Such continuing education could also refresh existing knowledge, helping contract and quality assurance inspectors conduct high-quality inspections consistently. Continuing education could also help quality assurance inspectors develop their mentoring and coaching skills, which would better enable them to develop and oversee contract inspectors.

REAC’s Processes for Monitoring and Evaluating Contract and Quality Assurance Inspectors Have Weaknesses

REAC Uses Several Mechanisms to Monitor and Evaluate Contract Inspectors

REAC’s mechanisms to monitor and evaluate its contract inspectors include collaborative quality assurance reviews, quality control inspections, and various other monitoring tools.

Collaborative Quality Assurance Reviews

REAC uses CQA reviews to monitor, evaluate, coach, and provide feedback to contract inspectors. REAC has documented its processes for conducting and reporting on CQA reviews in field training guidance, standard operating procedures, and the compilation bulletin. To determine which inspections will receive a CQA review, REAC combines a risk-based approach—targeting low-performing inspectors—with scheduling needs, based on the timing and geographic location of the inspection. REAC conducted almost 3,000 CQA reviews from fiscal years 2013 through 2017. As shown in table 3, the percentage of inspections each year that received a CQA review ranged from about 3 to 8 percent.

Our analysis of CQA review data shows that some contract inspectors are not conducting inspections in accordance with REAC standards. From fiscal years 2013 through 2017, an average of 17 percent of CQA reviews resulted in contract inspectors receiving a rating that was outside of REAC’s physical inspection standards, referred to as an outside standard rating.
This rating is based on the contract inspector committing at least 1 of 18 types of performance- or scheduling-related REAC protocol violations, with 8 of the performance violations resulting in an automatic outside standard rating. The percentage of outside standard ratings was significantly higher in fiscal years 2015 through 2017 than in 2013 and 2014 (see fig. 4). According to REAC officials, this increase was likely due, in part, to an increase in the number of less experienced contract inspectors. Specifically, from September 2013 through December 2015, REAC attempted to expand the pool of contract inspector candidates by lowering the required number of inspections from 250 to 50. REAC officials confirmed that these inspectors were less experienced and more likely to violate protocols.

As previously discussed, REAC has taken steps to make its CQA reviews more collaborative, but some stakeholders said that challenges remain. REAC’s 2015 standard operating procedures stated that the goal of CQA reviews should not be to designate a contract inspector as outside standard, but rather to ensure inspection accuracy and to improve the knowledge of the contract inspector. However, it was not until more recently that REAC management emphasized in several staff meetings the need for quality assurance inspectors to take a collaborative approach, according to REAC staff. Despite this new emphasis, participants in a discussion group we held with contract inspectors told us that they believe a punitive approach persists with some quality assurance inspectors, that repeated pairings of contract and quality assurance inspectors could lead to bias, and that contract inspectors who receive a high number of CQA reviews may feel that REAC is targeting them.

Quality Control Inspections

REAC uses QCIs to further ensure the accuracy of inspections conducted by contract inspectors.
As previously noted, a combination of factors can lead REAC to reject an inspection and trigger a QCI, including significant differences between the current inspection and previous inspections and other contextual factors, such as the inspector’s past CQA review performance. In a QCI, the quality assurance inspector reviews an inspection report and then conducts a new inspection to identify potential weaknesses and evaluate the inspector’s performance. The QCI results in a new inspection score. REAC has standard operating procedures that document requirements for scheduling, conducting, and reporting QCIs.

REAC completed 226 QCIs from March 2017 through June 2018. Our review found that more than 50 percent of QCIs resulted in an outside standard rating. On average, contract inspectors gave properties a score that was 16 points higher than the score given subsequently by quality assurance inspectors, indicating that those contract inspectors missed deficiencies. Of these inspections, about 15 percent that had initially received a passing score from the contract inspector failed the subsequent QCI.

Other Monitoring Tools for Contract Inspectors

REAC also uses ratings and reports to oversee contract inspectors. REAC assigns each contract inspector a rating based on a combination of factors, including CQA results and the percentage of inspections rejected, and these ratings help target which inspectors should receive CQA reviews and QCIs. REAC also produces two reports for all contract inspectors to assist in its oversight:

- Defect Comparison Reports compare the specific deficiencies reported by an inspector to the frequency with which other contract inspectors reported that same deficiency across properties. REAC primarily uses the results to target areas in which to coach contract inspectors who have been flagged for a QCI.
- Defect Delta Reports compare deficiencies in a contract inspector’s report to deficiencies a quality assurance inspector reported in a follow-up inspection (usually a QCI). REAC primarily uses this information to identify the types of deficiencies the contract inspector is missing.

REAC Has Not Met Management Targets for Reviews of Contract Inspectors

REAC did not meet management targets for the number of CQA reviews to be conducted in any quarter from fiscal years 2013 through 2017 (see fig. 5). REAC officials told us their management target is to conduct three CQA reviews for each high-risk contract inspector and two CQA reviews for each lower-risk contract inspector each quarter. REAC officials told us that in developing the targets, they attempted to balance risks to the quality of inspections against the resources available.

In addition, as of June 2018, REAC had not met management targets for the timeliness of QCIs in any quarter. REAC has no documented timeliness goals for QCIs, but REAC officials told us that QCIs are supposed to take place within 30 days of the original inspection date because the condition of the property can change over time (see fig. 6).

REAC officials told us they did not meet these management targets for CQA reviews and QCIs because they did not have enough quality assurance inspectors. In addition, when quality assurance inspectors are pulled onto other projects, such as supporting HUD’s efforts related to natural disasters, REAC’s ability to conduct CQA reviews and QCIs is reduced. For example, in fiscal year 2018, 28 quality assurance inspectors were pulled offline to assist HUD in areas affected by hurricanes. REAC officials told us these temporary reassignments have affected their ability to implement the quality assurance process. REAC staff told us that the quality of inspections may have deteriorated because some contract inspectors were aware that quality assurance inspectors would not be conducting CQA reviews or QCIs during those post-disaster periods.
REAC has recently hired more quality assurance inspectors to help address staffing shortages. In addition, REAC officials told us that they intend to take into account the likely effects of natural disasters on their ability to conduct quality assurance reviews when planning for these reviews in the future, but REAC has not yet developed a plan to meet its targets that includes, for example, mechanisms to mitigate resource constraints and unforeseen demands on staff.

According to REAC’s strategic plan, to produce physical inspections that are reliable, replicable, and reasonable, REAC is to look for patterns and trends in inspection results, such as inconsistencies between inspectors, regional and area differences, and patterns in different inspection criteria. In addition, the strategic plan calls for REAC to assess and improve the quality of contract inspectors.

However, if REAC is unable to meet its management targets for CQA reviews, it may not be able to consistently produce high-quality inspections because it is not providing routine opportunities for contract inspectors to receive coaching from quality assurance inspectors, which could include addressing deficiencies that the contract inspectors did not initially identify and record. In addition, if QCIs are not conducted shortly after the original inspections, REAC may not be able to verify the quality of the inspection because the condition of the property could change over time. REAC’s inability to meet management targets for CQA reviews and QCIs could also affect its ability to monitor patterns in inspection results because, for example, the quality of QCI data would be less reliable due to the lapse in time. As a result, REAC may not be using quality assurance inspector resources as effectively as possible.
REAC Takes Administrative Actions against Contract Inspectors Who Do Not Meet REAC Requirements

REAC’s Inspector Administration division administers a variety of administrative actions and disciplinary sanctions against contract inspectors in response to complaints or CQA reviews and QCIs. Inspector Administration officials said that they acknowledge and follow up on all complaints received from property representatives and residents, among others. Complaints about a contract inspector can relate to conduct, inspection protocol violations, scheduling, and conflicts of interest, among other issues. Inspector Administration uses the Code of Conduct, Uniform Physical Condition Standards inspection protocol, and compilation bulletin as standards for evaluating contract inspector conduct. In order of increasing severity, Inspector Administration can issue a letter of warning, issue a performance deficiency, suspend or decertify the inspector, or refer the inspector to HUD’s OIG, among other actions. While Inspector Administration can use professional judgment in adjudicating complaints, some actions are automatic. For example, decertification is automatic for inspectors with three or more performance deficiencies, inspectors found to have engaged in egregious misconduct, or inspectors who conduct fewer than 25 inspections annually.

Inspector Administration took more than 700 administrative enforcement actions against contract inspectors from fiscal years 2013 through 2017 (see fig. 7). As part of its effort to reform its contract inspector pool, REAC decertified 127 inspectors from fiscal years 2013 through 2017 due to inactivity, conduct, or performance issues. For example, REAC decertified the contract inspector who gave Eureka Gardens a passing inspection score even though it was in poor physical condition.
Two advocacy organizations told us they noticed that REAC was decertifying more inspectors than in the past, and one said that the quality of contract inspectors had improved as a result. In response to concerns from contract inspectors, Inspector Administration is proposing changes to, among other things, provide contract inspectors who are subject to potential enforcement actions with more opportunities to present their perspective. For example, Inspector Administration would allow contract inspectors to appeal performance deficiencies earlier in the process. Other proposed changes would make performance deficiencies based on outside standard ratings discretionary instead of automatic and would remove a performance deficiency from a contract inspector’s record after 25 consecutive inspections without a new performance deficiency, instead of the current 30. Inspector Administration officials said the new rules would also adjust decertification sanction periods, which specify the amount of time a decertified contract inspector must wait before reapplying to REAC, to account more appropriately for the reason the contract inspector left REAC (e.g., resignation, performance, or conduct).

REAC’s Quality Control Group Has Not Yet Implemented Procedures for Inspector Oversight

REAC created the Quality Control group to standardize quality assurance inspector reviews by conducting more frequent oversight and looking for trends across all quality assurance inspectors, according to a Quality Control official. This official said that one type of oversight involves a quality control staff member conducting an identical inspection 1 day after that of a quality assurance inspector to determine how well the inspector recorded deficiencies. Inspections are then rated as either “acceptable” or “unacceptable” based on whether the inspector followed established protocols and observed and accurately recorded 90 percent or greater of the existing deficiencies.
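The acceptability criterion described above combines a protocol check with a 90-percent recording threshold. A minimal sketch, with illustrative names (not REAC's actual systems):

```python
def rate_inspection(followed_protocols: bool,
                    deficiencies_recorded: int,
                    deficiencies_existing: int) -> str:
    """Illustrative rating rule: an inspection is "acceptable" only if
    the inspector followed established protocols and recorded 90
    percent or more of the deficiencies that actually existed."""
    if deficiencies_existing == 0:
        share = 1.0  # nothing existed to miss
    else:
        share = deficiencies_recorded / deficiencies_existing
    return "acceptable" if followed_protocols and share >= 0.90 else "unacceptable"
```

Under this rule, recording 9 of 10 existing deficiencies while following protocol would rate as acceptable, but recording 8 of 10, or breaking protocol, would not.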
According to the official, inspection reviews are expected to be shared with quality assurance management and individual supervisors to support quality assurance inspector development. This official also told us Quality Control plans to conduct reviews of quality assurance inspectors at least once a year, or more frequently as needed. In November 2018, Quality Control developed a mission statement stating that the primary goal of the group is to improve the consistency of inspections. Also in November 2018, Quality Control developed procedures for reviewing quality assurance inspectors, which include processes for conducting field reviews of completed inspections, criteria for acceptable inspections, and processes for providing feedback. An official from Quality Control said that the group worked with other divisions within REAC, such as PASS Quality Assurance and Research and Development, to develop the procedures and criteria for evaluating quality assurance inspectors. However, the official told us that neither the mission statement nor the procedures has been implemented, in part because Quality Control staff have repeatedly been pulled onto other special projects. The official told us that these documents have been approved by REAC management and that Quality Control intends to implement the procedures in 2019. According to federal internal control standards, management should implement control activities through policies. For example, the standards call for documenting in policies each unit’s responsibility for an operational process’s objectives and related risks. Without finalizing and implementing its policies and procedures for reviewing quality assurance inspectors, Quality Control may not be able to provide consistent reviews of quality assurance inspectors, which could affect the quality of inspections as well as the feedback and coaching quality assurance inspectors provide to contract inspectors.
Prioritizing the implementation of Quality Control’s review procedures could help ensure that Quality Control achieves its objectives and provides consistent reviews of quality assurance inspectors.

Performance Standards for Quality Assurance Inspectors Do Not Fully Align With Job Duties

The standards REAC uses to measure quality assurance inspectors’ performance do not fully align with their job duties. Quality assurance supervisors are primarily responsible for evaluating quality assurance inspector performance using five performance elements, which REAC’s performance appraisal system describes as follows:

Collaboration: Provide customer service communication both verbally and in writing to internal and external HUD stakeholders, customers, or anyone who comes in contact with quality assurance services.

Individual training: Develop competencies and perform individual training associated with job duties.

Personal investment: Improve processes, such as through special projects or self-initiated projects that improve the quality assurance division’s standard operating procedures, the Uniform Physical Condition Standards inspection protocol, the compilation bulletin, or others.

Risk management: Maximize scarce resources and be cost efficient to the government in all aspects of job duties and assignment.

Meeting the need for quality affordable rental housing: Perform in accordance with all protocols and standard operating procedures, and complete CQA reviews and Uniform Physical Condition Standards inspections.

REAC’s performance appraisal system includes descriptions of the standards for each of the five performance elements, as well as supporting behaviors. For example, to be rated fully successful for “meeting the need for quality affordable rental housing,” quality assurance inspectors should independently complete Uniform Physical Condition Standards inspections with no more than two inspections being rejected by REAC within the rating period.
Based on the standards, quality assurance inspectors are also expected to conduct CQA reviews and field trainings for contract inspector candidates. However, the performance appraisal system for quality assurance inspectors does not include performance elements with competencies that relate to all of their job duties. For example, the performance appraisal system does not define expectations for performing CQA reviews or QCIs. In addition, it does not include criteria for evaluating the training, coaching, and mentoring that quality assurance inspectors are expected to provide to contract inspectors. Quality assurance supervisors can incorporate information from reviews of quality assurance inspectors by Quality Control in their performance evaluations. However, REAC officials told us that Quality Control does not evaluate inspectors based on the performance elements and standards. Instead, Quality Control’s reviews only evaluate an inspector’s performance as it relates to the QCI being reviewed, such as following established protocols and observing and accurately recording 90 percent or greater of the existing deficiencies. In addition, Quality Control’s reviews do not include evaluations of a quality assurance inspector’s performance for other key job duties, such as training and mentoring contract inspectors. According to key practices we have identified for effective performance management, agencies should use competencies to define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. REAC staff told us they do not know when the performance elements and standards for quality assurance inspectors were last revisited, and new job duties such as conducting QCIs have been added that are not part of the performance elements. 
Better alignment between the performance competencies and the job responsibilities of quality assurance inspectors would help ensure that inspectors are assessed on all their key duties—including training and mentoring contract inspectors—which could improve the quality of inspections and reviews.

HUD’s Key Rental Programs Rely on REAC Physical Inspection Scores as Part of Their Monitoring and Enforcement Processes

PIH and Multifamily Housing each have separate processes to monitor the conditions of HUD-assisted properties, including physical conditions, and take enforcement actions if properties are not decent, safe, sanitary, and in good repair. PIH assesses the performance of PHAs on key indicators through a federal regulatory process—the Public Housing Assessment System. PIH also monitors PHAs through a Risk Assessment Protocol, which incorporates qualitative data and determines actions to address identified risks. The Risk Assessment Protocol is intended to be a proactive approach to address risk at PHAs and use resources efficiently. Separately, Multifamily Housing monitors properties that score below 60 on the REAC physical inspection. To account for properties scoring 60 or above on the REAC inspection, as well as to monitor property characteristics other than physical conditions, Multifamily Housing assesses properties through its risk rating system.

REAC Scores Factor into PIH’s Assessment of Public Housing Agencies’ Performance and Help Determine Actions to Address Deficiencies

Public Housing Assessment System Process

The Public Housing Assessment System uses the REAC physical inspection score for each public housing development to determine the physical performance of the PHA. The physical performance of PHAs is one of four indicators within the Public Housing Assessment System, which assesses the performance of PHAs and determines a performance designation.
The four indicators are (1) the physical condition of the PHA’s housing developments, (2) the financial condition of the agency, (3) the management operations of the agency, and (4) utilization of property modernization and development funds (capital fund). REAC inspection scores are adjusted to reflect the size of each housing development, and their weighted average is the physical performance indicator for a PHA. The physical indicator score, worth a maximum of 40 points (out of 100 points total) toward the overall Public Housing Assessment System score, has the highest value of the four indicators. To determine the financial, management, and capital fund indicators, PHAs upload information electronically to REAC, and REAC’s data systems generate a score for each indicator and overall. Figure 8 shows the maximum value for each indicator and overall score. Table 4 explains how the indicator and overall assessment scores lead to a performance designation. PHAs are assessed and receive a performance designation every 1 to 3 years, according to their size and prior performance designation. PHAs with at least 250 units receive an assessment annually. PHAs with fewer than 250 units receive an assessment every 3 years if designated as a high performer, 2 years if designated as a standard or substandard performer, and annually if designated as a troubled or capital fund troubled performer. In years that smaller PHAs do not receive an assessment, they must provide financial data to REAC but do not receive a published assessment score or designation. Following REAC’s release of the performance designations, PHAs and PIH each have a role in ensuring that physical deficiencies are corrected. REAC is not responsible for ensuring that PHAs correct physical deficiencies. According to federal regulations, PHAs must take certain actions depending on their performance designation. 
PHAs designated as troubled must enter into a recovery agreement with PIH to improve their performance within 2 years. PHAs designated as standard or substandard performers must correct deficiencies identified in the assessment within 90 days, or they may develop a plan to correct the deficiency within a specified time frame. PIH officials told us they monitor whether PHAs designated as standard or substandard performers have developed a plan or if field offices are assisting the PHA. PHAs designated as high performers are not required to correct deficiencies. Table 5 shows the number of PHAs in each designation for fiscal years 2013 through 2017, including some PHAs exempt from receiving a performance designation (e.g., small PHA deregulation). If PHAs do not correct deficiencies or improve their performance, PIH officials told us they can initiate a series of actions. First, PIH field offices are to remind PHAs of their obligation to provide housing that is decent, safe, sanitary, and in good condition. If those conversations are not effective, PIH can take administrative or enforcement actions. For example, PIH can refer PHAs to the Departmental Enforcement Center, which can exclude PHA leadership from participating in HUD programs. However, we previously found that PIH refers PHAs to the Departmental Enforcement Center infrequently, making 12 referrals in 2017 and 25 referrals in 2016. In rare instances, PIH also can place the PHA into administrative receivership and take control of the PHA’s operations. These actions also could be part of a recovery agreement for troubled performer PHAs. PIH officials told us they initiate actions specified in the recovery agreement for troubled performer PHAs that do not improve their performance within 2 years. 
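The size-weighted physical indicator described earlier can be sketched as a small calculation. The report states that inspection scores are adjusted to reflect development size and that the physical indicator is worth at most 40 of 100 points; using unit counts as the weights and scaling the 0–100 weighted average linearly to 40 points are assumptions made for illustration, not HUD's documented formula.

```python
def physical_indicator(developments):
    """Illustrative PHAS physical indicator for one PHA.

    `developments` is a list of (reac_score, units) pairs.  Weighting
    by unit count and linear scaling to the 40-point maximum are
    assumptions; the report states only that scores are adjusted for
    development size and that the indicator is worth up to 40 points.
    """
    total_units = sum(units for _, units in developments)
    weighted_avg = sum(score * units for score, units in developments) / total_units
    return weighted_avg * 40 / 100  # scale 0-100 average to 40-point indicator
```

Under these assumptions, a PHA with a 100-unit development scoring 80 and a 300-unit development scoring 60 would have a weighted average of 65 and a physical indicator of 26 points.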
PIH’s Risk Assessment of Public Housing Agencies Incorporates REAC Scores and Determines Actions to Address Physical and Other Risks

Risk Assessment Protocol

To inform its monitoring efforts, PIH uses the Risk Assessment Protocol to assess PHAs in four risk categories: physical, governance, financial, and management. PIH collects quantitative data from various HUD data systems and qualitative data from a survey administered by PIH field offices. The physical risk category uses a PHA’s Public Housing Assessment System physical indicator score—which is determined using the REAC inspection score—as one factor in determining physical risk. PIH also assesses physical risk using the qualitative survey and location of the PHA. Additionally, the performance designation from the Public Housing Assessment System—which incorporates the REAC inspection score—is included as part of assessing governance risk. The financial and management categories do not incorporate the REAC physical inspection score. For each risk category, PIH assigns points and designates a risk level for each PHA, as shown in figure 9. A higher number of points is associated with higher risk. For example, PIH assigns 25 points to PHAs with a physical indicator score of 25 or below and zero points to PHAs with a physical indicator score of 28 or higher. After assigning points, PIH designates a risk level for each risk category, as well as overall, based on the average PHA score for each category and overall. These risk designations are very high, high, moderate, and low. PHAs furthest from and above the average score are designated as very high risk, and PHAs closest to the average score are designated as low risk. PIH designates a risk level to PHAs every quarter, although some information used to determine the risk level is not updated every quarter. For example, the qualitative survey is updated every other quarter.
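The physical-risk point assignment above can be sketched directly from the two stated endpoints. The report does not say how scores between 25 and 28 are treated, so this illustrative function deliberately returns no value for that band rather than guess; the function name is hypothetical.

```python
def physical_risk_points(physical_indicator_score):
    """Illustrative point assignment for the physical risk category.

    The report states only the endpoints: 25 points for a physical
    indicator score of 25 or below, zero points for 28 or higher.
    Scores strictly between 25 and 28 are unspecified, so None is
    returned for that band.
    """
    if physical_indicator_score <= 25:
        return 25
    if physical_indicator_score >= 28:
        return 0
    return None  # 25 < score < 28: treatment not specified in the report
```
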
PIH determines actions—called risk treatments—to address each risk category based on a PHA’s risk level. PIH determines actions each quarter for every PHA newly designated as very high or high risk, and it determines actions every other quarter for all other very high, high, or moderate risk PHAs. To address physical risks, field office staff may provide training or technical assistance to PHAs. For example, field office staff told us they provided technical assistance by explaining the physical inspection standards and policies related to using operating funds to make physical repairs. Risk treatments have a completion date, and PIH field office staff are to monitor whether the treatment is effective. If the risk treatments do not result in improvements, PIH officials told us they can seek technical assistance from subject matter experts within PIH or can elevate the risk treatment, among other actions. For example, PIH can provide on-site assistance rather than remote assistance.

Multifamily Housing’s Process for Directing Property Owners to Correct Physical Deficiencies Is Based on REAC Scores

Process for Correcting Deficiencies

Multifamily Housing is required to direct property owners to correct physical deficiencies based on the REAC inspection score. For properties that score below 60 on the REAC physical inspection, Multifamily Housing issues property owners a notice to take the following actions: (1) provide a copy of the notice to residents; (2) survey 100 percent of the property to identify all physical deficiencies; (3) correct all deficiencies identified during the survey and the REAC inspection; (4) certify that they have corrected all deficiencies; and (5) submit a 100-percent survey of the property and certification of corrected deficiencies to HUD. Property owners should complete these actions within 60 days of receiving the notice but may request an extension if correcting the deficiencies will take longer than 60 days.
For example, Multifamily Housing may issue extensions for notices received in winter months because seasonal conditions may make certain repair work, such as pouring concrete, more difficult to complete within 60 days. Multifamily Housing schedules a follow-up inspection depending on whether property owners submit a certification, as well as if the property scores 30 or below on the inspection. For property owners who certify that deficiencies have been corrected, Multifamily Housing schedules the property’s next inspection to take place within 1 year after the date of the last inspection. For property owners who do not submit the certification or for properties that score 30 or below on the REAC inspection, Multifamily Housing or the Departmental Enforcement Center schedules a follow-up inspection as soon as possible. Multifamily Housing uses the REAC score from that next inspection to determine whether the owner corrected deficiencies.

Actions for Low-Scoring Properties

After issuing a notice, Multifamily Housing can take various actions if a property’s REAC inspection score remains below 60 or the owner does not certify or correct physical deficiencies. Table 6 summarizes the actions Multifamily Housing took in fiscal years 2016 and 2017. For example, Multifamily Housing officials initially can place a flag in a data system to indicate that an owner has not met requirements for properties to be decent, safe, sanitary, and in good repair. This flag may prevent the owner from further participation in HUD programs. Another action Multifamily Housing officials can take is to change the property’s management agent. Multifamily Housing officials told us this action has been successful in improving the physical conditions of properties when properties do not require significant repair work.
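The follow-up scheduling rules described above reduce to a two-branch decision: a score of 30 or below, or a missing certification, triggers an inspection as soon as possible; otherwise the next inspection falls within a year of the last one. A minimal sketch with illustrative names:

```python
def schedule_followup(score: int, owner_certified: bool) -> str:
    """Illustrative follow-up scheduling for a property that scored
    below 60: an inspection is scheduled as soon as possible if the
    score is 30 or below or the owner has not certified corrections;
    otherwise the next inspection occurs within 1 year."""
    if score <= 30 or not owner_certified:
        return "as soon as possible"
    return "within 1 year of the last inspection"
```
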
Multifamily Housing also can take more significant actions, such as terminating a rental assistance contract or foreclosing on a loan and relocating tenants from these properties. In addition to taking these actions, REAC and Multifamily Housing refer properties to the Departmental Enforcement Center when they score below a determined threshold. Upon publishing the inspection score, REAC refers properties that score 30 or below on a REAC inspection to the Departmental Enforcement Center automatically. Multifamily Housing officials told us they coordinate with relevant stakeholders to discuss these properties. Multifamily Housing also can refer properties electively to the Departmental Enforcement Center when they score between 31 and 59 on the REAC inspection. Further, Multifamily Housing can recommend specific actions for the Departmental Enforcement Center to take regardless of a property’s inspection score. The Departmental Enforcement Center can impose civil money penalties to encourage compliance with HUD’s regulations or limit a property owner from participating in HUD programs. Our analysis of referral data for physical conditions from fiscal years 2012 through 2017 shows that for 12 referrals, the Departmental Enforcement Center imposed civil money penalties through a settlement, and that no referrals resulted in a suspension or debarment. However, according to our previous work on the Departmental Enforcement Center, most referrals result from financial reviews rather than physical inspections.

HUD’s Threshold for Issuing Notices for Property Owners Is Inconsistent with Requirements of Appropriations Legislation

The Office of Multifamily Housing’s current practice of issuing notices to property owners when the REAC score is 59 or below is inconsistent with the legal requirement.
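The discrepancy comes down to two inclusive thresholds: HUD's practice issues notices at a score of 59 or below, while the appropriations acts require notices at 60 or below. A minimal sketch (names are illustrative) shows that a score of exactly 60 is the only case where the two rules diverge:

```python
def notice_issued(score: int, threshold_inclusive: int) -> bool:
    """Whether a notice to correct deficiencies is issued at or below
    the given inclusive score threshold (illustrative)."""
    return score <= threshold_inclusive

HUD_PRACTICE = 59  # Multifamily Housing's long-standing practice
STATUTE = 60       # threshold in the 2017 and 2018 appropriations acts

# Scores where the two thresholds produce different outcomes.
differs = [s for s in range(0, 101)
           if notice_issued(s, HUD_PRACTICE) != notice_issued(s, STATUTE)]
```

Running this confirms `differs` contains only the score 60, matching the report's observation that properties scoring exactly 60 would not receive a notice under HUD's approach.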
As previously discussed, for properties that score 59 or below on the REAC inspection, HUD issues notices for property owners to certify that deficiencies have been identified and corrected within 60 days. However, the 2017 and 2018 Consolidated Appropriations Acts state that HUD must provide a notice to owners of properties that score 60 or below on the REAC physical inspection. Multifamily Housing officials told us that they believe language in the appropriations acts is not clear regarding the threshold to issue notices to property owners. Specifically, the appropriations acts state that HUD should issue a notice for properties that score 60 or below, and also that HUD may withdraw the notice to property owners when they successfully appeal their inspection score to 60 or above. Additionally, Multifamily Housing officials told us that HUD’s long-standing and current practice is to issue notices when a property receives a score of 59 or below. According to our analysis of inspection data, 30 properties received a score of 60 from May 2017 to December 2017 and would not have received a notice to correct physical deficiencies under HUD’s approach. Unless Congress changes the threshold identified in appropriations acts from 60 to 59 or HUD changes its practice to issue notices to properties that score 60 or below, HUD’s actions will continue to be inconsistent with the legal requirement.

Other Multifamily Housing Monitoring Processes Also Incorporate REAC Scores

Multifamily Housing also uses other processes to monitor the physical condition of properties, including properties that score 60 or above on the REAC inspection. These other processes incorporate additional aspects of properties beyond physical conditions.

Risk Rating System

Multifamily Housing’s risk rating system uses information on properties’ physical, financial, and management conditions to assign one of three risk ratings—troubled, potentially troubled, or not troubled—to each property.
The REAC inspection score, along with actions taken to correct deficiencies, is one factor that determines the risk rating. Specifically, properties that score between 30 and 70 on the REAC inspection are rated as potentially troubled if the property owner is addressing physical deficiencies. Properties that score between 30 and 59 are rated as troubled if the owner has not certified that deficiencies have been corrected. Properties that score below 30 are rated as troubled and maintain that rating until the next REAC inspection. Multifamily Housing field office and headquarters staff told us they provide greater monitoring and oversight to properties rated as troubled and potentially troubled. Properties rated as troubled are required to develop an action plan to identify and document steps to address their risk, including physical risk. For example, a plan to improve the physical condition of a property may direct property owners to rehabilitate units. Properties rated as potentially troubled may develop such a plan but are not required to do so. Additionally, Multifamily Housing headquarters staff conduct a monthly call with field office staff to discuss properties rated as troubled. Multifamily Housing officials told us they review properties every 3 to 12 months based on the risk rating and can take actions if properties are not correcting issues. If property owners do not correct issues outlined in their plan, Multifamily Housing can take many of the actions listed previously, such as changing the management agent.

Other Monitoring Processes

Multifamily Housing can also monitor properties through other processes, such as site visits or other reviews. According to Multifamily Housing officials, field office staff conduct site visits of properties if they receive multiple complaints from tenants or notice a particular concern, or if the property receives media attention.
Multifamily Housing also can conduct site visits of properties through a Management and Occupancy Review. Multifamily Housing officials told us they are moving toward a risk-based approach, using results from prior reviews and a property’s risk rating to determine how often to conduct these Management and Occupancy Reviews. However, Multifamily Housing officials told us that budget and staffing constraints continue to limit the number of reviews completed annually, with less than half of project-based rental assistance properties reviewed in 2017. To complete Management and Occupancy Reviews, HUD staff or contractors review documentation to monitor whether properties are adhering to requirements for receiving HUD funding and to target potential issues. This review gathers information on seven factors of property management, including management of a property’s physical condition. As part of gathering information, HUD staff or contract administrators interview the property owner or agent and may visit a sample of housing units to verify that deficiencies identified in the REAC inspection have been corrected. The Management and Occupancy Review specifies—in a summary report for owners and agents—corrective actions to take within targeted completion dates, not to exceed 30 days, based on the documentation review and on-site visit. Properties that perform poorly on the review also must provide proof of taking these actions. If properties do not provide proof of taking these corrective actions, Multifamily Housing can take some of the previously listed actions, such as changing the agent of a property.

Conclusions

REAC’s inspection process annually identifies properties that are in poor physical condition and contain life threatening health and safety issues.
With over 2 million moderate- and low-income households living in public housing or multifamily properties assisted or insured by HUD, it is imperative that these properties are decent, safe, sanitary, and in good repair. Our review of REAC found areas for improvement in its inspection process:

Review of inspection process. A comprehensive review of the inspection process could help REAC identify risks and ensure it is meeting the goal specified in its strategic plan that inspections be reliable, replicable, and reasonable.

Sampling errors in inspection scores. If REAC were to resume reporting on sampling errors and develop a process to address properties that fall below certain cutoff scores when the sampling error is taken into account, it would have the information it needs to identify properties that may require more frequent inspections or enforcement actions.

Sampling methodology documentation. Comprehensive and organized documentation of the sampling methodology could help REAC preserve the institutional knowledge of important features of its inspection process, particularly when key staff leave the agency.

Timing of housing inspections. Improvements in REAC’s on-time performance of multifamily property inspections could provide HUD with more timely information on the physical condition of these properties and the information it needs to take any enforcement actions. Further, by developing mechanisms to track its progress on meeting the schedule for inspections and improving its collection of data on why inspections are delayed, REAC could better determine what factors are contributing to delays in conducting inspections.

Staffing inspections. A formal evaluation plan could help REAC determine if its pilot program for staffing inspections in difficult geographic areas is a success or whether changes are needed before moving from a pilot to a permanent process.

Implementation of open recommendations.
Taking timely actions on internal-review recommendations could help HUD to improve REAC’s inspection process and the safety of HUD-assisted properties.

We also found areas for improvement in REAC’s processes for selecting, training, and overseeing contract and quality assurance inspectors:

Inspector candidates’ qualifications. A more robust process for verifying contract inspectors’ qualifications could reduce the number of candidates with insufficient experience who participate in REAC’s training program, which could help REAC to expend fewer resources on training candidates who are unlikely to become successful inspectors.

Contract inspector training. Evaluating the effectiveness of its training program for contract inspectors could help REAC better assess the quality of the program and plan for future training.

Quality assurance inspector training. By developing and documenting training for quality assurance inspectors that encompasses all of their job responsibilities, REAC can better ensure that inspectors have the skills required to oversee contract inspectors.

Continuing education requirements. Continuing education requirements for contract and quality assurance inspectors could help REAC ensure that inspectors are up-to-date on REAC policies and industry standards.

Targets for reviews of contract inspectors. Improving its ability to meet management targets for CQA reviews and QCIs could help REAC better ensure that contract inspectors are receiving the feedback needed to improve their performance, thereby improving the quality of inspections.

Formal policies for Quality Control group. By implementing policies and procedures for the Quality Control group, REAC can help ensure that the group achieves its objective of providing consistent reviews of quality assurance inspectors that will enable these inspectors to improve their oversight roles.

Performance standards for quality assurance inspectors.
Reviewing and updating REAC’s performance standards for quality assurance inspectors so that they align with their job duties can help REAC ensure that staff understand how their duties are prioritized within REAC’s mission and improve the quality of performance reviews.

Finally, Multifamily Housing’s current practice of taking actions against property owners when the REAC score is 59 or below is inconsistent with the legal requirement to take action when the score is 60 or below. While in practice this affects very few properties, without either Congress changing the threshold identified in appropriations acts from 60 to 59 or HUD changing its practice to issue notices to properties that score 60 or below, HUD’s actions will continue to be inconsistent with the legal requirement.

Recommendations for Executive Action

We are making the following 14 recommendations to HUD:

The Deputy Assistant Secretary for the Real Estate Assessment Center should conduct a comprehensive review of the physical inspection process. (Recommendation 1)

The Deputy Assistant Secretary for the Real Estate Assessment Center should resume calculating the sampling error associated with the physical inspection score for each property, identify what changes may be needed for HUD to use sampling error results, and consider those results when determining whether more frequent inspections or enforcement actions are needed. (Recommendation 2)

The Deputy Assistant Secretary for the Real Estate Assessment Center should develop comprehensive and organized documentation of REAC’s sampling methodology and develop a process to ensure that documentation is maintained going forward.
(Recommendation 3) The Deputy Assistant Secretary for the Real Estate Assessment Center should track on a routine basis whether REAC is conducting inspections of multifamily housing properties in accordance with federal guidelines for scheduling and coordinate with the Deputy Assistant Secretary for Multifamily Housing to minimize the number of properties that can cancel or reschedule their physical inspections. (Recommendation 4) The Deputy Assistant Secretary for the Real Estate Assessment Center should design and implement an evaluation plan to assess the effectiveness of the Indefinite Delivery/Indefinite Quantity pilot in ensuring timely and quality inspections for properties in hard-to-staff geographic areas. (Recommendation 5) The Deputy Assistant Secretary for Multifamily Housing and the Deputy Assistant Secretary for the Real Estate Assessment Center should expedite implementation of the recommendations from the Rapid Response and Resolution Team. (Recommendation 6) The Deputy Assistant Secretary for the Real Estate Assessment Center should follow through on REAC’s plan to create a process to verify candidate qualifications for contract inspectors—for example, by calling references and requesting documentation from candidates that supports their completion of 250 residential or commercial inspections. The plan should also consider whether certain types of inspections—such as Federal Emergency Management Agency inspections and U.S. Army Office of Housing inspections—satisfy REAC’s requirements. (Recommendation 7) The Deputy Assistant Secretary for the Real Estate Assessment Center should develop a process to evaluate the effectiveness of REAC’s training program—for example, by reviewing the results of tests or soliciting participant feedback. (Recommendation 8) The Deputy Assistant Secretary for the Real Estate Assessment Center should revise training for quality assurance inspectors to better reflect their job duties. 
Revised training should be documented, include expanded subject matter training, and address skills that may not be included in training for contract inspectors—for example, instructing contract inspector candidate trainings and coaching and providing feedback. (Recommendation 9) The Deputy Assistant Secretary for the Real Estate Assessment Center should develop continuing education requirements for contract and quality assurance inspectors. (Recommendation 10) The Deputy Assistant Secretary for the Real Estate Assessment Center should develop and implement a plan for meeting REAC’s management targets for the timeliness and frequency of CQA reviews and QCIs. The plan should include consideration of resources of and demands on quality assurance inspectors, including the effect of natural disasters and other special assignments. (Recommendation 11) The Deputy Assistant Secretary for the Real Estate Assessment Center should ensure that Quality Control’s policies and procedures for overseeing quality assurance inspectors are implemented. (Recommendation 12) The Deputy Assistant Secretary for the Real Estate Assessment Center should review quality assurance inspector performance standards and revise them to better reflect the skills and supporting behaviors that quality assurance inspectors need to effectively contribute to REAC’s mission. (Recommendation 13) The Deputy Assistant Secretary for Multifamily Housing should report to Congress on why the agency has not complied with the 2017 and 2018 Consolidated Appropriations Acts requirement to issue notices to properties when the REAC score is 60 or below, including seeking any statutory flexibilities or exceptions believed appropriate. (Recommendation 14) Agency Comments and Our Evaluation We provided a draft of this report to HUD for review and comment. In written comments, reproduced in appendix V, HUD agreed with 11 recommendations, partially agreed with 2, and neither agreed nor disagreed with 1. 
In its written comments, HUD noted that it largely agreed with the findings and has been examining how it can develop, pilot, and evaluate an alternate approach to its inspection model that will address the issues raised in our report. Consistent with our report, HUD recognized that after 20 years, its physical inspection process has become susceptible to manipulation. HUD said it plans to pilot a new physical inspection process in one of HUD’s administrative regions later this year. HUD stated that given its limited resources, it will be unable to simultaneously develop the new process and implement all of the recommendations to its current process. We maintain that implementing the recommendations will help REAC to ensure that properties are decent, safe, sanitary, and in good repair. HUD agreed with 11 recommendations and provided specific information about planned steps to implement them. For example, for our first recommendation on conducting a comprehensive review of REAC’s physical inspection process, HUD noted in its written comments that it plans to develop new standards, protocols, scoring approaches, and software to be validated through a demonstration. In addition, if resources are available, HUD plans to contract with an external vendor to assess the accuracy and effectiveness of the new inspection process and the statistical validity of scoring. For our eighth and ninth recommendations on evaluating and revising training for contract and quality assurance inspectors, HUD noted that it would evaluate its internal training program for contract inspectors as it pilots its new inspection process and compare the results with its evaluation of an outsourced training approach. In addition, HUD noted that it would identify the subject matter expertise needed for quality assurance inspectors and provide training to address any skills gaps among these inspectors. 
HUD partially agreed with our fourth and sixth recommendations and noted some considerations for addressing them. HUD partially agreed with our fourth recommendation regarding tracking its progress on conducting inspections of multifamily properties in accordance with federal guidelines, but did not identify the reason for its partial agreement. In written comments, HUD described actions it plans to take that we consider consistent with the intent of the recommendation. We maintain that this recommendation should be implemented to achieve benefits, including better understanding of the factors that contribute to inspection delays. HUD also partially agreed with our sixth recommendation regarding expedited implementation of recommendations from the Rapid Response and Resolution Team. In written comments, HUD noted that in order to balance resources invested in the current approach with those needed to design future operations, it would consider whether the remaining recommendations from the Rapid Response and Resolution Team fit with the new inspection model that it plans to pilot. Whether in the current inspection model or a future one, we maintain that expediting implementation of the recommendations from the Rapid Response and Resolution Team will support that team’s intention to address conditions at troubled multifamily properties. HUD neither agreed nor disagreed with our second recommendation to resume calculating the sampling error associated with the physical inspection for each property, identify the changes that may be needed for HUD to use sampling error results, and consider those results when determining whether more frequent inspections or enforcement actions are needed. In response to this recommendation, HUD noted in its written comments that it is examining resource implications, regulations and policies that would need to be changed, and the viability and effectiveness of making the changes included in our recommendations. 
We maintain that implementing this recommendation would improve REAC’s inspection process by identifying properties that may require more frequent inspections or enforcement actions. We are sending copies of this report to the appropriate congressional committees and the Secretary of Housing and Urban Development. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or GarciaDiazD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report examines (1) the Department of Housing and Urban Development’s (HUD) Real Estate Assessment Center’s (REAC) process for identifying physical deficiencies; (2) REAC’s processes for selecting, training, and developing contract and quality assurance inspectors; (3) REAC’s processes for monitoring contract and quality assurance inspectors; and (4) HUD’s monitoring and enforcement processes for addressing physical deficiencies and how REAC’s information is used to support these processes. To address the first objective, we reviewed regulations and policies and procedures related to REAC’s physical inspection process. Specifically, we reviewed the final notice on REAC’s physical inspection scores and the 2017 update to REAC’s compilation bulletin, which is the guidance document for inspectors conducting physical inspections. We also reviewed REAC’s user guide, which explains how REAC’s inspection software and handheld data collection devices are used to conduct the inspection and record deficiencies. 
To describe the quality assurance processes for physical inspections, we reviewed REAC’s quality assurance standard operating procedures, which provide instructions to REAC’s quality assurance inspectors on how they are to conduct various monitoring activities over contract inspectors to assess the quality of inspections. We also reviewed REAC’s standard operating procedures for post-inspection reviews. As part of our assessment of the physical inspection process, we reviewed the statistical methodology used by REAC to determine the sample size for dwelling units and buildings. We reviewed REAC’s documentation describing the sample-size calculations for units and buildings and interviewed a REAC statistician to obtain information on the statistical approach and assumptions used in the sample size calculations. With this information, we were able to conduct our own calculations on the sample-size and compare our results to REAC’s. To report on the number of physical inspections conducted from fiscal years 2013 through 2017, as well as other data on inspections over this period, we accessed REAC’s Record and Process Inspection Data database. This database contains information related to physical inspections, such as the types and locations of properties inspected, dates of inspection, and inspection scores. To assess the reliability of the database, we first identified the various tables in the database that held the relevant data we needed for our analysis. We also identified the common identifier in each of these tables to construct records of inspections with the relevant data. We met with REAC’s staff to confirm that our selection of the tables and our construction of records was correct. 
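The record-construction step described above, in which inspection rows are combined with property attributes on a common identifier before descriptive statistics are computed, can be illustrated with a minimal sketch. All table structures, field names, and values below are hypothetical; REAC's actual database schema is not described in this report.

```python
# Hypothetical stand-ins for two tables in REAC's inspection database.
# The real table layouts and column names are not described in this report.
properties = {
    "P001": {"property_type": "multifamily", "state": "TX"},
    "P002": {"property_type": "public housing", "state": "OH"},
}
inspections = [
    {"property_id": "P001", "fiscal_year": 2016, "score": 92},
    {"property_id": "P001", "fiscal_year": 2017, "score": 55},
    {"property_id": "P002", "fiscal_year": 2017, "score": 78},
]

def build_inspection_records(inspections, properties):
    """Join each inspection row to its property attributes on the
    common identifier, dropping rows with no matching property."""
    return [
        {**row, **properties[row["property_id"]]}
        for row in inspections
        if row["property_id"] in properties
    ]

def count_by_year_and_type(records):
    """One example of the descriptive statistics described in the text:
    the number of inspections per fiscal year by property type."""
    counts = {}
    for record in records:
        key = (record["fiscal_year"], record["property_type"])
        counts[key] = counts.get(key, 0) + 1
    return counts
```

With the sample rows above, `count_by_year_and_type(build_inspection_records(inspections, properties))` yields one multifamily inspection in each of fiscal years 2016 and 2017 and one public housing inspection in 2017.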
We then performed our analysis and developed various descriptive statistics, such as the number of inspections per year by property type from fiscal years 2013 through 2017, the number of multifamily properties that failed their REAC inspection (scored below 60) for fiscal years 2013 through 2017, the percentage of multifamily property inspections that occurred on time given their inspection score, and various inspection score ranges by state. We compared our statistics on the number of inspections per year with comparable statistics developed by REAC. Where our statistics differed, we obtained explanations from REAC and revised our analysis where appropriate. Based on our overall assessment of the REAC data we used, we found them to be sufficiently reliable for analyzing the number and timing of inspections and trends in scoring.

To obtain the views of various stakeholders on the inspection process, we held discussion groups with contract inspectors and REAC's quality assurance inspectors and supervisors. Each discussion group had between 6 and 13 participants and was facilitated by a GAO staff member. We covered a number of topics in these discussion groups, including the inspection and quality assurance processes. We held one discussion group with contract inspectors, three with REAC quality assurance inspectors, and one with REAC quality assurance supervisors:

Contract inspectors. For the discussion group with the contract inspectors, we invited all of the contract inspectors who were attending a conference at REAC's headquarters in Washington, D.C. Thirteen contract inspectors attended the discussion group.

Quality assurance inspectors. For the discussion groups with the quality assurance inspectors, we reached out to all quality assurance staff and coordinated with REAC to arrange specific meeting times to maximize the number of participants. We held two separate discussion groups with experienced inspectors. The first of these groups had 11 participants, and the second group had 6. We also held a separate discussion group with 11 newly hired quality assurance inspectors.

Quality assurance supervisors. For the last discussion group, we reached out to all quality assurance supervisors and met with 6 of them.

We recorded all of the discussion groups to help transcribe the conversations. To analyze the discussion group transcripts, we identified phrases that represented key themes across the groups. One GAO analyst reviewed one of the transcripts to identify any additional phrases we should add to our analysis. Once we arrived at our final set of key themes, one GAO analyst reviewed all of the transcripts and matched responses in the transcripts to the key themes. A second GAO analyst then checked the work to determine whether they agreed with the coding of the first analyst. If there were any disagreements on the coding, the two analysts met to reach consensus on the appropriate coding.

Finally, to obtain the perspectives of property owners on REAC's inspection process, we met with four organizations representing multifamily or public housing property owners. These organizations were the Council for Large Public Housing Authorities, the National Affordable Housing Management Association, the National Leased Housing Association, and the Public Housing Authorities Directors Association. Also, to understand how private home inspection associations developed their inspection processes, we interviewed staff from the American Society of Home Inspectors and the International Association of Certified Home Inspectors.

To address the second and third objectives, we reviewed REAC's policies and procedures for selecting, training, developing, and monitoring contract and quality assurance inspectors.
We reviewed the contract inspector candidate assessment questionnaire and construction analyst job announcement, which describe the requirements to become a contract and quality assurance inspector, respectively. We also reviewed documents describing the online (Phase Ia), classroom (Phase Ib), and field (Phase II) training courses. We also reviewed an assessment that Deloitte, a management consultant firm, conducted of REAC’s training, quality assurance, and inspector oversight processes. We compared REAC’s training processes for inspectors with key attributes of effective training and development programs. In our discussion groups with contract and quality assurance inspectors, we also asked their views on REAC’s selection, training, and monitoring processes. In addition, we interviewed REAC management officials to discuss their processes for the selection, training, monitoring, and oversight of contract and quality assurance inspectors. We spoke with staff from the American Society of Home Inspectors and the International Association of Certified Home Inspectors to understand how their selection and training requirements for inspectors who are members of home inspection associations compared with REAC’s. To examine REAC’s processes for monitoring contract inspector performance, we reviewed REAC’s quality assurance standard operating procedures, REAC’s strategic plan, and various tools REAC has developed to assess how contract and quality assurance inspectors perform relative to their peers. We obtained data on collaborative quality assurance reviews for fiscal years 2013 through 2017 and data on quality control inspections for January 2017 through June 2018. 
We analyzed the data to determine, for example, how often contract inspectors were conducting inspections in accordance with REAC’s Uniform Physical Conditions Standards protocol and its quality assurance standard operating procedures, and how often REAC was meeting its goals for timeliness and frequency of reviews. We assessed the reliability of the data by interviewing knowledgeable officials and conducting manual testing on relevant data fields for obvious errors. Based on these steps, we found the data to be sufficiently reliable for the purposes of our analyses. To examine REAC’s processes for monitoring and overseeing quality assurance inspector performance, we reviewed the performance standards and performance elements REAC uses to evaluate quality assurance inspectors. We interviewed staff from REAC’s Quality Control department, which conducts inspection reviews on quality assurance inspectors. We compared REAC’s performance management processes to key practices we have identified for effective performance management. We also compared REAC’s policies for oversight and monitoring of quality assurance inspectors to criteria in Standards for Internal Control in the Federal Government. To address the fourth objective, we reviewed documentation related to monitoring and enforcement processes for HUD’s Office of Multifamily Housing (Multifamily Housing), Office of Public and Indian Housing (PIH), and the Departmental Enforcement Center. For example, we reviewed relevant protocols and guidance documents on PIH’s and Multifamily Housing’s processes to address physical risk, among other risks. We also reviewed the relevant legal authorities in the 2014 through 2018 Consolidated Appropriations Acts and federal regulations for these HUD program offices to take enforcement actions for properties with physical deficiencies. 
We interviewed officials from Multifamily Housing, PIH, and the Departmental Enforcement Center on their processes to monitor the physical condition of properties and take enforcement actions. We further selected two Multifamily Housing and four PIH field offices throughout the United States to understand actions they take to monitor properties and ensure that physical deficiencies are corrected. We developed a two-stage process to select field offices with a higher percentage of inspections with scores 70 and below. We first selected HUD regions based on our score criteria and then selected specific field offices within those regions using similar score criteria. Because we selected a nonprobability sample of field offices, the information we obtained cannot be generalized more broadly to all field offices. However, the information provides important context and insight into how the enforcement process for physical deficiencies works across the country. In addition, we obtained data on performance designations for public housing agencies within PIH, actions taken by Multifamily Housing for properties scoring below 60 on the REAC inspection, and actions taken by the Departmental Enforcement Center for Multifamily Housing properties. We assessed the reliability of the data by reviewing relevant HUD guidance and obtaining written responses from agency officials on how the data were collected, maintained, analyzed, and presented. Based on these steps, we found the data to be sufficiently reliable for the purposes of our analyses. Finally, we reviewed prior reports from GAO and from the HUD Office of Inspector General that discussed efforts to monitor the physical condition of properties, among other conditions. We conducted this performance audit from July 2017 to March 2019 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Number of Multifamily Housing Inspections and Percentage of Inspections in Selected Score Ranges, Fiscal Years 2013–2017

The Real Estate Assessment Center (REAC) conducted 44,486 inspections of Office of Multifamily Housing (Multifamily Housing) properties from fiscal years 2013 through 2017, according to our analysis of REAC's inspection data. Properties received a score from 0 to 100, with a score below 60 considered failing. Table 7 shows the percentage of inspections conducted in each state across three score ranges. States varied in the percentage of inspections that fell within the score range considered failing (0 to 59), from a low of 1 percent to a high of 10 percent. REAC inspects properties with lower scores more frequently than properties with higher scores. For example, properties that scored below 80 would have been inspected annually over this period, while properties that scored 90 or above would have been inspected every 3 years.

Appendix III: Number of Public Housing Inspections and Percentage and Number of Inspections in Selected Score Ranges, Fiscal Years 2013–2017

The Real Estate Assessment Center (REAC) conducted 15,156 inspections of public housing properties from fiscal years 2013 through 2017, according to our analysis of REAC's inspection data. Properties received a score from 0 to 100, with a score below 60 considered failing. Table 8 shows the percentage of inspections conducted in states or U.S. territories across three score ranges. States varied in the percentage of inspections that fell within the score range considered failing (scores 0 to 59), from a low of 1 percent to a high of 34 percent.
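The scoring and reinspection rules described in these appendixes (scores run from 0 to 100, a score below 60 is considered failing, properties scoring below 80 were inspected annually, and properties scoring 90 or above were inspected every 3 years) can be summarized in a short sketch. The function is our own illustration; the interval for scores of 80 to 89 is not stated in this report, so it is returned as None rather than guessed.

```python
def inspection_outcome(score):
    """Classify a REAC physical inspection score (0-100) using the
    rules described in appendixes II and III.

    Returns (passed, reinspection_interval_years). The interval for
    scores of 80-89 is not stated in this report, so None is returned
    for that range rather than a guess.
    """
    if not 0 <= score <= 100:
        raise ValueError("REAC scores range from 0 to 100")
    passed = score >= 60      # a score below 60 is considered failing
    if score < 80:
        interval = 1          # inspected annually
    elif score >= 90:
        interval = 3          # inspected every 3 years
    else:
        interval = None       # 80-89: interval not described here
    return passed, interval
```

For example, `inspection_outcome(55)` returns `(False, 1)`: a failing score that would have triggered annual reinspection over this period.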
REAC generally inspects properties with lower scores more frequently than properties with higher scores. According to our analysis, REAC conducted fewer than 100 inspections of public housing properties in 18 states or territories. Table 9 shows the number of inspections conducted within three score ranges for these 18 states or territories.

Appendix IV: Recommendations to the Real Estate Assessment Center from the Rapid Response and Resolution Team

The Rapid Response and Resolution Team was created by the Department of Housing and Urban Development (HUD) in May 2016 to address troubled multifamily properties by improving HUD's internal processes for assessing properties and analyzing risk so that properties do not become troubled; improving HUD's processes for inspecting properties so that troubled ones are identified earlier and more reliably and communicating the results to stakeholders; and improving HUD's processes for enforcing corrective actions and resolving troubled properties and working with owners so that HUD resources are used only on safe and healthy housing. The team consisted of staff from the Real Estate Assessment Center (REAC) and other units within HUD, including the Office of Multifamily Housing (Multifamily Housing). In January 2017, the team presented 31 recommendations, 8 of which were specific to REAC. As of December 2018, REAC had not yet implemented any of these recommendations. REAC had reached concurrence with Multifamily Housing on 3 of these recommendations and asked for Multifamily Housing's consideration of the funding and rulemaking requirements for the remaining 5 recommendations. The 8 recommendations that were specific to REAC are as follows:

1. Implement a risk-based exigent health and safety abatement verification policy.
2. Inspect properties that have a REAC physical inspection score of less than 60 after a 3-day notice.
3. Increase the scoring weights of units and reexamine point deduction caps.
4. Expand photo capability in the inspection process to level 1 and level 2 deficiencies and a panoramic photo of the property.
5. Inspect carbon monoxide detectors in the inspection process.
6. Develop health and safety abatement requirements, including focusing on water ponding and missing lead-based paint disclosure forms and inspection reports.
7. Take enforcement action to protect tenants before the 45-day appeal period is over for properties that score under 30 points and that have exigent health and safety deficiencies.
8. Require electronic exigent health and safety certifications and abatements within 24 hours of the inspection.

Appendix V: Comments from the Department of Housing and Urban Development

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Andy Pauline (Assistant Director), José R. Peña (Analyst in Charge), Carl Barden, Chloe Brown, Hannah Dodd, Juan Garcia, Jeff Harner, Emily Hutz, Jill Lacey, Jerry Sandau, Jessica Sandler, Jennifer Schwartz, and Jena Sinkfield made key contributions to this report.
Why GAO Did This Study

Over 2 million low- and moderate-income households live in HUD-assisted (subsidized) or -insured multifamily housing. HUD's REAC uses contractors to inspect the physical condition of these properties to determine that they are decent, safe, sanitary, and in good repair. The 2017 Consolidated Appropriations Act, Joint Explanatory Statement, included a provision for GAO to review REAC's policies and processes. This report discusses, among other things, (1) REAC's process for identifying physical deficiencies and (2) REAC's selection, training, and monitoring of contract inspectors and its own quality assurance inspectors. GAO reviewed HUD documents and data related to REAC's physical inspection process, use of contract and quality assurance inspectors, and enforcement processes. GAO also interviewed HUD officials and housing industry stakeholder groups and conducted discussion groups with contract and quality assurance inspectors.

What GAO Found

The Department of Housing and Urban Development's (HUD) Real Estate Assessment Center's (REAC) standardized process to identify physical deficiencies at HUD multifamily properties (including public housing) has some weaknesses. For example, REAC has not conducted a comprehensive review of its inspection process since 2001, even though new risks to its process have emerged, such as property owners misrepresenting the conditions of their properties. A comprehensive review could help REAC identify risks and ensure it is meeting the goal of producing inspections that are reliable, replicable, and reasonable. In addition, REAC does not track its progress toward meeting its inspection schedule for certain properties, which could hinder HUD's ability to take enforcement actions.
Finally, in the wake of concerns that inspections were not always identifying troubled properties, REAC and other HUD units, including the Office of Multifamily Housing, made eight recommendations in January 2017 to enhance the inspection process, but HUD had only approved three of these recommendations and had not implemented any of them as of December 2018. REAC uses contractors to inspect properties; these contract inspectors are trained and overseen by quality assurance inspectors hired directly by REAC. However, REAC's processes to select, train, and monitor both contract inspectors and quality assurance inspectors have weaknesses.

Selection. REAC does not verify the qualifications of contract inspector candidates before they are selected to begin training to become certified inspectors. Formal processes to verify qualifications may help REAC identify unqualified candidates before they begin training and avoid expending resources on training these candidates.

Training. REAC lacks formal mechanisms to assess the effectiveness of its training program for contract and quality assurance inspectors. In addition, unlike other professional inspection organizations, REAC does not have continuing education requirements. Formal mechanisms to assess the effectiveness of its training program could help REAC ensure that its program supports the development needs of inspectors. Further, requiring continuing education could help REAC ensure that inspectors are current on any changes in REAC's policies or industry standards.

Monitoring. REAC has not met management targets for the number and timeliness of its inspection oversight reviews of contract inspectors. For example, REAC has not met its target of conducting three quality assurance reviews of poor-performing contractors per quarter. As a result, if deficiencies are not identified and recorded by contract inspectors, they may not be addressed in a timely manner.
In addition, REAC's performance standards for its quality assurance inspectors have not been updated to reflect their broader job duties, such as conducting inspector oversight reviews and coaching and mentoring contract inspectors. Performance standards that are directly linked to these job duties would help ensure that inspectors are assessed on all of their key responsibilities.

What GAO Recommends

GAO makes 14 recommendations to HUD to improve REAC's physical inspection process and its selection, training, and monitoring of contract and quality assurance inspectors, among other things. HUD agreed with 11 recommendations, partially agreed with 2, and neither agreed nor disagreed with 1. GAO maintains that its recommendations should be fully addressed to improve the inspection process.
Background

Federal Requirements Related to Equal Employment Opportunity and Affirmative Action

Private companies are generally prohibited by federal law from discriminating in employment on the basis of race, color, religion, sex, national origin, age, and disability status. Additionally, federal contractors and subcontractors are generally required to take affirmative action to ensure that all applicants and employees are treated without regard to race, sex, color, religion, national origin, sexual orientation, and gender identity, and to employ or advance in employment qualified individuals with disabilities and qualified covered veterans. The U.S. Equal Employment Opportunity Commission (EEOC) enforces federal antidiscrimination laws, and the Office of Federal Contract Compliance Programs (OFCCP) enforces affirmative action and nondiscrimination requirements for federal contractors. EEOC and OFCCP share some enforcement activities and have established a memorandum of understanding (MOU) to minimize any duplication of effort.

U.S. Equal Employment Opportunity Commission

The EEOC enforces Title VII of the Civil Rights Act of 1964, as amended, which prohibits employment discrimination on the basis of race, color, religion, sex, or national origin. EEOC also is responsible for enforcing other federal laws that prohibit discrimination in employment based on age and disability, among other characteristics. EEOC investigates charges of employment discrimination from the public, litigates major cases, and conducts outreach to prevent discrimination by educating employers and workers. EEOC also pursues a limited number of cases each year designed to combat systemic discrimination, defined by the agency as patterns or practices where the alleged discrimination presented by a complainant has a broad impact on an industry, profession, company, or geographic location. EEOC can also initiate a systemic investigation under Title VII with the approval of an EEOC commissioner provided the commissioner finds there is a reasonable basis for the investigation.
In fiscal year 2018, EEOC resolved about 90,558 charges of discrimination, secured more than $505 million for victims of discrimination, and filed 199 lawsuits. Office of Federal Contract Compliance Programs The OFCCP within DOL is responsible for ensuring that about 200,000 federal contractor establishments comply with federal nondiscrimination and affirmative action requirements. Under Executive Order 11246 and other federal laws and regulations, covered federal contractors and subcontractors are prohibited from discriminating in employment on the basis of race, color, religion, sex, sexual orientation, gender identity, or national origin and are required to take affirmative action to help ensure that all applicants and employees are treated without regard to these factors. OFCCP also enforces Section 503 and the affirmative action provisions of VEVRAA, which require covered contractors to take affirmative action to employ and advance in employment qualified individuals with disabilities and covered veterans, respectively. OFCCP uses two approaches to ensure compliance with federal equal employment and affirmative action requirements—enforcement and compliance assistance. OFCCP’s enforcement program primarily involves conducting evaluations of contractors’ compliance with federal requirements, and these evaluations represent the preponderance of agency activity. In fiscal year 2015, OFCCP compliance officers conducted 2,345 compliance evaluations, which represented about 2 percent of federal contractor establishments in its jurisdiction. OFCCP has since significantly decreased the number of compliance evaluations it conducts. In fiscal year 2018, OFCCP completed 812 compliance evaluations, which is 65 percent fewer than in fiscal year 2015. Since fiscal year 2016, OFCCP has adopted a strategy of conducting fewer compliance evaluations and prioritizing larger systemic cases. 
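The cited decline follows from simple arithmetic on the figures above; the short check below is illustrative only and is not part of OFCCP's or GAO's methodology:

```python
# Illustrative check of the reported decline in OFCCP compliance evaluations,
# using only the figures cited in this statement.
evals_fy2015 = 2345  # compliance evaluations conducted in fiscal year 2015
evals_fy2018 = 812   # compliance evaluations completed in fiscal year 2018

pct_decline = (evals_fy2015 - evals_fy2018) / evals_fy2015 * 100
print(f"Decline from fiscal year 2015 to 2018: {pct_decline:.0f} percent")
```

Rounded to the nearest whole percent, this matches the 65 percent figure cited above.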
Since OFCCP can only evaluate a small fraction of federal contractors each year, the agency also carries out compliance assistance efforts, including issuing guidance, conducting outreach concerning nondiscrimination requirements, and providing compliance assistance to contractors. OFCCP’s regulations generally require that covered contractors prepare and maintain an affirmative action program (AAP). Contractors must also comply with certain recordkeeping requirements; for example, under Executive Order 11246, covered contractors are required to maintain records pertaining to hiring, promotion, layoff or termination, rates of pay, and applications, among other records. Under OFCCP’s Executive Order 11246 regulations, an AAP is a management tool that is designed to ensure equal employment opportunity, with an underlying premise that the gender, racial, and ethnic makeup of a contractor’s workforce should be representative of the labor pools from which the contractor recruits and selects. An AAP must also include practical steps to address underrepresentation of women and minorities, such as goals for expanding employment opportunities to these groups in instances in which they are underrepresented. Companies must create an AAP for each business establishment—generally, a physical facility or unit of the federal contractor that produces goods or services, such as a factory, office, or store. Religious Freedom Restoration Act of 1993 (RFRA) Each year the federal government provides billions of dollars to organizations that provide social services to needy families and individuals. Some of these funds are provided through competitive grants to faith-based organizations (FBOs), which may include religious groups, like churches, mosques, synagogues, and temples, or charitable organizations affiliated with religious groups. In some instances, FBOs believe it is necessary to hire only individuals who share their religious beliefs in order to carry out their mission. 
Title VII of the Civil Rights Act of 1964 generally prohibits employment discrimination based on religion. However, section 702(a) of the Act exempts FBOs with respect to basing employment decisions on religion, thereby permitting FBOs to intentionally, and exclusively, hire individuals who share their religious beliefs. In light of this exemption, FBOs that receive federal grant funding or that contract with the federal government have also generally been permitted to make employment decisions based on religion. OFCCP is responsible for ensuring that federal contractors comply with federal nondiscrimination requirements and provides compliance assistance to the entities it oversees, including guidance related to this exemption. There are, however, certain federal grant programs that are subject to statutory restrictions that prohibit recipients from using grant funding, in whole or in part, to discriminate or deny employment on the basis of religion, among other factors. In June 2007, the Department of Justice’s Office of Legal Counsel issued an opinion in a particular case stating that the Religious Freedom Restoration Act of 1993 (RFRA) could be reasonably construed to require an agency to exempt FBOs from statutory requirements that restrict federal grantees from hiring on the basis of religion. Pursuant to that opinion, and the RFRA, certain federal agencies have permitted FBOs that receive funding under a program that is subject to a statutory restriction on religious-based hiring to certify that they are exempt from such restrictions, allowing these FBOs to engage in religious-based hiring, provided that they do not discriminate on other bases. OFCCP and EEOC Could Improve the Effectiveness of their Processes to Ensure Employers Meet Equal Employment Opportunity Requirements OFCCP and EEOC face challenges in conducting oversight efforts to ensure that employers meet applicable federal equal employment opportunity requirements. 
For example, in our September 2016 report, we found several shortcomings that limited OFCCP’s oversight efforts, including weaknesses in OFCCP’s compliance evaluation selection process, its reliance on voluntary compliance, and the lack of staff training. Also, in our November 2017 report, we found that OFCCP’s planned methodology for identifying equal employment disparities by industry, such as the technology sector, might not accurately identify industries at greatest risk of potential noncompliance with affirmative action and nondiscrimination requirements. Additionally, we reported that while EEOC had identified barriers to recruitment and hiring in the technology sector as a strategic priority, it had not consistently captured information identifying specific industries when conducting investigations. EEOC’s inability to capture this information using standard industry codes impeded its ability to conduct related analysis that could be used to more effectively focus its limited enforcement resources and outreach activities. Weaknesses in the Compliance Evaluation Process Limited OFCCP’s Ability to Ensure Federal Contractors’ Nondiscrimination Compliance In September 2016, we reported that, since 2010, about 22 percent of OFCCP’s compliance evaluations of supply and service contractors had found violations of some type and about 2 percent had discrimination findings (see figure 1). When OFCCP found violations during compliance evaluations, it often resolved those violations with conciliation agreements that outlined remedial action that contractors agreed to take. As a result of our work, we made six recommendations (see table 1). 
The agency has taken action to fully implement three of our recommendations: (1) to address the risk of geographic imbalances in compliance evaluation assignments; (2) to review outreach and compliance assistance efforts and identify options for improving information provided to federal contractors; and (3) to assess existing contractor guidance for clarity. However, the agency has not taken action to fully implement our other three recommendations, which focus on improving enforcement and compliance. With regard to the recommendations that have not been fully implemented, OFCCP has taken action to date as described below. Focus compliance evaluations on greatest violation risk. We found that the process OFCCP used to select contractors for compliance evaluations could not ensure that contractors with the highest risk of noncompliance were being selected. OFCCP’s selection process was nonrandom and did not produce a generalizable sample of contractors for evaluation. As a result, OFCCP was unable to draw conclusions about noncompliance risk in the overall federal contractor population. While the selection process included consideration of a number of neutrally applied factors, such as alphabetical order, employee count at the establishment, contract value, or contract expiration date, OFCCP was not able to identify which of these factors, if any, were associated with risk of noncompliance. Thus, OFCCP was unable to quantify the extent to which federal contractors in its jurisdiction are noncompliant, and it did not have reasonable assurance that it was focusing its efforts on those contractors at greatest risk of not following equal employment opportunity or affirmative action requirements. 
Because OFCCP conducts evaluations of only about 2 percent of federal contractor establishments in its jurisdiction, without an effective risk-based contractor selection process it may be missing opportunities to evaluate whether there is a significant segment of contractors who may be more likely to violate nondiscrimination and affirmative action requirements, leaving workers potentially vulnerable. OFCCP has taken steps to improve its contractor selection process, but has not fully implemented either this 2016 recommendation or a related recommendation we made in 2017 that it assess the quality of its proposed methods to incorporate consideration of disparities by industry before selecting contractors for compliance evaluation. Beginning in fiscal year 2020, contractors will be able to apply to the Voluntary Enterprise-wide Review Program (VERP), which aims to remove top-performing contractor participants from the pool of contractors scheduled for compliance evaluations. OFCCP also recently implemented a new scheduling list (the list of contractor establishments selected for evaluation) methodology based on research on closed cases from the previous 5 years (2014-2018). Thirty-three percent of the new scheduling list consisted entirely of contractor establishments from the three industries with the highest rates of violation based on this sample of closed cases. However, the scheduling lists of the previous 5 years included nonrandom selections of contractor establishments based on a number of neutrally applied factors. If OFCCP’s goal is to prioritize contractors at highest risk of noncompliance, this new scheduling methodology may not achieve it, because contractors selected will be weighted toward prior neutrally applied selection factors, such as employee count, in addition to violation risk. 
Further, while VERP may remove some compliant contractors from the scheduling list pool, absent overwhelming volunteer participation it will do little to help identify those most likely to violate. Consequently, it remains unclear whether contractors with the highest risk of not following equal employment opportunity and affirmative action requirements will be selected for compliance reviews. Monitor affirmative action programs. In 2016, we found that OFCCP relied significantly on voluntary compliance by federal contractors, an approach that could not ensure that contractors were complying with basic requirements like developing and maintaining an AAP. Covered contractors that sign a qualifying federal contract are required to develop an AAP within 120 days of contract commencement and update it annually. However, OFCCP had no process for ensuring that the tens of thousands of establishments that had signed a qualifying federal contract did so. OFCCP has taken steps toward implementing a mechanism to monitor AAPs but has not fully implemented this recommendation. In 2018, OFCCP contracted with an information technology vendor to develop a web-based portal that would allow contractors to upload their AAPs electronically, both for convenience and increased compliance and to support OFCCP review and resource prioritization. Officials anticipate delivery of the portal by the close of fiscal year 2019. Simultaneously, according to officials, OFCCP has developed the necessary information collection request to obtain approval from OMB to collect all contractors’ AAPs annually. The agency anticipates that OMB approval will come in time to align with completion of the AAP portal. Facilitate timely compliance officer training. In 2016, we found that OFCCP may not be providing timely training for new compliance officers. According to OFCCP officials, budget constraints had made it difficult to hold timely centralized training for new compliance officers. 
In half of the regions we visited, compliance officers or management officials we spoke with noted that this training was not provided in a timely manner after new officers were hired. For example, one compliance officer told us they worked for 8 months before receiving formal training. In one district office, compliance officers we spoke with explained that the lack of uniform, timely training made compliance officers feel unprepared when they began their jobs. Further, without providing timely training to new compliance officers, OFCCP cannot ensure consistency in its enforcement efforts across its offices. OFCCP has taken steps to improve its training program, but has not fully implemented this recommendation. In 2018, OFCCP retained an expert consultant to assess its national training program and standardize its training development and evaluation process. The assessment was completed in 2019, and a plan of action was created to address any program gaps, according to agency officials. Officials reported that the plan of action was fully implemented in fiscal year 2019 and that OFCCP obtained a 5-year International Association for Continuing Education and Training (IACET) accreditation for its program. OFCCP officials told us they are developing a learning management system that will allow new compliance officers easy access to training soon after they are hired. OFCCP plans for the system to include the development of course requirements by level of competence—basic, intermediate, and advanced. OFCCP officials told us they plan to roll out the new system in January 2020. 
Weaknesses in Oversight Efforts Impact EEOC’s and OFCCP’s Effectiveness in Ensuring Nondiscrimination and Equal Employment Opportunity in the Technology Sector In November 2017, we reported that the estimated percentage of minority technology workers had increased from 2005 to 2015; however, while we found statistically significant increases in the numbers of Asian and Hispanic workers, no growth had occurred for either female or Black workers (see figure 2). Further, female, Black, and Hispanic workers remain a smaller proportion of the technology workforce—mathematics, computing, and engineering occupations—compared to their representation in the general workforce. These groups have also been less represented among technology workers inside the technology sector—those companies that have the highest concentration of technology workers, in such industries as computer systems design and software publishing—than outside the technology sector, such as retail or finance companies. In contrast, Asian workers were more represented in these occupations than in the general workforce. As a result of our work, we made one recommendation to EEOC and five recommendations to OFCCP (see table 2). EEOC has taken some action but has not fully implemented our recommendation on identifying missing standard industry classification data from its handling of charges. By providing guidance to contractors regarding the option to include more specific goals in their AAPs, OFCCP has taken actions to implement one of the five recommendations—to take steps toward requiring contractors to disaggregate demographic data for the purpose of setting placement goals in the AAP. The agency has not taken action to fully implement the other four recommendations, which focus on improving oversight, as shown in table 2 and discussed below. With regard to the recommendations that have not been fully implemented, EEOC and OFCCP have taken action to date as described below. 
Capture standard industry classifications on charges. In our November 2017 report, we found that EEOC could not analyze charge data by industry to help identify investigation and outreach priorities. This was inconsistent with EEOC strategic planning documents and EEOC Inspector General reports, which had emphasized the importance of analyzing charge data by industry. EEOC’s inability to analyze charge data by industry limits its ability to identify trends by industry sector and conduct sector-related analyses that could be used to more effectively focus its limited enforcement resources and outreach activities. EEOC has taken some action toward addressing missing industry code data, but has not taken actions sufficient to fully implement this recommendation. As part of an effort to overhaul its data system, EEOC has begun developing an Employer Master List that will provide a source of employer information, including industry codes, but EEOC told us that it has not yet completed this effort. It anticipates this system will be more fully developed by spring 2020. Use data on closed evaluations to address delays. In our November 2017 report, we found that OFCCP did not analyze data on closed evaluations to understand the root causes of delays in its compliance review process that may be straining its resources and inhibiting OFCCP’s efforts to identify potential discrimination. Such analysis could help OFCCP determine whether changes are needed in its own internal policies and processes, as well as guide OFCCP’s selection of improved methods for obtaining complete, accurate, and timely documentation from federal contractors. OFCCP has taken actions, but they do not fully address this recommendation. 
In June 2019, OFCCP officials reported that the agency’s procedures outlined in the Active Case Enforcement Directive (DIR 2011-01) caused delays in case closures, but they did not indicate that this conclusion resulted from the recommended analysis of internal process data from closed evaluations. OFCCP officials reported that the agency’s aged case rate—the percentage of cases that have been open for more than 730 days and have not been referred for further enforcement—has dropped from 27.7 percent in fiscal year 2017 to 20.9 percent in fiscal year 2019, though they did not report any corresponding change in case outcomes. In September 2019, OFCCP officials told us they continue to study the causes of delays and how to address them with effective policies that make the agency more efficient. Assess the methods used to consider industry disparities in compliance. In our November 2017 report, we found that OFCCP’s current methodology for identifying disparities by industry—using data from the American Community Survey—may not have accurately identified industries at greatest risk of potential noncompliance with nondiscrimination and affirmative action requirements. In its agency response to our November 2017 report, OFCCP officials reported that the agency was exploring the use of U.S. Census Bureau and administrative data to refine its selection process to focus on industries with a greater likelihood of noncompliance. OFCCP has taken some action, but has not fully implemented this recommendation. In January 2019, DOL officials reported that DOL had revised its scheduling methodology to include industries with the highest rates of violations. OFCCP published the scheduling list in March 2019, and its field offices started scheduling cases in May 2019. OFCCP stated it will continue to monitor results from this revised scheduling methodology to determine its effectiveness. It will be important for OFCCP to refine these methods based on its experiences with them. 
This new process is a step toward focusing efforts on industries at greater risk of potential noncompliance with nondiscrimination or affirmative action requirements. Evaluate establishment-based approach to compliance evaluations. In our November 2017 report, we found that OFCCP had made no changes to its establishment-based approach since the agency was founded in 1965. However, OFCCP officials acknowledged that the changing nature of a company’s work can involve multiple locations and corresponding changes in the scope of hiring and recruitment. OFCCP has taken some action, but has not fully addressed this recommendation. In fiscal year 2019, OFCCP evaluated its current approach for identifying subcontractors for review. OFCCP stated that the current approach does not reliably include subcontractors in the pool from which contractors are scheduled because there is no government or public database that captures the complete universe of subcontractors and other important data. In June 2019, OFCCP submitted revisions to its process to the Office of Management and Budget (OMB) for approval. Evaluate the Functional Affirmative Action Program. In November 2017, we found that OFCCP had not evaluated its Functional Affirmative Action Program (FAAP)—an alternative affirmative action program organized around a business function or unit that may exist at multiple establishments of a multi-establishment contractor. OFCCP offered the FAAP so that companies could move away from establishment-based reviews; this alternative may be more appropriate for some multi-establishment contractors. However, few contractors participate in this program, and the agency has not conducted an evaluation of it. OFCCP has taken some action, but has not fully implemented this recommendation. OFCCP has taken steps to encourage contractors to use the FAAP program without fully evaluating it as an alternative to the establishment-based program. 
Evaluating the FAAP could help OFCCP improve its ability to achieve its objectives and may provide broader insight for OFCCP’s overall enforcement approach. Few Faith-based Grantees Certified They Were Exempt from Statutory Restrictions on Religious-based Hiring In our October 2017 report, we found that from fiscal years 2007 through 2015, 9 of the 117 potential FBOs we identified across HHS, DOJ, and DOL certified that, based on RFRA, they were exempt from nondiscrimination laws related to religious-based hiring (see fig. 3). As a result, the nine FBOs were allowed to consider a prospective employee’s religious faith when making employment decisions. All nine of the FBOs were awarded funding by DOJ, primarily through the agency’s Justice Programs, and collectively received approximately $3.2 million, which is less than 1 percent of the $804 million in grants subject to statutory restrictions that DOJ awarded from fiscal years 2007 to 2015. HHS, DOJ, and DOL awarded funding to at least 2,586 grantees through 53 grant programs that were subject to statutory restrictions on religious-based hiring. The number of relevant grant programs could be higher because GAO could not identify all such programs due to data limitations. We interviewed six of the nine faith-based grantees that certified that they were exempt from religious-based hiring restrictions. Each of the six grantees emphasized the importance of hiring someone of the same religious faith to assist with grant activities. For example, the grantees said that hiring someone with the same religious faith was critical to their mission and organizational success, and that if the RFRA exemption were not available, they may not have sought the grant. We also interviewed grantees from five of the 35 potential FBOs that did not certify that they were exempt from statutory restrictions on religious-based hiring to see if they were aware of the potential for an exemption. 
The five grantees said that they did not recall seeing information about the exemption option in the grant application or grant award documentation. They said that they also may not have been looking for the information because they were not considering religion in their hiring decisions. HHS, DOJ, and DOL used various methods for informing grant applicants and recipients of the statutory restrictions on religious-based hiring and their processes for obtaining an exemption from such restrictions. Specifically: DOJ had made this information available on agency web pages as well as in the documentation that is provided to grant recipients. DOL had a web page dedicated specifically to explaining statutory restrictions on religious-based hiring to faith-based grant applicants and recipients, which also covered the process for seeking exemptions from the restrictions. In addition to providing information in grant announcements, HHS provided all Substance Abuse and Mental Health Services grant applicants seeking funds for substance abuse prevention and treatment services with a form that cites laws and regulations governing religious organizations that receive grant funding, including the regulation that outlines the exemption process. As we reported in 2016, DOJ, DOL, and HHS all required grantees that sought to make employment decisions based on religion to self-certify that they met requirements to be eligible for an exemption from statutory restrictions on religious-based hiring, but the agencies varied in how they reviewed and approved requests for approval. All three agencies required that faith-based grantees complete a form or some written request to demonstrate their eligibility for the exemption, but DOL was the only agency that reviewed and approved the requests. For example, DOL required that faith-based grantees submit their requests for the exemption for review and approval by the Assistant Secretary responsible for issuing or administering the grant. 
Conversely, while DOJ and HHS required that faith-based grantees submit a form or written request, respectively, neither reviewed nor approved the requests. On August 15, 2019, OFCCP proposed regulations intended to clarify the scope and application of the religious exemption to help religious employers with federal contracts and subcontracts and federally assisted construction contracts and subcontracts better understand their obligations. OFCCP proposes to add definitions of the following terms: exercise of religion; particular religion; religion; religious corporation, association, educational institution, or society; and sincere. In addition, the proposed rule states that the religious exemption should be construed to provide the broadest protection of religious exercise permitted under the Constitution and related laws, consistent with the administration policy to protect religious freedom. The stated intent of the proposed rule is to make clear that religious employers who contract with the federal government can condition employment on acceptance of or adherence to religious tenets, provided that they do not discriminate on other bases. Chairwoman Bonamici, Senior Republican Comer, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Cindy Brown Barnes, Director, Education, Workforce and Income Security Team at (202) 512-7215 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
GAO staff who made key contributions to this testimony are Blake Ainsworth, Amber Yancey-Carroll, Melinda Bowman, Sheranda Campbell, Sarah Cornetto, Mary Crenshaw, Helen Desaulniers, Holly Dye, Michael Erb, Monika Gomez, LaToya King, Joel Marus, Diana Maurer, Heidi Neilson, James Rebbe, Katrina Taylor, Rosemary Torres Lerma, Kathleen van Gelder, and Betty Ward Zukerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Several federal laws, executive orders, and regulations seek to promote equal employment opportunity by prohibiting employers from discriminating in employment on the basis of race and gender, among other things, and generally require companies contracting with the federal government to comply with affirmative action and other equal employment opportunity provisions. The EEOC and OFCCP are the primary federal agencies that enforce these requirements. Although federal law also generally prohibits employment discrimination based on religion, faith-based organizations may hire based on religion. Some federal grant programs contain statutory restrictions prohibiting this practice; however, since a 2007 DOJ legal opinion, federal agencies have allowed faith-based grantees to use RFRA as a basis for seeking an exemption to allow religious-based hiring. GAO has issued three reports since September 2016 that address equal employment opportunity (GAO-16-750, GAO-18-69, and GAO-18-164). This testimony is based on these three reports and discusses (1) OFCCP and EEOC's progress in addressing prior GAO recommendations and (2) equal employment opportunity exemptions for faith-based organizations. To update the status of prior recommendations, GAO reviewed agency guidance and documentation and interviewed agency officials. What GAO Found The Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) and the Equal Employment Opportunity Commission (EEOC) face challenges in overseeing compliance by employers and federal contractors with applicable federal equal employment opportunity requirements. In its 2016 report, GAO made six recommendations to OFCCP and in its 2017 report made five additional recommendations to OFCCP and one to EEOC to strengthen program oversight. OFCCP has implemented four recommendations, but seven require additional agency action to be fully implemented, as does the one to EEOC. 
For example: In 2016, GAO found that OFCCP's oversight was limited by reliance on contractors' voluntary compliance with affirmative action plan requirements. OFCCP has taken steps to develop a new web portal for collecting those plans annually, but has not yet obtained Office of Management and Budget approval for the collection or launched the portal. GAO also found OFCCP's oversight was limited by a lack of timely staff training. OFCCP has taken steps to implement a new training curriculum, but has not yet implemented its new learning management system that will help ensure timely and regular training. In 2017, GAO found that EEOC had not consistently captured information on industry codes, which limits EEOC's ability to identify trends by industry sector and conduct sector-related analyses. EEOC has not yet completed development of its Employer Master List that will include industry codes. GAO also found that OFCCP's methodology for identifying equal employment disparities by industry might not accurately identify industries at greatest risk of noncompliance with affirmative action and nondiscrimination requirements. OFCCP has taken steps to develop a new methodology, but needs to further refine it to ensure that it will identify industries at greatest risk. From fiscal years 2007 through 2015, few faith-based grantees sought an exemption from nondiscrimination laws related to religious-based hiring under the Religious Freedom Restoration Act of 1993. In October 2017, GAO found that the Departments of Justice (DOJ), Health and Human Services (HHS), and Labor (DOL) had awarded funding to at least 2,586 grantees through at least 53 grant programs that restricted grantees from making employment decisions based on religion. The number of relevant grant programs could be higher because GAO could not identify all such programs due to data limitations. Across the three agencies, GAO identified 117 grantees that were potentially faith-based organizations (FBO). 
Of the 117 potential FBOs, nine DOJ grantees were FBOs certified as being exempt from statutory restrictions on religious-based hiring. All three agencies required grantees seeking an exemption to self-certify that they were eligible for the exemption, but the agencies' processes for reviewing and approving exemption requests varied. In August 2019, OFCCP issued a proposed rule to clarify the scope and application of the religious exemption to help organizations with federal contracts and subcontracts and federally assisted construction contracts and subcontracts better understand their obligations.
Background The Community Services Block Grant (CSBG) program is intended to focus on three overall (national) goals: reducing poverty, empowering low-income families and individuals to become self-sufficient, and revitalizing low-income communities. The program is administered by the Office of Community Services (OCS) within the Administration for Children and Families (ACF) at the Department of Health and Human Services (HHS). CSBG was an outgrowth of the War on Poverty of the 1960s and 1970s, which established the Community Action program under which the nationwide network of local community action agencies was developed. The federal government had direct oversight of local agencies until 1981, when the CSBG program was established and states were designated as the grant recipients. OCS and states now share responsibility for oversight of CSBG grantees. In fiscal year 2019, states received approximately $700 million of the total $725 million CSBG appropriation. Appendix II provides the funding amounts for each state. OCS distributes CSBG funding to states, which, in turn, distribute funds to over 1,000 local agencies. Most of these local agencies receive funding from a variety of federal, state, and private sources. In fiscal year 2017, the latest year for which data were available, local agencies received about $9 billion from all federal sources, including about $700 million from CSBG. Other federal programs providing funding include Head Start, the Low Income Home Energy Assistance Program (LIHEAP), the Community Development Block Grant (CDBG), the Child Care and Development Block Grant, Temporary Assistance for Needy Families, and the Social Services Block Grant (see fig. 1). Programs administered by ACF contributed about $6.6 billion of the funds provided to local agencies. CSBG funding can be used broadly, allowing state and local agencies flexibility to provide services tailored to organizational and community needs. CSBG funds can be used by local agencies to provide services to participants in their programs and fill gaps in the funding provided by other means.
For example, local agencies may use CSBG funds to support a position for a staff member who determines the service needs of potential participants and connects them with the appropriate services—a position that would not be an allowable expense under the funding rules of other federal programs, according to a local agency official we interviewed. Local agencies have also used CSBG funding to leverage other public and private resources to support a variety of initiatives, such as Head Start programs, low-income energy assistance programs, and low-income housing. Federal Role OCS monitors all states receiving grant funds to ensure that they are meeting the standards for federal grant programs set by the Office of Management and Budget (OMB) and the specific expenditure requirements for the program. The CSBG Act requires that states submit plans to OCS describing how they intend to use the funds to address the needs of the local community and annual reports detailing the actual use of funds, including information on state performance results and populations served. OCS is required by the CSBG Act to conduct compliance evaluations of several states each fiscal year to review the states' use of CSBG funds, report to states on the results of these evaluations, and make recommendations for improvements. However, the CSBG Act does not specify the number of states to be evaluated each year or the timeframe within which each state must undergo such evaluations. Following a compliance evaluation, states are required to submit a plan of action in response to any OCS recommendations. In addition to conducting compliance evaluations to assess states' use of CSBG funds, OCS is required to submit an annual report to Congress. This annual report must include a summary of how states and local agencies planned to use CSBG funds, how funds were actually spent, data on the number and demographics of those served by local agencies, and other information.
The CSBG Act requires OCS to provide training and technical assistance to states and to assist them in carrying out corrective action activities and monitoring. OCS must reserve 1.5 percent of annual appropriations (in fiscal year 2019, this reserve totaled about $11 million of the total appropriation) for many activities, including training and technical assistance; planning, evaluation, and performance management; assisting states with carrying out corrective action activities; and oversight, including reporting and data collection activities. The CSBG Act also requires that states complete several steps before terminating an underperforming entity. The state agency is required, among other things, to provide training and technical assistance, if appropriate, to help the agency correct identified deficiencies, review the local agency's quality improvement plan, and provide an opportunity for a hearing. The entity can request a federal review of the state's decision to reduce or terminate funding, which must be completed within 90 days of OCS's receipt of the request. During this period, the state is required to continue funding the entity until OCS responds to the request. State and Local Agencies' Roles The CSBG Act requires each state to designate a lead state agency to administer CSBG funds and provide oversight of local agencies that receive funds. States are required to award at least 90 percent of their federal block grant allotments to eligible local agencies, and to determine how CSBG funds are distributed among local agencies. States may use up to $55,000 or 5 percent of their CSBG allotment, whichever is higher, for administrative costs. States may use remaining funds for the provision of training and technical assistance, and other activities. In addition, states and local agencies that expend $750,000 or more in total federal awards are required to undergo an audit annually and submit a report to the Federal Audit Clearinghouse.
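The allocation rules described above lend themselves to a short worked example. The sketch below applies the pass-through minimum (at least 90 percent to local agencies) and the administrative cap ($55,000 or 5 percent, whichever is higher) to a hypothetical state allotment; the dollar figures are illustrative only and are not drawn from any state's actual award.

```python
def csbg_state_allocation(allotment: float) -> dict:
    """Illustrative split of a state's CSBG allotment under the rules
    described in this section. All figures are hypothetical examples,
    not program data."""
    # At least 90 percent must pass through to eligible local agencies.
    local_minimum = 0.90 * allotment
    # Administrative costs are capped at $55,000 or 5 percent of the
    # allotment, whichever is higher.
    admin_cap = max(55_000, 0.05 * allotment)
    # The remainder may fund training, technical assistance, and other
    # activities (subject to the limits described elsewhere in this section).
    discretionary = allotment - local_minimum
    return {
        "local_minimum": local_minimum,
        "admin_cap": admin_cap,
        "discretionary": discretionary,
    }

# For a hypothetical $10 million allotment, the pass-through minimum is
# $9 million and the 5 percent cap ($500,000) exceeds the $55,000 floor;
# for a small $500,000 allotment, the $55,000 floor governs instead.
shares = csbg_state_allocation(10_000_000)
```

Note that the $55,000 floor matters only for small allotments: once an allotment exceeds $1.1 million, 5 percent is the binding administrative cap.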
The CSBG Act requires states to determine if local agencies meet the performance goals, administrative standards, and financial management requirements for the CSBG program. For each local agency, the CSBG Act requires the state to conduct: a full onsite review at least once during each 3-year period; an onsite review of each new local agency following the completion of the first year receiving CSBG funds; follow-up reviews including prompt return visits to local agencies that fail to meet goals, standards, and requirements established by the state; and other reviews as appropriate, including reviews of local agencies found to have had other grants terminated for cause. To receive CSBG funding, states must submit an application and state plan at least biennially describing, among other things, how they will use CSBG funds to help families and individuals achieve self-sufficiency, find and retain meaningful employment, and obtain adequate housing. Within their state plan, states must attest that (1) funds will be used to address the needs of youth in low-income communities; (2) funds will be used to coordinate with related programs; and (3) local agencies will provide emergency food-related services. States must also complete annual reports that include fiscal, demographic, and performance data. In their state plans, states must provide an assurance that all local agencies will submit a community action plan that includes a community needs assessment for the community served. In addition, local agencies must administer the CSBG program through a three-part board, consisting of one-third elected public officials and at least one-third representatives of the low-income community, with the balance drawn from officials or members of the private sector, labor, religious, law enforcement, education, or other groups in the community served.
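The tripartite board rule in the paragraph above reduces to two arithmetic checks. The following is a hypothetical illustration of that rule, not an official compliance tool; the function name and the member counts in the usage example are invented for this sketch.

```python
def board_meets_csbg_requirements(public_officials: int,
                                  low_income_reps: int,
                                  other_members: int) -> bool:
    """Illustrative check of the three-part board composition described
    above: exactly one-third elected public officials, at least one-third
    representatives of the low-income community, with the balance drawn
    from other community groups. Hypothetical helper only."""
    total = public_officials + low_income_reps + other_members
    if total == 0:
        return False
    return (public_officials * 3 == total      # exactly one-third
            and low_income_reps * 3 >= total)  # at least one-third

# A hypothetical 15-member board with 5 elected officials, 5 low-income
# representatives, and 5 other community members satisfies the rule;
# one with only 2 elected officials out of 12 members does not.
```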
Performance Measurement The Government Performance and Results Act of 1993 (GPRA), as enhanced by the GPRA Modernization Act of 2010 (GPRAMA), focuses federal agencies on performance by, among other things, requiring agencies (including HHS) to develop outcome-oriented goals and a balanced set of performance indicators, including output and outcome indicators as appropriate, to assist agencies in measuring or assessing their progress toward goals. OMB provides guidance to federal executive branch agencies on how to prepare their strategic plans in accordance with GPRA requirements. We have reported that strategic planning requirements established under GPRA and GPRAMA can also serve as leading practices for strategic planning at lower levels within federal agencies. Federal standards for internal control help to ensure efficient and effective operations, reliable financial reporting, and compliance with federal laws. Internal controls help government program managers achieve desired results through effective stewardship of public resources. Such interrelated controls comprise the plans, methods, and procedures used to meet missions, goals, and objectives. Internal controls support performance-based management and should provide reasonable assurance that an organization achieves its objectives of (1) effective and efficient operations, (2) reliable reporting, and (3) compliance with applicable laws and regulations. With regard to performance measurement for state and local agencies, the CSBG Act requires OCS, in collaboration with states and local agencies, to facilitate the development of one or more model performance measurement systems which may be used by states and local agencies to measure their performance in fulfilling CSBG requirements.
Each state receiving CSBG funds is required to participate in, and ensure that all local agencies in the state participate in, either a performance measurement system whose development was facilitated by OCS or an alternative system approved by OCS. OCS developed the Results Oriented Management and Accountability (ROMA) performance management approach that states and local agencies follow when overseeing programs and measuring their performance in achieving their CSBG goals. In 2012, OCS began four initiatives to update how it oversees the performance of the CSBG program, and as of April 30, 2019, OCS had implemented all four of the initiatives, which include: an updated ROMA process for program management, 58 organizational management standards for local agencies, new federal and state accountability measures, and an updated annual report format in which oversight and performance information from states is collected in an automated online data system. In addition, OCS developed the CSBG Theory of Change, which illustrates how the core principles of the CSBG program, the performance management framework, and services and strategies offered with CSBG funds relate. The three national goals established under the CSBG Theory of Change are similar to the three national goals identified in the CSBG Act, but are not identical. The three goals under the CSBG Theory of Change are: (1) individuals and families are stable and achieve economic security, (2) communities where low-income people live are healthy and offer economic opportunities, and (3) people with low incomes are active in their community.
OCS and Selected States Conducted Onsite and Routine Oversight Activities and Provided Training and Technical Assistance to CSBG Grant Recipients OCS and states are responsible for conducting oversight activities to ensure that CSBG recipients use the funds in accordance with the CSBG Act, which includes ensuring that the funds are used in line with the grant's three national goals related to addressing the causes and conditions of poverty. Our review of oversight efforts during fiscal years 2016 and 2017 for the selected states showed that OCS and states conducted required oversight activities, as well as additional oversight activities, and provided training and technical assistance to help CSBG recipients meet CSBG program requirements. Our review of file documentation for six selected states where OCS conducted compliance evaluations during fiscal years 2016 and 2017, and six selected states where OCS conducted routine oversight, showed that OCS identified primarily administrative issues, but in some instances identified non-compliance and other more serious issues that required corrective actions, which the states took steps to resolve. We found largely similar results in our review of the selected states' onsite and routine oversight activities for local CSBG fund recipients for the same time period. Beyond findings of an administrative nature, a fiscal year 2017 OCS compliance evaluation found that one state did not conduct required monitoring of its eligible entities during fiscal year 2015. Also, one state identified financial mismanagement, which resulted in termination of a local grantee from the CSBG program. Additionally, we found that OCS and states provided training and technical assistance to help CSBG recipients meet requirements.
OCS and Selected States Conducted Onsite and Routine Oversight Activities, and Identified Issues Requiring Corrective Actions OCS's Onsite Compliance Evaluations OCS officials conducted onsite compliance evaluations, in addition to other oversight activities, for 12 states using a risk assessment and prioritization process during fiscal years 2016 and 2017. We reviewed six of these 12 states and found that a majority of errors identified by OCS were administrative. The CSBG Act requires OCS to conduct compliance evaluations for several states each year. Since fiscal year 2009, OCS has conducted onsite compliance evaluations in five to seven selected states each year, in addition to the routine oversight it conducts for all the states. According to OCS officials, the number of states visited each year depends upon available resources. OCS primarily bases its selection of states for onsite compliance evaluations on a risk assessment conducted using a scoring tool. The scoring tool generates a risk score of 1 to 5 for each state using a number of measures, as shown in figure 2. The various factors used in developing the total risk score are weighted to ensure the most significant risk indicators and prioritization factors have the most impact on the selection of states for onsite monitoring. The list of risk factors was developed by OCS in response to a recommendation from our 2006 report in which we found that OCS did not systematically use available information to assess risk to focus its monitoring resources on states with the highest risk. According to OCS officials, OCS rarely visits states that they identify as low risk or states that have very few local agencies as grantees, and they try not to visit the same state within 3 years of their last visit. OCS officials told us that monitoring resources limit their ability to reach all of the states for onsite review.
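The report does not reproduce the scoring tool's actual factors or weights here (they appear in figure 2), but a weighted 1-to-5 risk score of the kind described can be sketched as follows. The factor names and weights below are hypothetical placeholders, not the tool's real inputs.

```python
# Hypothetical illustration of a weighted 1-to-5 risk score of the kind
# the OCS scoring tool is described as producing. Factor names, ratings,
# and weights are invented for this sketch.
def weighted_risk_score(factor_scores: dict, weights: dict) -> float:
    """Each factor is rated 1 (low risk) to 5 (high risk); heavier
    weights give a factor more influence. The weighted average stays
    on the same 1-to-5 scale."""
    total_weight = sum(weights[name] for name in factor_scores)
    return sum(score * weights[name]
               for name, score in factor_scores.items()) / total_weight

scores = {"single_audit_findings": 4, "spend_rate": 2, "years_since_visit": 5}
weights = {"single_audit_findings": 0.5, "spend_rate": 0.2, "years_since_visit": 0.3}
risk = weighted_risk_score(scores, weights)  # about 3.9 on the 1-to-5 scale
```

Because the score is a weighted average of 1-to-5 ratings, it always lands on the same 1-to-5 scale, which is consistent with the range the report attributes to the tool.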
We found that, since fiscal year 2008, eight states have not received an onsite evaluation and 10 have been visited twice. According to agency officials, the risk assessment is part of a larger risk assessment and prioritization process designed to direct monitoring resources over multiple years. After determining risk under the scoring tool, OCS considers several other factors and may place a higher priority on states with lower risk scores when selecting states for onsite compliance evaluations. Agency officials said such factors include size of the CSBG award, findings from single audits, the rate at which the state spends its CSBG funds, time since the last OCS visit, and feedback from the OCS program manager using information gathered from the quarterly calls with the states. For states selected for onsite compliance evaluations, we found that OCS conducts a comprehensive review of each state's plan and annual reports and examines the state's supporting documents to determine if that state is meeting the requirements of the CSBG program. Although OCS reviews the plans for all 56 states as part of its routine oversight efforts, during the onsite visit the agency also conducts interviews with staff and examines state statutes or regulations and supportive information, such as financial ledgers and oversight procedural manuals. OCS also reviews the state's grant funding to determine if the state allocated the funds in accordance with the requirements of the CSBG program. Additionally, OCS reviews each state's fiscal controls and accounting procedures and associated documents to assess the financial integrity of the state's process for drawing down federal funds, providing funds to local agencies, and reporting financial information. For example, OCS officials may review the state agency's bookkeeping system and accounting software.
In our review of OCS’s file documentation for the six selected states, we found OCS generally identified administrative errors, but in some instances identified issues of non-compliance and other issues that the states took action to resolve. For example, during its fiscal year 2017 onsite visit to Louisiana, OCS found that Louisiana did not implement procedures to monitor and track prior year single audit findings for corrective action and issue management decisions as required. To address this concern, the state assigned a member of its staff to execute these duties and submitted a copy of the Single Audit Process and audit log to OCS. Additionally, OCS found that Louisiana did not visit any of its 42 local agencies in fiscal year 2015 because of limited capacity such as staffing shortages, among other non-compliance issues. OCS determined that Louisiana addressed this issue by visiting all of the local agencies before the end of fiscal year 2017. Also, in a fiscal year 2016 onsite visit to Indiana, OCS found that the state agency did not submit a required financial report to account for CSBG expenditures within established timeframes in two consecutive fiscal years—2014 and 2015—due to the lack of a process to ensure the timely submission of the report. OCS also found that the financial report for fiscal year 2014 contained incorrect amounts for certain expenditures. The Indiana state agency responded to the issues by developing formal written procedures regarding the preparation and submission of financial reports. In addition, for the six selected states, we found that OCS had assessed state plans and annual reports to ensure that the states were complying with the programmatic, financial, and administrative requirements of the CSBG program, as outlined in the CSBG Act. 
OCS’s Routine Oversight Activities In our review of the selected states, we found that during fiscal years 2016 and 2017, OCS conducted routine reviews and other oversight activities to assess states’ use of CSBG funds. We selected six states (Alaska, Colorado, Kentucky, Mississippi, North Dakota, and Rhode Island) for our review of file documentation of OCS’s routine reviews. We found that for these six states, the routine reviews consisted of OCS reviewing all state plans and annual reports to determine if the state completed all sections of the plan and provided information about how it would achieve the goals of the program. In our review of file documentation for the six states, we found that OCS requested states to provide additional details about their plans; however, like the issues identified in the onsite compliance evaluations, the issues on which OCS commented were primarily administrative. For example, in fiscal year 2016, OCS reviewed Colorado’s 2016 annual plan and requested that the state provide additional details on plans to modify its organizational standards. Also, in its fiscal year 2017 review, OCS requested that Alaska provide additional information in its annual plan to explain how the state would prioritize providing services to individuals based on their income. We found that the states addressed OCS’s comments. OCS officials told us that they used quarterly calls as a part of their routine oversight. Agency officials told us that they generally use quarterly calls to discuss the state plans and the CSBG program broadly, and review the annual reports. OCS officials also told us that OCS uses these calls to update states on issues that have significant impact or importance on the successful operation of the CSBG grantees. In some cases, OCS program specialists may use the quarterly calls to identify areas where the state may be struggling and to discuss ways to address those issues. 
In addition, OCS officials stated that OCS program specialists will work with states to assist with developing work plans or reviewing corrective action procedures for high-risk local agencies. Selected States' Onsite Visits All three states we visited (New York, North Dakota, and Texas) conducted onsite visits to local agencies at least once every 3 years as required by the CSBG Act, and conducted routine oversight activities. In response to our June 2006 recommendation, OCS issued guidance clarifying that states must conduct an onsite review of each local agency at least once every 3 years. Besides the triennial onsite reviews, the law requires states to conduct: (1) follow-up reviews including prompt return visits to local agencies that fail to meet state goals, standards, and requirements, (2) an onsite review of new local agencies following the completion of the first year receiving CSBG funds, and (3) other reviews as appropriate, including reviews of local agencies found to have had other grants terminated for cause. Each of the states we visited had developed oversight policies and procedures that included information on how often CSBG programs should be reviewed onsite and what program operations should be covered during onsite visits; two states provided sample forms or instructions on what forms to use to record findings. For example, each state's policies and procedures established the frequency of onsite visits: New York and Texas conduct the visits at least once every 3 years and North Dakota conducts them once every 2 years (see table 2). The selected states' policies and procedures also specified that state officials assess local agency financial controls, review financial records and client files, and review local agency governance. They also described actions state officials were required to take when they identified deficiencies in a local agency's operations.
For example, in all three of the states we visited the policies and procedures required state officials to notify local agencies of deficiencies in writing. Our findings from the two local agencies we visited in each of the three states showed that state officials identified a variety of issues during their reviews, but none that required those local agencies to lose their CSBG funding (see table 3). Generally, we found that the issues identified could be characterized as fiscal, governance, or administrative. Fiscal issues included improper use of funds. For example, state officials in one selected state found that a local agency had improperly used a small amount of CSBG funds to purchase a grill for agency activities. Governance-related findings included issues with both the composition and manner of selecting the local agency’s CSBG Board of Directors members. For example, in Texas, state officials cited one local agency for not complying with the CSBG Act’s requirement regarding the structure of its Board. Also, North Dakota cited a local agency for not having the required representation of low-income individuals on its Board. Administrative issues included recordkeeping of information on participants. For example, Texas cited a local agency for inaccurately reporting a program participant as having transitioned out of poverty. The state agency found that the participant’s file did not contain all of the required documentation needed to show that the participant had maintained a certain income level for a 90-day period. The state agency officials we spoke with told us that their reviews sometimes identified more serious issues that resulted in local agencies being terminated from the program. For example, Texas terminated two local agencies’ CSBG funding due to financial mismanagement that was uncovered during state monitoring of the local agencies. 
Texas officials noted that the process for terminating local agencies with deficiencies was, for them, prolonged, in part because of the steps they took to provide technical assistance and work with agencies in an attempt to resolve issues before terminating them from the program. They told us they found it difficult to establish sufficient grounds for termination and, for one of the terminations, Texas officials continued to work with the agency for 2 years while also working with OCS. Texas officials told us that they found the guidance on terminations to be unclear. OCS officials acknowledged that the information memorandum they have developed on terminations provides broad guidance that covers a range of issues states might encounter, and may not have detailed guidance covering each situation. However, they noted that they work with states on a case-by-case basis, as they did with Texas, to provide guidance that is specific to each situation. State officials in the selected states told us that local agencies identified as having deficiencies are notified of those deficiencies and provided information on how to correct them. Further, our review of corrective actions required of selected local agencies by the states we visited showed that the local agencies addressed the concerns raised by the states. For example, Texas required a local agency that it found did not comply with CSBG Board requirements concerning membership to fill the vacancies on the Board and to provide the state a timeline for completing the required corrective actions. In addition to taking corrective actions, local agencies may be required to submit fiscal and programmatic reports more frequently when monitoring uncovers problems. For example, North Dakota's policies and procedures indicate that monthly reports may be required of local agencies that have been found to have financial recordkeeping problems.
We also found that state agency officials in our three selected states conducted onsite reviews more frequently than the required once every 3 years, as well as routine offsite reviews. For example, New York conducted quarterly onsite visits to all local agencies, where each quarterly visit involved a targeted review of a specific aspect of a local agency's CSBG program. For example, during the third quarterly visit of the year, state officials focused on local agency planning efforts for the next funding year, including the community needs assessment, while during the last quarterly visit of the year, state officials focused on grant closeout activities. New York, like North Dakota and Texas, also conducted routine offsite reviews of local agencies' activities and finances. In our three selected states, these reviews included examining fiscal and program reports periodically submitted by local agencies to state officials, holding periodic meetings and conference calls between state and local agency staff, and reviewing audit reports. These oversight activities also included fiscal audits conducted by the state auditor or independent auditors when a local agency's funding met the threshold for such review. Our review of single audits and interviews with each state's auditor's office in the three states we visited showed that none of the state audit agencies focused specifically on CSBG funding during the period of our review. Texas last conducted an audit focusing on CSBG in 2014 and North Dakota did so in 2011; neither state reported findings as a result of those audits. Officials from the state auditor offices in North Dakota and Texas said CSBG funding levels are below the federally established threshold for programs that must be audited. New York state audit officials told us that they had not conducted any audits focused on CSBG.
OCS and States Provided Training and Technical Assistance to CSBG Recipients OCS and states provided training and technical assistance through a variety of methods to help CSBG recipients meet program requirements. In fiscal years 2016 and 2017, OCS designated nearly $14 million over the 2-year period for such efforts. OCS officials told us that they determine what training is needed through input from OCS program specialists, information obtained through a data task force, and requests from state and local agencies. OCS officials stated that OCS's program specialists use the quarterly calls to identify the types of support that states need. For example, a specialist may notice that the states need additional guidance on using their customer survey results. In response, the specialist may share a guide on how states can use the survey results to set reasonable performance improvement goals. In addition, OCS sponsors a CSBG Data Task Force to recommend strategies for building network capacity for collecting, analyzing, reporting, and using performance data, as well as identifying ongoing training and technical assistance needs. OCS officials told us that they also conducted focus groups in 2016 to gather states' perspectives on their training and technical assistance needs. From these focus groups, OCS issued guidance stating its technical assistance priorities and strategy for meeting identified needs for training and technical assistance in areas including performance management, governance, effective state oversight, and results-oriented services and strategies. In 2017, OCS issued guidance laying out the agency's 3-year training and technical assistance strategy to guide the development and delivery of training and technical assistance for the CSBG network.
OCS officials said that once they establish the standards for the training and technical assistance and identify specific training needs, the agency awards cooperative agreements to organizations that focus on developing and providing training to build upon guidance already provided. During the period of our review, we found that each agreement focused on a specific type of training. For example, the National Association for State Community Services Programs (NASCSP) has a cooperative agreement with OCS to provide the orientation and oversight training for new state officials overseeing the CSBG program, and collects and coordinates the analysis of the data provided in the state plans and annual reports. OCS has also worked closely with NASCSP in the transition to the new performance framework. OCS officials told us that they are currently reviewing their training and technical assistance portfolio and may issue additional guidance on its strategy and coordination efforts during fiscal year 2020. In addition, OCS uses various methods to provide guidance to states to help them meet CSBG requirements, but state officials differed in their views on the usefulness of the guidance. OCS provides guidance to states through informational memorandums, letters, webinars, and communications with program specialists. Some of the state agency officials in two of the states we visited said that the guidance that OCS has provided to help states ensure compliance with program requirements is not always clear and up to date. For example, officials in North Dakota said that they did not understand the information requirements for a form used to gather information from applicants for local programs. State agency officials in Texas said that OCS issued guidance on the new information requirements just weeks before the reporting deadline, and that this did not allow states sufficient time to set up their data systems to meet the new requirements. 
OCS officials acknowledged that they were aware of the issues raised by state agency officials and explained that some states have difficulty with the guidance because it is written at a high level so that it can apply to all states. They also acknowledged the delays in getting new information requirements to states and said that such delays were related to troubleshooting the new smart forms and online database. They said that they do not anticipate such delays in the future. As previously discussed, Texas state officials also said that they found the guidance for terminating a deficient agency’s CSBG funding confusing. However, officials in New York said that they found the guidance to be clear. They said that the informational memorandum on terminating agencies’ CSBG funding is more prescriptive than previously issued CSBG guidance. OCS officials stated that the agency is continuously seeking opportunities to work with its technical assistance centers to identify the best means of delivering guidance to states and to eligible entities. OCS officials also said that they must continue to refresh training efforts when there is turnover among key staff in a state agency and work with new state administrators to transition into their new roles. State agency officials in all three states we visited told us that they used some of their state’s discretionary funding for training and technical assistance to help local agencies meet CSBG requirements. The CSBG Act allows states to use a maximum of 10 percent of their CSBG funds for training and technical assistance and other specified purposes. In the selected states, officials spent from $65,000 to over $400,000 for training and technical assistance for local agencies (see table 4). 
Across the three selected states, we found that the training provided to local agencies addressed what local agencies need to do to meet a wide variety of CSBG requirements, from planning community needs assessments to implementing performance management requirements. In addition, local agencies used some of the training funds states provided to send their staff to regional or national conferences for training (see table 5). State officials in two of the three states we visited said that they determine what training to offer based on analysis of feedback and specific requests from local agencies. For example, Texas identified training needs for local agencies through a Training and Community Affairs group that gathered information from local agencies about their training needs. Texas officials said they analyzed assessment results, feedback, and requests from local agencies and other sources to determine the training needs of individual state and local agencies. State officials said that they then met with the state association to develop the Joint State Training and Technical Assistance Plan and, ultimately, to provide trainings at the annual state conference, and to identify workshops, webinars, and online resources (guides, tools, best practices, and links to other training resources) that need to be added or changed. Similarly, state officials in North Dakota reported working closely with the state association of community action agencies to plan and conduct training for local agency staff. State and local agency officials also said that they have relied on the OCS-funded national resource centers for assistance. Officials in all the states we visited reported being helped by information the national centers provided on topics such as the new organizational standards and how to submit data in the new annual report. 
Local agency officials told us that they send staff to the conferences sponsored by the national resource centers to obtain training when funding is available for that purpose. In addition to training, state officials in the states we visited cited a variety of practices that contribute to effective oversight. Both New York and North Dakota officials emphasized the importance of frequent, ongoing communication with local agencies as crucial to successful oversight. New York also identified frequent visits to local agencies and immediate action in response to problems as additional key factors for effective oversight. OCS Reports on CSBG’s National Effectiveness, but Several Elements of Its Redesigned Performance Management Approach Do Not Align with Leading Practices OCS Uses State Outcome Data to Report on the National Effectiveness of CSBG, but the Performance Measure Used for this Purpose is of Limited Use OCS uses outcome data from state agencies that collect and aggregate data from local CSBG recipients to provide an indication of CSBG’s progress in meeting the three national program goals. As previously discussed, the three national goals of the CSBG program as established under the CSBG Act are to (1) reduce poverty, (2) empower low-income families and individuals to become self-sufficient, and (3) revitalize low-income communities. State agencies report data from a menu of more than 100 performance measures established by OCS and grouped by service types such as employment, early childhood programs, and education. OCS sets annual targets for the overall performance of the CSBG program and uses the aggregated state data as an indicator of CSBG’s national effectiveness to inform budget decisions consistent with federal requirements for performance management. 
Until fiscal year 2018, OCS used one performance measure—the number of barriers to economic security that the local agencies receiving CSBG funds eliminated for individuals, families, and communities—to provide an indication of CSBG’s national effectiveness. To do this, OCS combined the outcome data from 10 of the more than 100 performance measures from the state annual reports to derive a cumulative total number of barriers overcome. OCS selected the 10 measures as a way to track outcomes from services that range from emergency services to more comprehensive and coordinated services. The 10 measures included outcomes such as the number of participants who obtained a job, maintained employment, maintained an independent living situation, reached the goals of enrichment programs, or obtained emergency assistance. While this one performance measure of barriers eliminated was intended to provide OCS with an indication of how the program was meeting CSBG national goals, several weaknesses with this measure limited OCS’s ability to do so. First, the measure included duplicative counts. For example, an individual may overcome a number of different barriers to reach the outcome of obtaining a job. Because the measure tracked barriers rather than individuals, a single outcome could be counted multiple times when data from multiple measures were combined. Second, it is difficult to know which CSBG-funded program or service caused a positive outcome, or whether one service helped achieve multiple outcomes. Third, OCS officials clarified that when calculating this and other outcome measures, the removal of barriers to economic security is not solely the result of CSBG funds, but of all funding administered to local agencies that received CSBG funds. As such, they said that it is difficult to isolate the effects of CSBG funding. 
In its agency-wide budget justification for fiscal year 2020, HHS reported that in fiscal year 2017 local agencies eliminated 32.2 million barriers to economic security, well above the 27.6 million it set as its goal for the year. In the same year, 16.2 million individuals received support through local agencies receiving CSBG funds. While the performance measure aided OCS in providing some indication of how the CSBG program contributes to the goal of improving self-sufficiency, it still did not provide information on the program’s progress in meeting the other two national program goals. Leading practices in performance management stress that performance measures should be tied to the specific goals of the program. However, no such linkage existed between the performance measure OCS used to report on the progress of the CSBG program and the program’s three national goals. Under the CSBG Act, OCS is also required to annually report to Congress, among other things, a summary of certain information the states provide and its findings on state compliance. While OCS does submit such reports, we found that there has historically been a multi-year lag in OCS providing these reports to Congress. In May of 2019, OCS released its fiscal year 2015 CSBG report to Congress (see sidebar on data reported in the CSBG fiscal year 2015 report to Congress). Over the last decade, this type of reporting lag has been common, and OCS has taken an average of more than 3 years from the end of the federal fiscal year until the time Congress received the final report. OCS officials told us that they submitted the draft annual report for fiscal year 2016 for internal review by HHS in October 2018, but said that they could not project when the final report would be issued to Congress. They said they are currently drafting the fiscal year 2017 report. 
Several Elements of OCS’s Redesigned Performance Management Approach Do Not Align with Federal Leading Practices, Limiting OCS’s Ability to Report on CSBG’s National Progress OCS has taken steps to redesign its performance management approach, but several elements of the new approach do not align with federal performance management and internal control standards. OCS has been redesigning how it oversees and manages the performance of the CSBG program to better align with GPRAMA, according to OCS officials. Since fiscal year 2016, OCS has been implementing new performance management tools for the CSBG program, including updating what data it collects on the services and outcomes of the CSBG program (its performance measures) and how it collects those data. OCS officials stated that the changes are necessary to be able to provide more information and analysis on CSBG-funded programs and their outcomes. They also noted the importance of these updates given a tightening federal budget. As part of these changes, OCS updated its more than 100 performance measures by revising the language of some and adding new measures that state and local agencies can report on, including measures more focused on outcomes in the communities they serve. State and local agency officials told us that the increased emphasis on outcomes in the new measures was an improvement and increased their own focus on connecting CSBG funds to traceable results. In addition, OCS transitioned to an online data reporting system that allows state agencies to directly report and access CSBG program data. However, OCS is still revising how it will use the data provided by state and local agencies to reflect nationwide results. OCS is using the data collected in state annual reports to develop a new national measure intended to provide a national total count of individuals who achieve at least one positive outcome through programs and services offered by local agencies that receive CSBG funds. 
Unlike the prior measure of the number of barriers to economic security eliminated by local CSBG recipients, which could include duplicative counts, the new measure will be a count of individuals. OCS stopped using the prior measure after fiscal year 2017. Until OCS finalizes the new measure, it does not have a performance measure in place with targets and results that it can report to Congress. As such, it is unclear whether OCS will report national performance outcomes for fiscal year 2018 or how useful the new measure will be while it remains in development through fiscal year 2022. While OCS has taken steps to redesign its performance management approach, several elements of the new approach do not align with federal performance management and internal control standards. Specifically, OCS has not established (1) how the new national measure will be used to assess CSBG goals, (2) the relationship between state and local measures and program goals, and (3) how OCS will monitor the reliability of state and local agencies’ program data. How the newly developed national measure will assess CSBG program goals. As discussed, OCS is developing a new national measure intended to provide a total number of individuals who achieved at least one positive outcome from CSBG-funded programs or services. However, it is unclear which of the three program goals—reducing poverty, empowering low-income families and individuals to become self-sufficient, or revitalizing low-income communities—the new national measure is being used to assess. As noted previously, OCS officials have stated that they are working to establish ways to provide more information and analysis on programs and their outcomes. OCS officials also told us that they are using GPRAMA as a guide for these changes, and in our prior work we have reported that these requirements can serve as leading practices for strategic planning at lower levels within federal agencies. 
GPRAMA requires agencies to establish performance goals and a balanced set of performance indicators, including output and outcome indicators as appropriate, in measuring or assessing progress toward those goals. Additional leading performance management practices state that performance measures should be tied to the specific goals of the program. However, OCS’s new measure, which is intended to provide a count of the number of individuals who achieve one or more positive outcomes, does not specify which of the three national program goals the new measure will address, nor how the other two national program goals will be addressed. OCS officials told us that the new measure is related to two of the three goals because it aggregates data from some of the outcome measures focused on individual and family outcomes. However, officials acknowledged that the agency has not yet developed a national measure for revitalizing low-income communities. Officials stated that they plan to report on progress toward developing these measures and that the agency will provide examples of community-level outcomes in upcoming reports to Congress. Without clearly linking the measure to the goals, there is no way to tell if, and to what degree, the services local agencies are providing through CSBG grant funds are having the desired effect on their communities, even if examples are included in the shared results. How state and local performance measures are related to the three program goals. It is unclear how the large number of updated state and local performance measures under OCS’s redesigned approach aligns with CSBG’s three national program goals. OCS still collects data on more than 100 measures, but it is unclear which of these measures will be analyzed at a national level. 
According to OCS officials, these data are most useful to state and local agencies for assessing outcomes against their unique goals, and numerous measures are necessary to capture the variety of services and outcomes across the 1,000 local agencies. In our prior work on ways that agencies could improve performance management, we have stated that using a minimal number of critical measures is a leading practice. We have found that organizations that seek to manage an excessive number of performance measures may risk creating confusing, excess data that will obscure rather than clarify performance issues. The large number of measures can also further complicate OCS’s efforts to align the measures with CSBG’s three national program goals. How OCS will assess data reliability long-term. Although OCS is taking steps to assess data collected from state and local agencies for its new national measure, it does not have a written plan for how it will assess the data’s reliability for future years. As previously discussed, OCS is using a new data reporting system to collect the data it will subsequently use for its new national measure, and these data will now be received directly by OCS instead of through a third party. However, OCS does not have written plans in place for how the agency will determine whether the new data collected will be a valid measure of the national program’s effectiveness or whether the data will be reported reliably by the states into OCS’s online data system. 
OCS received its first round of performance data for the new measure for fiscal year 2018 on April 30, 2019, and is working with its cooperative agreement grantees and contractors to compile results and conduct quality assurance tests for the new performance data using a multi-step process that involves: OCS staff comparing data provided in the annual report to information previously provided in the state plans; OCS conducting quality assurance reviews, with assistance from the organizations the office has cooperative agreements with, that include checks for discrepancies, identification of items requiring clarification, and follow-up with the states; and OCS soliciting feedback from state officials and consulting with performance management experts within HHS about refinements to assist OCS in establishing a baseline that will be used in setting future targets. OCS officials also told us that the next steps will be to make any necessary modifications to the measure, such as adjusting how states calculate positive outcomes, and to establish a baseline to set future targets. On October 2, 2019, OCS announced via a Federal Register Notice that it was requesting a three-year extension, with minor changes, of the CSBG Annual Report. OCS plans to make only minor changes to the current data collection tool for 2 years to allow state and local agencies time to assess current information and intends to begin a longer-term planning process starting in fiscal year 2020. OCS officials told us that they plan to implement and maintain a quality assurance process to ensure the accuracy of the data based on data from previous years. While the process OCS has put in place to ensure data reliability for the first round of data collected for the new measure is a step in the right direction, OCS does not have a plan for assessing future years’ data. 
OCS officials told us that they will use selected cooperative agreements and contracts to develop a written plan for how the agency will monitor state and local agency data reliability going forward, but did not provide a timeframe for when this would be completed. Leading practices established by federal internal control standards state that agencies should use quality information that is appropriate, current, complete, and accurate to make informed decisions and evaluate the entity’s performance in achieving key objectives. OCS officials reported that they and contractors are working with the states to adjust and finalize data for fiscal year 2018 by November 2019. By not aligning its redesigned performance management approach with federal performance management leading practices related to program goals, performance measures, and data reliability, OCS cannot properly assess its progress in meeting CSBG’s three national goals. Conclusions Poverty erodes the well-being of individuals, families, and communities. The CSBG program is intended to reduce poverty, empower low-income individuals and families to become self-sufficient, and revitalize low-income communities. The CSBG program allows local agencies to use funds in a wide variety of ways to reduce the causes of poverty in the communities they serve. However, the inherent flexibility of the program also makes it difficult to assess the program’s performance. OCS recently redesigned its performance management approach to better understand how well the CSBG program is progressing toward meeting national goals. However, several elements of the redesigned approach do not align with leading practices in federal performance management. Inconsistencies with these practices, such as having an excessive number of performance measures and lacking a plan for assessing the reliability of state and local performance outcome data, limit OCS’s ability to demonstrate the national effectiveness of the CSBG program. 
As such, OCS cannot assure the Congress and the American public that the funding is meeting its intended purpose to reduce the causes of poverty. Recommendation for Executive Action The Director of OCS, in developing the new performance management approach for the CSBG program, should ensure that its performance framework includes information on (1) details for how the national measure is linked to and used to assess the three national program goals, (2) descriptions of how the updated state and local performance outcome measures align with national program goals, and (3) a written plan for how OCS will assess the reliability of state performance outcome data. (Recommendation 1) Agency Comments We provided a draft of this report to HHS for review and comment. We received written comments from HHS, which are reprinted in appendix III. HHS concurred with our recommendation, and stated that it plans to take actions to better align its performance measures with the three national performance goals outlined in the new CSBG Theory of Change. While we commend HHS for its plans to address our recommendation, we urge HHS to focus on aligning its performance outcomes with the three national goals of the CSBG program as established by the CSBG Act, which are similar but not identical to the three goals outlined in the new CSBG Theory of Change. HHS also stated that it would implement additional actions to assess the reliability of state performance outcome data. In addition, HHS provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of HHS. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objective, Scope, and Methodology This appendix discusses in detail our methodology for addressing our two research objectives examining (1) the activities that the Department of Health and Human Services (HHS) and states conduct to oversee the state and local agencies that receive Community Services Block Grant (CSBG) funds and (2) the extent to which HHS assesses the outcomes of the CSBG program. We scoped our review of the CSBG program to include the 50 states, American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, and the United States Virgin Islands, which are defined as states under the CSBG Act. In addition to the methods we discuss below, to address both our research objectives, we reviewed relevant federal laws, federal grants management guidance, and agency documents that describe the federal requirements and responsibilities for overseeing states’ CSBG programs and assessing program outcomes. We interviewed HHS Office of Community Services (OCS) officials and reviewed relevant research from OCS and the HHS Office of Inspector General, as well as our prior work on CSBG and other federal grant programs. Further, we interviewed representatives of the National Association for State Community Service Programs (NASCSP); state officials from state agencies that oversee the CSBG program in New York, North Dakota, and Texas; and officials from six local agencies that receive CSBG funds. We also analyzed CSBG annual reports to Congress and NASCSP data on local agency allocations. 
Federal Oversight of the CSBG Program To address the federal oversight aspect of our first objective, we reviewed available information on OCS’s policies and procedures, including the risk assessment criteria OCS uses to select states for onsite compliance evaluations, and interviewed OCS officials about their oversight efforts. We also selected 12 states for an in-depth review of OCS’s oversight activities. These included six states (Indiana, Louisiana, Michigan, New York, North Carolina, and Texas) for which OCS conducted onsite compliance evaluations during fiscal years 2016 and 2017. We selected the six states where OCS had conducted onsite compliance evaluations based on which of the visited states OCS had prioritized as those in highest need of onsite reviews for fiscal years 2016 and 2017. We also randomly selected five states (Alaska, Colorado, Kentucky, Mississippi, and Rhode Island) where OCS did not conduct such evaluations but conducted routine reviews. We also selected a sixth state—North Dakota—because OCS had not visited the state in several years. We compared the results to see if there were any notable differences between the two sets. While our findings are non-generalizable, they provide insight into the different levels of review OCS conducts and examples of OCS oversight actions. Our file documentation reviews included a review of: OCS’s comments on each section of the states’ program documents, including the state plan and annual reports; actions the states took to address OCS’s comments; and states’ fiscal controls and financial and program oversight documents. Table 6 provides a summary of the characteristics of the 12 states we selected for review. 
State and Local Oversight of the CSBG Program To address the state and local oversight aspect of objective one, for a more in-depth look at state oversight practices, including promising practices and challenges, we visited three states: two states (New York and Texas) for which OCS conducted onsite compliance evaluations and one (North Dakota) for which OCS conducted a routine review. We selected these states using several criteria, including state grant amounts, number of local agencies, whether the HHS Office of Inspector General had reviewed the state’s use of CSBG funds, the time since the state was last visited by OCS for a compliance evaluation visit, and recommendations from experts at NASCSP and at OCS, who based their recommendations, in part, on states that had promising practices for overseeing local agencies (see table 7). Our final state selections comprise a diverse sample based on these criteria. For example, our selected states include a state with a low number of local agencies, one with a large number of local agencies, states with high and medium amounts of funding, and a state with a low amount of funding. During our state site visits, we interviewed and collected information from state and local agency officials about state oversight efforts from fiscal years 2016 through 2017. For each of the three states, we interviewed state program officials and reviewed related documentation, including state policies and procedures, state single audits, onsite oversight guides and reports, and reporting forms for local agencies. We also visited two local agencies in each state and interviewed staff to learn more about state oversight efforts, including fiscal and performance reporting, onsite visits, training and technical assistance, and promising practices and challenges to such oversight. We conducted these visits in November and December 2018. 
In each state we visited, we reviewed program files for the two local agencies we visited, including oversight, financial, and performance reports, and follow-up correspondence concerning the findings from state agency visits to those local agencies. Information collected from state and local agency officials during our site visits is not generalizable to all state CSBG programs. In addition, we obtained information on state audit findings related to CSBG and met with state auditors during site visits to learn more about additional state oversight of CSBG and local agencies and whether any coordination occurred between the different federally funded programs offered by the local agencies to support state oversight efforts. We reviewed the Single State Audit findings for fiscal years 2016 through 2017 for each of the three states and six local agencies we visited. We reviewed these audit reports to determine whether there were findings pertaining to CSBG and, if so, the nature of those findings. Assessment of Program Performance for the CSBG Program To address our second objective, we reviewed the program performance indicators OCS uses to measure program outcomes in relation to the stated goals of the CSBG program. We also reviewed OCS’s design and implementation plans for a new performance management approach, including revised performance measures for assessing program outcomes. We compared OCS’s previous performance management approach to its new one, including the types of data it collected and its methods of collecting data from state and local agencies. In conducting our work, we also interviewed OCS officials about the goal of, and changes to, the performance management approach and reporting requirements. Additionally, we interviewed state officials on their experience with CSBG program performance. 
We reviewed leading practices in grant performance management identified in federal guidance and in GAO reports and assessed OCS’s approach against federal performance and internal control standards. We conducted this performance audit from May 2018 to November 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Table of Federal Community Services Block Grant (CSBG) Allocations to States, Fiscal Years (FY) 2016 through 2019 This table includes all states as defined by the CSBG Act, which was the focus of our review. Appendix III: Comments from the Department of Health and Human Services Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Mary Crenshaw (Assistant Director), Melissa Jaynes (Analyst-In-Charge), Sandra Baxter, and Stacy Spence made key contributions to this report. Also contributing to this report were James Bennett, Grace Cho, Alex Galuten, Danielle Giese, Corinna Nicolaou, Monica Savoy, and Almeta Spencer.
Why GAO Did This Study CSBG is one of the key federal programs focused on reducing poverty in the United States. In fiscal year 2019, CSBG provided about $700 million in block grants to states. In turn, states provided grants to more than 1,000 local agencies, which used the funding to provide housing and other services to program participants. HHS is responsible for overseeing states' use of this funding, and states have oversight responsibility for local agencies. GAO was asked to review CSBG program management. This report examines (1) how HHS and selected states conduct their oversight responsibilities and (2) how HHS assesses the effectiveness of the CSBG program. GAO reviewed files for six of the 12 states where HHS conducted onsite compliance evaluations during fiscal years 2016 and 2017, and six states where HHS conducted routine monitoring—five of which were randomly selected. GAO visited three states, selected based on their CSBG funding amount and other factors, to conduct in-depth reviews of their monitoring activities. GAO also reviewed agency documents and interviewed HHS and selected state and local officials. What GAO Found The Department of Health and Human Services (HHS) and the selected states GAO reviewed provided oversight of the Community Services Block Grant (CSBG) program through onsite visits and other oversight activities to assess grant recipients' use of funds against program requirements. Specifically, GAO found: HHS and the selected states conducted required oversight activities. The Community Services Block Grant Act requires HHS to conduct compliance evaluations for several states each year and requires states to conduct onsite visits to local CSBG recipients at least once every 3 years to evaluate whether recipients met various goals. During fiscal years 2016 and 2017, HHS conducted onsite compliance evaluations for 12 states that it deemed most at risk of not meeting CSBG requirements. 
GAO's visits to three states found that all three had conducted onsite visits to local grantees during the same fiscal years. HHS and the selected states also conducted additional oversight activities, including routine reviews and quarterly calls. HHS and state monitoring activities primarily identified administrative errors, instances of non-compliance, and other issues, which grant recipients took steps to address. For example, an HHS fiscal year 2017 compliance evaluation found that in fiscal year 2015 one state neither implemented procedures to monitor and track findings from a state audit nor monitored eligible entities as required. HHS uses state outcome data to report on CSBG's national effectiveness, but these data are not aligned with the national program goals to reduce poverty, promote self-sufficiency, and revitalize low-income communities. HHS recently redesigned its performance management approach to improve its ability to assess whether the CSBG program is meeting these three goals, but several elements of the approach do not align with leading practices in federal performance management. GAO found that HHS's redesigned approach does not demonstrate: how the agency's newly developed national performance measure—intended to provide a count of the number of individuals who achieved at least one positive outcome through CSBG funds—will assess the program's progress in meeting national program goals; how the state outcome data, consisting of more than 100 state and local program measures, relate to CSBG's three national goals; and how data collected from state and local agencies will be assessed for accuracy and reliability. Without aligning its redesigned performance management approach with leading practices, OCS cannot properly assess its progress in meeting CSBG's three national goals.
What GAO Recommends
GAO is recommending that HHS's new performance management approach include information on how its performance measure and state outcome measures align with program goals and how it will assess data reliability. HHS agreed with GAO's recommendation.
Background
AOC is responsible for the maintenance, operation, and preservation of the buildings that comprise the U.S. Capitol complex, as shown in figure 1. AOC is organized into the following 10 jurisdictions, each of which is funded by a separate appropriation: (1) Capitol Building, (2) Capitol Grounds and Arboretum (hereafter the Capitol Grounds), (3) Capitol Police Buildings, Grounds, and Security (hereafter the Capitol Police), (4) Capitol Power Plant, (5) House Office Buildings (hereafter the House), (6) Library Buildings and Grounds (hereafter the Library), (7) Planning and Project Management (PPM), (8) Senate Office Buildings (hereafter the Senate), (9) Supreme Court Building and Grounds, and (10) U.S. Botanic Garden (hereafter the Botanic Garden). PPM provides consolidated services to all of AOC's jurisdictions, such as long-range facility planning, historic preservation, and architectural and engineering design services. In addition, PPM manages systems that span jurisdictions, including electrical distribution and emergency generators. PPM is also the parent organization of the Division, which provides construction and facility management support to all of AOC's jurisdictions. Established in the 1970s, the Division's mission is to "support AOC jurisdictions serving their Congressional and Supreme Court clients by providing high quality construction and craftsmanship with seamless flexibility, best value, and extraordinary customer service, while protecting our national treasures." The Division's operations are funded through a mix of appropriations and project funding from the jurisdictions. Specifically, according to AOC officials, the agency's appropriation for Capital Construction and Operations provides the salaries and expenses of up to 13 permanent staff. The salaries and expenses of the remaining staff, as well as other costs (such as materials), are covered by the project funding the Division receives from the jurisdictions.
According to AOC officials, essentially, the jurisdictions hire the Division to execute work on their behalf, and the Division charges the jurisdictions for its expenses. As a result, the number and type of temporary employees the Division employs at any given time is directly related to the projects it is performing for the jurisdictions. As of October 2018, of the Division's 162 employees, 12 were permanent employees responsible for executive management and administrative functions. The remaining 150 were temporary employees—124 trade workers and 26 construction support employees—hired under temporary (e.g., 13- or 24-month) appointments. The trade workers include electricians, plumbers, masons, woodcrafters and carpenters, cement finishers, sheet metal mechanics, painters and plasterers, hazardous material abaters and insulators, laborers, and warehouse and material handlers. The construction support employees include personnel who perform activities such as construction management, purchasing, and timekeeping. The Division's temporary employees are eligible for benefits. By law, AOC is generally required to provide "the same eligibility for life insurance, health insurance and other benefits" to temporary employees who are hired for periods exceeding one year. The benefits AOC's temporary employees receive may differ from those of other federal temporary employees in the executive branch, since benefits vary depending on the type of temporary appointment and the employing agency, among other things. For example, employees serving under an appointment limited to 1 year or less are generally not eligible for the Federal Employees' Group Life Insurance program. As previously stated, the Division pays for the salaries and expenses of its temporary employees with project funding from the jurisdictions. That project funding covers both the Division's direct and indirect costs.
Direct costs are those directly attributed to and expended on a project, such as labor (i.e., trade workers) and materials. In contrast, indirect costs are necessary costs that are not directly attributable to a specific project, such as employee leave and training, as well as salaries for construction support employees, such as supervisors and purchasing agents. To pay for its indirect costs, the Division charges the jurisdictions what it calls an "indirect rate." As of October 2018, the Division's indirect rate was 0.85. The Division applies this rate to every direct labor-hour associated with a project it executes for the jurisdictions. For example, for a trade worker with a hypothetical hourly cost of $45, the Division charges the jurisdictions about $83, as shown in figure 2. For more information on the Division's direct and indirect costs, see appendix II.

Jurisdictions Used the Division for a Wide Range of Projects, Citing Flexibility and Capacity as Key Factors, and Were Generally Satisfied with the Division's Services

Jurisdictions Used the Division for a Wide Range of Projects
Based on our analysis of the Division's data for projects completed during fiscal years 2014 through 2018, the jurisdictions used the Division to varying degrees for projects that ranged widely in terms of cost, complexity, and duration.

Cost: There was a wide range in the nominal cost of individual projects the Division completed during fiscal years 2014 through 2018. The smallest individual project cost about $1,100 in 2017 dollars to perform hazardous materials testing in the Ford Office Building for the House jurisdiction in fiscal year 2016. Larger projects may be done in phases and when combined can cost millions of dollars. For example, in 2015 the Division completed a lighting project at the James Madison Building for the Library jurisdiction in two phases at a total cost of about $9.8 million in 2017 dollars.
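The indirect-rate charge described in the background above (the $45-per-hour trade worker example shown in figure 2) amounts to multiplying the direct hourly labor cost by one plus the indirect rate. A minimal sketch of that arithmetic, assuming the 0.85 rate in effect as of October 2018 (the function name is illustrative, not AOC's):

```python
def hourly_charge(direct_hourly_cost: float, indirect_rate: float = 0.85) -> float:
    """Total hourly amount the Division bills a jurisdiction:
    the direct labor cost plus the indirect rate applied to that cost."""
    return direct_hourly_cost * (1 + indirect_rate)

# Hypothetical $45/hour trade worker from the report's figure 2 example:
print(round(hourly_charge(45.00), 2))  # about $83 (45 x 1.85 = 83.25)
```

Because the rate applies to every direct labor-hour, a project's total indirect charge scales with the labor hours billed, which is consistent with the Division recovering its indirect costs only through project work.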
Complexity: During this period, the Division's projects ranged from work involving one type of activity or trade to work involving several phases or many trades. For example, the Capitol Power Plant jurisdiction used the Division for paint projects and a door replacement. Other projects included the construction of a lactation suite at the Ford House Office Building. For this 4-month project, the Division performed carpentry, electrical work, hazardous materials abatement, and other tasks in order to demolish an existing women's restroom and build a lactation suite with an adjacent, smaller women's restroom (see fig. 3).

Duration: During this period, the jurisdictions used the Division for projects that varied from quick-turnaround projects that took a few days to complete to longer, multi-year projects. Most (about 88 percent) of the projects were completed within one year, while about 4 percent were completed between 1 and 2 years, and about 8 percent took 2 or more years to complete. For example, the Senate jurisdiction used the Division for an elevator repair project in 2016 at the Russell Office Building that took one day to complete, while smoke detector upgrades in the James Madison Building for the Library jurisdiction took over 5 years and were completed in 2014. We also identified several examples of projects that the Division did for jurisdictions in phases. Sometimes the individual phases each lasted less than one year, but when combined the work spanned multiple years. For example, the Division built additional office spaces for staff displaced by the House jurisdiction's renovation of the Cannon Office Building. Each phase of the work was completed within one year, but the work spanned almost 2 years from November 2014 to August 2016. The extent to which each of the jurisdictions used the Division also varied.
Based on our analysis of the Division's data and discussions with the jurisdictions, the Library, House, and Senate jurisdictions were the primary users of the Division during fiscal years 2014 through 2018, comprising more than 90 percent of the total work by cost for completed projects, as shown in figure 4. The Division completed projects exceeding $1 million for each of these jurisdictions.

Jurisdiction Officials Cited the Division's Flexibility and Capacity as Factors That Influence Decisions to Use the Division
While jurisdictions have the option to use their own staff or a contractor for projects, jurisdiction officials said they consider a range of factors when determining whether to use the Division. They most frequently cited the Division's flexibility in responding to scheduling and scoping changes and the jurisdictions' own internal capacity to execute a project. They less frequently cited other factors, such as the availability of appropriations.

Flexibility
Schedule: Jurisdiction officials said the Division provided scheduling flexibility at no extra charge compared to using outside contractors. According to jurisdiction officials, when projects require a great deal of flexibility, the jurisdictions may be more likely to choose the Division over a contractor because the Division can start and stop work as needed and can work nights or on weekends if necessary to keep a project on schedule without charging extra fees. For example, work on projects may need to be stopped or delayed for a variety of reasons, such as for security purposes if there is a protest near the worksite, or during a presidential visit. Jurisdiction officials also noted that the Division can typically mobilize faster than a contractor, a consideration that can be an important factor in determining whether to use the Division.
For example, jurisdiction officials noted that the Division's employees can begin work faster than an outside contractor because they have employee identification badges that authorize access to most buildings across the Capitol complex without an escort. Contractors must obtain a badge prior to accessing a work site and require escorts in instances when they do not have an AOC or site-specific badge, and the process of obtaining a badge adds time to when a contractor can begin work. As another example, jurisdiction officials also told us that using a contractor requires that AOC develop full design specifications for a project, a process that takes time and resources. In contrast, the Division can execute work without full design specifications. For example, Capitol Power Plant officials told us they used the Division for renovations to their Administration Building because, according to the officials, the Division started the work sooner, without design specifications, and thereby completed the project faster than a contractor likely could have. Capitol Power Plant officials explained that the work—which included new carpet and painting—was agreed upon with the Division without spending time developing detailed design specifications that would have been required to obtain a contractor for the work.

Scope Changes: According to jurisdiction officials, the Division is typically more flexible than a contractor when dealing with issues that arise from unforeseen site conditions or changes to a project's scope. For example, during the construction of the lactation suite discussed above, the Division uncovered lead paint in the walls, requiring the work to stop until the lead paint was removed. According to officials, contractors typically charge for making changes to a project's scope, such as removing hazardous materials uncovered during construction, or for associated delays. The Division does not charge for making changes or for associated delays.
The Division can offer this flexibility because it charges based on direct labor hours spent on a project, meaning its expenses are charged as they are incurred. Accordingly, while a project's costs may increase if more labor is charged to a project, the Division also has the option of having its employees work on other projects if work on a particular project has to stop. Jurisdiction officials told us that the Division also works with the jurisdictions to save money on projects. According to officials, that was the case during a 2-year project that the Division completed at the Library jurisdiction's Jefferson Building in 2018 at a cost of $3.5 million in 2017 dollars. The project involved reversing the direction of doors in high-occupancy areas to allow for more orderly evacuation of occupants in the event of an emergency, as shown in figure 5. It also involved replacing some of the building's historic doors and associated hardware with replicas that meet modern safety standards. Officials told us the Division helped the jurisdiction save about $1.2 million (in current dollars) during the course of the project by identifying less expensive materials than originally planned.

Internal staff: Jurisdiction officials also told us that they use the Division for projects when they lack the internal capacity to execute them. Most of the jurisdictions have some trade workers, such as electricians and plumbers, on staff to handle their daily operations and maintenance needs. Jurisdictions may execute smaller projects with their own employees but may use the Division for projects beyond routine maintenance work that their own employees cannot fit into their schedules. For example, officials with the Senate jurisdiction told us that they have staff capable of performing cabinetry work but have used the Division in the past for cabinetry work so that their staff could focus on more routine maintenance work.
Senate jurisdiction officials also told us that they primarily use their own staff for construction work, but will use the Division as an option to supplement their staff when the volume of the Senate jurisdiction's own workload is higher than what can be handled internally.

Skill and equipment: Jurisdictions may use the Division if they lack the skills or equipment to execute a particular project. Officials from five of the jurisdictions told us that they have staff within their jurisdiction who can execute small projects involving hazardous materials, such as lead paint abatement under 2 square feet in size. Larger projects have additional abatement requirements, and the jurisdictions have used the Division for these projects. As another example, the Capitol Grounds jurisdiction used the Division in 2016 to install the annual Christmas tree on the Capitol lawn because the jurisdiction lacked the necessary equipment to do so. The Botanic Garden jurisdiction, which does not employ any masons, used the Division for a project at its Conservatory in 2016 because of the Division's masonry expertise. Officials with the Senate jurisdiction also cited the Division's masonry expertise, among other factors such as the Division's familiarity with the jurisdiction's buildings, in selecting the Division to repair the steps at the Russell Senate Office Building in 2017, as shown in figure 6.

Availability of appropriations: Jurisdiction officials told us that they might not use the Division if the work cannot be completed by the time the jurisdiction's appropriations expire. Specifically, because the jurisdictions pay for the Division's services as work is executed rather than upfront when the work is initiated, the jurisdictions must ensure that work by the Division can be completed before their appropriations expire. Jurisdiction officials told us that as a result, the Division may not be a realistic option when using 1-year appropriations near the end of the fiscal year.
In contrast, when using a contractor, jurisdictions may obligate fixed period appropriations prior to the end of the fiscal year for work that will continue into the following fiscal year.

Cost: Most jurisdiction officials said that a project's cost was not a key factor they considered when determining whether to use the Division for a project. When the jurisdictions are considering using a contractor, they are not required to obtain cost estimates from the Division first and generally do not do so. As a result, comprehensive information on the relative costs of using the Division compared to a contractor was not available. However, in cases where the jurisdictions told us they did obtain estimates from both the Division and a contractor, they said the Division was sometimes more expensive than a contractor and sometimes less expensive, as illustrated in the following examples. Officials with the Supreme Court Building and Grounds jurisdiction told us they used the Division to install a new heating, ventilation, and air-conditioning system in one of its buildings after obtaining an estimate from a contractor. According to officials, the project required specialized skills that the Division's trade workers did not have. However, once they received the contractor's estimate, the jurisdiction officials determined it was cheaper to pay for the Division's employees to be trained to do the project than to use a contractor. Officials with the Senate jurisdiction told us they obtained cost estimates for lead abatement work from both the Division and a contractor several years ago. According to officials, the contractor's estimate was less than that of the Division because the contractor proposed using different equipment for the project than the Division, and the jurisdiction used the contractor for the abatement.
Jurisdiction Officials Reported They Were Generally Satisfied with the Division's Services; A Few Suggested Making Changes to Its Operations
The jurisdiction officials we interviewed said they were generally satisfied with the Division's services, including the quality of its work, and were particularly satisfied with the flexibility the Division offers. Officials from seven of the nine jurisdictions we interviewed also told us they would not suggest making changes to how the Division currently operates. Officials from two of the jurisdictions suggested the organizational and cost-allocation changes discussed below. According to Division officials, implementing those suggestions would have implications for its operations and structure, and would require additional research and evaluation to determine if they are feasible.

Transfer positions from the Division to its parent organization, PPM: Officials from one jurisdiction suggested that the Division could lower its indirect rate by transferring payroll responsibility for some supervisory positions, such as its construction or safety managers, from the Division to PPM. As discussed above, because the Division does not receive an appropriation for the salaries and expenses of its temporary construction support employees, it pays for those costs by charging the jurisdictions for direct labor hours and also an "indirect rate." Division officials told us that payroll responsibility for some construction support positions could be transferred to PPM and that this transfer would reduce the Division's indirect rate because that rate increases by about 1.1 percent for each employee captured in the rate. Because PPM is the parent organization of the Division, this step would not reduce the total costs of projects to AOC as an organization; rather, it would transfer the responsibility for paying certain costs from the jurisdictions to PPM. According to AOC officials, this could have several effects.
First, PPM would need to find a way to fund those positions, which would likely require an increase in its appropriations to cover additional positions. Second, transferring supervisory positions to PPM could mean those personnel could be tasked to support other AOC-wide efforts, rather than supervising and managing the day-to-day execution of the Division's projects. Similarly, Division officials told us that transferring supervisory positions or support personnel such as purchasing agents to PPM could reduce the Division's flexibility, such as its ability to hire additional supervisors or support personnel if its workload increases in the future.

Make the Division's indirect rate variable: Officials with that same jurisdiction suggested that the Division consider making its indirect rate (which as of October 2018 was a fixed rate of 0.85) a variable rate. Under a variable-rate approach, projects would have different rates depending on their needs. For example, a project requiring only labor would be charged one rate, but a project requiring labor and additional services, such as the purchasing of materials, would be charged a higher rate. According to Division officials, charging the jurisdictions varying rates depending on the extent to which a project utilizes the Division's resources could reduce the cost for some jurisdictions but increase it for others, since the Division must charge enough to recover all of its costs. Division officials told us AOC evaluated this option in 2017 but decided against it. AOC determined that making the Division's indirect rate variable would result in increased administrative burden because the Division would have to determine which projects and work hours would be variable and which would not. It would then need to track and assess them differently for each project.
Provide additional on-site supervisors for complex multi-trade projects: For most projects, the Division provides supervisors who manage the day-to-day execution of multiple projects. However, jurisdictions have the option to pay, as a direct cost, for dedicated, on-site supervisors to oversee and manage their projects exclusively. Officials with one jurisdiction suggested that the Division make it standard practice for complex, multi-trade projects to have a dedicated, on-site supervisor. Division officials told us that having a dedicated, on-site supervisor works best for complex, multi-trade projects such as the East Phase of the House jurisdiction's 13-month, $15 million child care center project that the Division completed in December 2018 (see fig. 7). According to Division officials, having dedicated, on-site supervisors day and night during construction enabled the project to remain on schedule and below budget because the supervisors were responsible for overseeing all construction activities and could immediately address questions or concerns that arose, thereby resulting in increased efficiency and cost savings. Division officials told us that while the project's scope increased during execution, the Division was able to work additional nights and weekends to meet the project's deadline. Even with the additional scope, Division officials estimated that they saved the House jurisdiction about $500,000 (in current dollars) on the project through increased oversight and by identifying areas of cost savings, such as purchasing less expensive lighting fixtures than called for in the design.
The Division Has Taken Steps to Strategically Manage Its Workforce but Does Not Have a Formalized Process for Collecting Some Information

Uncertainties Make Anticipating the Division's Workforce Needs Challenging
The variability of the Division's workload makes anticipating the necessary size (number of employees) and composition (mix of trades and number of employees within each trade) of its workforce challenging. AOC has reported to Congress that the primary drivers behind the size and composition of the Division's workforce have been project demand and the availability of funding. As previously discussed, the Division's workload is driven by projects the jurisdictions hire it to perform. Without projects to execute for the jurisdictions, the Division does not have funding to pay the salaries and expenses of most of its employees. Accordingly, the size of the Division's workforce expands and contracts in response to the jurisdictions' demand for work. For example, over the last 5 fiscal years, the size of the Division's trade workforce has fluctuated between a high of 191 in fiscal year 2016 and a low of 121 in fiscal year 2018. During that period, the number of employees the Division employed within each trade also fluctuated. Several factors contribute to the variability of the Division's workload and make determining its future workforce needs challenging. First, officials told us that the Division has no control over whether the jurisdictions use the Division for their projects. Second, even if a jurisdiction decides to use the Division, Division officials told us that projects are notional or uncertain until that jurisdiction signs a project agreement, among other things. Third, even with a signed agreement, jurisdictions can reduce a project's scope or cancel it altogether, a situation that can leave the Division searching for work for the trade workers it planned to use for the project.
Finally, differing project priorities also come into play, as both Division officials and representatives from three of the jurisdictions acknowledged that some projects and work for certain jurisdictions are a higher priority than others. According to officials, when priority or emergency projects arise, the timing and work for ongoing projects can be affected as trade workers are shifted to the priority or emergency. In some cases, the ongoing project may continue at a slower pace with fewer workers, and in other cases all work might be stopped for a period of time.

The Division Has Taken Steps to Anticipate Its Workforce Needs but Lacks a Formalized Process for Collecting Information on the Jurisdictions' Construction Priorities
Over the last several years the Division has made efforts to strategically manage its workforce to help ensure that it has the right number and composition of employees to meet the jurisdictions' needs. Our prior work has identified certain practices that, when implemented, can help federal agencies strategically manage their human capital. These practices include: (1) involving managers and stakeholders in decision-making, (2) basing workforce decisions on current needs and future projections, (3) having strategies to address workforce gaps, and (4) monitoring progress. As discussed below, we found that the Division has taken steps that generally align with those practices. However, it does not have a formalized process for collecting information that it uses to project future workforce needs, and we note that several of the steps it has taken date to the time of the March 2017 layoffs or more recently.

Involve managers and stakeholders in decision-making: The Division has taken steps to involve AOC's management, including the superintendents of the jurisdictions, in managing its workforce given the variability of its workload.
According to Division officials, its staff are in frequent contact with the jurisdictions and meet periodically with them to discuss the status of ongoing and future projects. The officials said that Division staff meet bi-weekly with the larger jurisdictions—such as the Senate, House, and Library—monthly or as needed with the others, and weekly with PPM to discuss the status of projects and workforce needs. According to Division officials, this regular communication with the jurisdictions is their primary and most important method of identifying and addressing workload issues or concerns. Jurisdiction officials echoed the Division's comments, noting that they are in frequent contact with Division staff.

Base workforce decisions on current needs and future projections: Over the last several years, the Division has taken steps to improve how it collects and tracks information from the jurisdictions upon which to base its future workforce projections. Prior to 2015, the Division used a paper-based process to collect information on the jurisdictions' work requests and tracked information on a spreadsheet. In 2015, the Division implemented a software tool called the Construction Division Management System (CDMS) to streamline that process, making it easier for the jurisdictions to submit requests for work. For example, using CDMS, the Division can now electronically collect information for ongoing projects from the jurisdictions, such as change orders and schedule updates, and the jurisdictions can electronically submit requests for cost or schedule information for future projects. According to Division officials, Construction Managers, who are familiar with the resource needs of individual projects, are responsible for updating and validating the information in CDMS—typically bi-weekly—and the information in CDMS is available to the jurisdictions to review and verify.
More recently, in July 2017, the Division hired a scheduler to develop resource-loaded schedules for ongoing projects. This involves assigning labor, materials, equipment, and other resources to a project's schedule. According to Division officials, the Division currently develops resource-loaded schedules for about 70 percent of its workload, as the projects that comprise its remaining workload are too small or short-term for such schedules. In addition, in October 2017, the Division began collecting additional information on the jurisdictions' construction priorities through a monthly data call. As part of this data call, which the Division performs via email, the Division requests updated information from the jurisdictions on their current projects, such as the expected start date or whether minor tasks remain, and the status of potential future projects. Using the information the Division collects from the jurisdictions, officials told us it then forecasts its workload and workforce needs over the succeeding 12 months. According to officials, those projections are an "art, not a science," because of the uncertainties surrounding the Division's workload. However, the Division has not formalized the process it uses to collect information about the jurisdictions' construction priorities. Specifically, we found that the Division lacks a written set of procedures for the monthly data call discussed above to help ensure that staff understand who is responsible for collecting information, what information should be collected, and when that information should be collected. This lack of procedures led to a situation in July 2018 where, according to officials, the Division did not conduct that data call; the Division has since set calendar reminders for key staff in an effort to help ensure they do not miss it again. While setting such reminders may have some benefit now, it does not ensure that others within AOC will execute that data call in the future.
Formalized processes, such as written procedures, can help ensure that steps an agency is taking can be implemented in a predictable, repeatable, and accountable way. Such procedures are also a key component of internal control designed to provide reasonable assurance that an organization’s operations are effective and efficient. AOC officials agreed that a more formalized process for collecting information about the jurisdictions’ construction priorities could help ensure the data are collected consistently. It would also better position AOC management to ensure that the Division’s process will be implemented consistently and that the jurisdictions understand what information is expected of them. It could also provide reasonable assurance to AOC management and Congress that the Division is taking the steps necessary to manage its workload and basing its workforce projections on the most current information available. Have strategies to address workforce gaps: The Division has a number of strategies it can employ if the size and composition of its workforce are not aligned with its workload requirements. For example, officials told us the Division can use direct-hire authority to quickly fill positions if there is a shortage of employees with specific skillsets to meet the jurisdictions’ needs. Officials told us employees may also work overtime to meet the jurisdictions’ needs if the Division’s workload projections do not show a need to hire additional employees. In instances where there is a lack of work, officials told us the Division has the options of not renewing the appointments of its temporary employees; helping affected employees find positions in the jurisdictions to the extent practicable; or, if necessary, laying off affected employees, as it did in March 2017. 
Division officials told us they are also exploring additional strategies to help address potential instances where the size and composition of its workforce are not aligned with its workload requirements moving forward. One potential strategy involves using the Division to help address AOC’s backlog of deferred maintenance and capital renewal, which AOC estimated in 2017 was about $1.4 billion. Another potential strategy involves working with the jurisdictions to establish more large projects that provide a stable amount of work over a period of time. A recent example of such a project is the East Phase of the O’Neill Child Care Center project. According to Division officials, around 25–30 trade workers worked at the site at any given time, providing stability and work for multiple trades. When work on other projects was delayed or did not materialize, the Division was able to move the trade workers to the child care project. Monitor progress: Over the last several years, the Division has taken steps to monitor the accuracy of its workload and workforce projections by discussing its projections with AOC management, including the Architect of the Capitol and the superintendents of the jurisdictions, each month. According to officials, the Division began these monthly briefings for AOC’s management in December 2016, when its workload decreased due to the completion of work related to the renovation of the Cannon Office Building. During these briefings, Division staff provide the Architect of the Capitol and the superintendents with information on the Division’s active, committed, and potential projects over the next several months. According to Division officials, these briefings provide an opportunity to discuss with AOC’s management any issues or concerns they have with the Division’s workload. The Division employed the practices described above in the months leading up to the March 2017 layoff of the 30 temporary employees. 
Division officials told us that 5 to 6 months prior to March 2017, they anticipated a potential decline in the Division’s workload and worked with the jurisdictions to identify potential projects that the Division could execute, but sufficient additional projects did not materialize. During this process, the Division involved PPM, the jurisdictions, and AOC’s management, among others. The efforts to minimize the number of employees affected by any layoffs included identifying job openings within the jurisdictions that employees could apply for. According to officials, one employee was hired by the Senate jurisdiction, another by the Capitol Grounds jurisdiction, and a third by the Office of the Chief Administrative Officer in the House, prior to the layoff. During the course of our review, we observed that the Division employed these strategies again. Specifically, Division officials told us that they anticipated a potential decline in the Division’s workload in early 2019. The Division raised this potential with AOC’s management during the summer of 2018, and officials told us the issue was resolved once the House and Library jurisdictions identified several projects that the Division could execute beginning in 2019. AOC’s Appointment and Subsequent Layoff of Temporary Employees in March 2017 Followed Applicable Practices and Policy AOC’s authority to appoint and remove its employees is governed by title 2 of the U.S. Code, and AOC has established various practices and policy related to their terms of employment. We found that AOC generally followed its practices when it appointed 30 temporary employees and adhered to its policy when it subsequently laid them off in March 2017. 
AOC Generally Followed Its Practices in Appointing and Renewing the Terms of Employment for Temporary Employees Our review of the appointment letters for 27 of the 30 temporary employees laid off in March 2017 found that the letters specified that the position was temporary and was for a term not to exceed 13 months. We also found that 10 of the 27 appointment letters included language stating that the position was dependent on the availability of work or funding. As part of our review, we met with five of the nine employees that AOC rehired following the March 2017 layoffs; all five told us that they were aware of the temporary nature of their positions and of the fact that they could be laid off at any time due to lack of work. Human capital officials told us that in April 2017, they developed a standard appointment letter to communicate the terms of employment for temporary employees more consistently. This letter includes language explaining that temporary appointments may be terminated at any time due to a lack of work, lack of funds, or failure to meet management’s expectations. For a copy of AOC’s standard appointment letter for temporary employees, see appendix III. AOC may renew the employment of temporary employees at the end of their 13-month appointment based on project needs and the availability of funding, according to human capital officials. We found that the 13-month appointments for 26 of the 30 temporary employees were routinely renewed prior to their March 2017 layoff. Of the 26 temporary employees, 12 had been employed from 13 months to 5 years, 9 had been employed from 6 to 10 years, and 5 had been employed for more than 10 years. The remaining four had been employed for less than 13 months. Human capital officials told us that there is no limitation on the number of times an employee’s appointment may be renewed. 
To ensure that employees serving under temporary appointments understand the terms of their employment, human capital officials told us that since March 2014, employees who have had their appointments renewed sign a standard Extension of Temporary Appointment form. This form states that the position is temporary, may be shorter or longer than 13 months, and may end at any time. For a copy of this extension form, see appendix IV. AOC Generally Followed Applicable Policy When Laying Off the Division’s Temporary Employees AOC’s layoff policy allows the Director of PPM, as delegated by the Architect of the Capitol, to lay off the Division’s temporary employees for lack of work, lack of funds, or failure to meet management’s expectations. The policy does not specify which factors AOC should consider in selecting employees to be laid off, thereby allowing the agency discretion in this area. Our review of the layoff letters for the 30 temporary employees laid off in March 2017 confirmed that AOC communicated to the employees that the layoff was due to a lack of work. In this situation, Division officials said they determined the number of temporary employees needed to carry out the Division’s projected workload and considered two factors equally: (1) the employees’ performance and skillset and (2) the employees’ ability to work independently and as part of a team. Human capital officials told us that AOC’s offices of Employee and Labor Relations and its General Counsel reviewed the Division’s request and found no human-capital or legal concerns. The human capital officials drafted letters notifying the 30 employees of their layoff, effective upon receipt. Division supervisors provided the letters to employees at the start of their shifts on March 21, 2017. Figure 8 provides summary information by trade on the 30 temporary employees that AOC laid off in March 2017. 
At the time of the March 2017 layoff, AOC did not have a policy that required the Division to notify its temporary employees of an impending layoff. Human capital officials told us that they did not provide the Division’s temporary employees with advance notice of their layoff because of concerns that such advance notice could result in an unproductive and disruptive work environment. In terms of notifying relevant employee unions, human capital officials said they provided 12-hour advance notification of the layoff to one employee union, in accordance with that union’s collective-bargaining agreement. The five rehired employees we interviewed told us they were caught off guard by the March 2017 layoffs. None of the 30 temporary employees filed grievances related to the layoff, according to human capital officials. Since the layoff, human capital officials told us they recognized that AOC did not have a consistent policy for providing advance notice of layoffs to temporary employees across AOC’s 10 jurisdictions. According to AOC’s Chief Human Capital Officer, some jurisdictions provided advance notice of layoffs to temporary employees while others did not. To provide consistency with such notification and in response to our inquiries, in October 2018 AOC issued guidance standardizing the notification period for temporary employees laid off due to lack of work or lack of funds across all jurisdictions. This guidance directs jurisdictions to provide all temporary employees with a notification period of 2 weeks prior to the effective date of being laid off for these reasons. It also provides jurisdictions the option to request administrative leave so that the temporary employee may stop work immediately and be paid during the 2-week notification period. Conclusions The Division was created to serve as a flexible option that the jurisdictions can use to meet the facility needs of their congressional and Supreme Court clients. 
By design, the Division can hire employees if there is demand for its services and lay off employees, as it did in March 2017, if there is insufficient demand or project funding to pay them. In recent years, the Division has taken steps to more strategically manage its workforce and minimize disruptions to that workforce, in part by increasing its communication with the jurisdictions. However, formalizing the process the Division uses to collect information on the jurisdictions’ construction priorities, such as by providing staff with a written set of procedures that specifies what is required of staff and when, could help ensure that those staff consistently collect and use the best information to make decisions about the appropriate number of employees and the mix of trades. Formalizing that process in this manner could also help the Division provide reasonable assurance to AOC management and Congress that it is taking the steps necessary to manage its workload and basing its workforce projections on the most current information available. Recommendation for Executive Action The Architect of the Capitol should formalize the process the Construction Division uses to collect information on the jurisdictions’ construction priorities each month, such as through developing written procedures. (Recommendation 1) Agency Comments We provided AOC with a draft of this report for review and comment. AOC responded with a letter in which it concurred with our recommendation and said it intended to address the recommendation later this year. AOC’s letter is reprinted in appendix V. AOC also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees and the Architect of the Capitol. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. 
If you or your staff have any questions about this report, please contact us at (202) 512-2834 or rectanusl@gao.gov or (202) 512-7215 or gurkinc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology Our objectives were to: (1) describe how the Architect of the Capitol’s (AOC) jurisdictions use the Construction Division (hereafter the Division) and the factors they reported considering when deciding whether to use the Division, (2) assess how the Division manages its workforce given the variability of its workload, and (3) assess whether AOC’s appointment and subsequent March 2017 layoff of temporary employees from the Division complied with applicable policy. To describe how the jurisdictions use the Division and the factors affecting this use, we obtained and analyzed data on projects the Division completed for the jurisdictions during fiscal years 2014 through 2018. We focused our discussion of these data on the cost, scope, and duration of projects and do not present information on the number of completed projects because of differences in how the jurisdictions identify projects. To assess the reliability of the Division’s data, we reviewed available documentation and interviewed agency officials. We determined that the Division’s project data were sufficiently reliable for the purposes of this report, which includes describing the type and cost of projects the Division completed for the jurisdictions over the last 5 fiscal years and identifying illustrative examples of those projects. For appropriate comparison, the costs of completed projects we present in our report have been adjusted for inflation and converted to 2017 dollars using the fiscal-year gross domestic product index, which is compiled by the U.S. 
Department of Commerce, Bureau of Economic Analysis. We attempted to obtain comparable data for projects where the jurisdictions used their own employees or a contractor, but these data were not readily available. With respect to the jurisdictions’ use of their own employees, the Capitol Building jurisdiction attempted to obtain these data for us, but the data that were available did not include the cost of all labor spent on projects. Further, according to AOC, the jurisdictions do not capture data on employees’ time spent on construction work, so these data also included projects that were considered routine maintenance. With respect to the jurisdictions’ use of contractors, the data that were available also included purchase card transactions, among other unrelated costs. According to AOC, identifying just the contract costs of the jurisdictions’ construction projects would require that AOC conduct significant research and review every transaction associated with its contracts. To provide illustrative examples, we visited the sites of six projects that the Division was executing at the time of our review. To select these projects, we asked the agency to provide us with projects that would enable us to understand the nature of the work the Division performs for the jurisdictions. In addition to the four projects the agency provided, we selected two additional sites based on projects that were discussed during our interviews. During our visits, we met with Division officials and representatives from the jurisdictions to discuss the projects in detail. 
We visited the following projects: an abatement and insulation project at the Russell Senate Office Building; repairs to the drainage system at the Russell Senate Office Building; the replacement of doors at the Library of Congress; demolition and construction activities associated with the construction of a new lactation suite at the Ford House Office Building; demolition and construction activities associated with the construction of a new child care center at the O’Neill House Office Building; and the replacement of light poles across the U.S. Capitol complex. We also interviewed officials from the Division and AOC’s 10 jurisdictions, including their respective superintendents. Except for Planning and Project Management, we asked the jurisdictions whether they had any suggestions for changing the Division’s operations. We did not ask Planning and Project Management because the Division is a component of that jurisdiction. We then discussed with Division officials the potential implications of making those changes. We did not independently evaluate the implications of implementing the superintendents’ suggestions as part of this review. To assess how the Division manages its workforce, we reviewed pertinent documents, such as AOC’s August 2017 report to Congress on the Division, the Division’s Organization and Operating Plan, user guides for the Construction Division Management System, and prior GAO reports. We also obtained and analyzed payroll data for the Division for fiscal years 2014 to 2018 and interviewed Division officials. To assess the reliability of the Division’s data, we interviewed agency officials. We determined that the Division’s payroll data were sufficiently reliable for the purposes of this report, which includes describing the size and composition of the Division’s workforce over the last 5 fiscal years. 
We compared the Division’s efforts to manage its workforce to strategic human capital management practices identified in our prior work and to standards for internal control in the federal government. To assess whether AOC’s layoff of temporary employees from the Division in March 2017 complied with applicable policy, we reviewed relevant federal laws and agency policy, such as the Separation of Non-Permanent Employees Policy Memorandum (AOC Order 316-1). We also reviewed pertinent personnel documents, such as appointment letters, layoff letters, and Standard Form 50 personnel documentation. We compared AOC’s policy with AOC’s implementation during the March 2017 layoff of 30 temporary employees. We did not independently verify AOC’s application of the criteria used to determine which employees to lay off in March 2017. In addition, we interviewed officials from both AOC’s Human Capital Management Division and the Division. As part of our work, we requested interviews with the nine temporary employees that AOC subsequently rehired and interviewed the five who responded in order to obtain their perspective on AOC’s processes for laying off temporary employees. This information is not generalizable to all rehired temporary employees. We conducted this performance audit from March 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: The Construction Division’s Direct and Indirect Costs The Construction Division’s (hereafter the Division) costs include both direct and indirect costs. 
Direct costs are costs directly attributed to and expended on a project, such as labor (i.e., trade workers) and materials. Indirect costs are costs that cannot be directly attributed to a single project, such as costs associated with employee leave and training. Table 1 shows the components of the Division’s direct and indirect costs. To pay for its indirect costs, the Division charges the jurisdictions what it calls an “indirect rate” as part of the work it performs. As of October 2018, the Division’s indirect rate was 0.85. The Division applies this rate to every direct labor-hour associated with a project it performs for the jurisdictions. For example, a trade worker the Division employs who has a hypothetical hourly cost of $45 also has an indirect cost of about $38. Accordingly, that trade worker’s total hourly cost, which the Division charges the jurisdictions, is about $83. The Division developed the methodology for its indirect rate in 2012, in consultation with the Architect of the Capitol’s (AOC) Chief Financial Officer and the jurisdictions, after it determined that its funding model at that time did not adequately recover costs that were not directly attributable to projects. According to the Division, the primary driver for developing this indirect rate was employee leave. Specifically, the Division’s employees earn about 11 hours of leave per pay period, and the funds to cover that leave must be recovered through the rate because leave cannot be obligated and charged to a project at the time it is earned, before the employee takes it. The Division allocates its indirect costs among the jurisdictions using statutory authorities available to the Architect of the Capitol. According to AOC officials, historically, the Division’s indirect rate was determined by staff within the Division. The rate was determined by looking at historical cost and project data over the two prior fiscal years. 
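The rate arithmetic described above is straightforward; a minimal sketch, using the report’s October 2018 rate of 0.85 and its hypothetical $45 hourly cost:

```python
def loaded_hourly_cost(direct_hourly_cost: float, indirect_rate: float) -> float:
    """Total hourly cost charged to a jurisdiction: the direct labor cost
    plus the indirect cost recovered by applying the indirect rate."""
    return direct_hourly_cost * (1 + indirect_rate)

# The report's example: a trade worker with a hypothetical $45 direct
# hourly cost under the Division's October 2018 indirect rate of 0.85.
indirect_portion = 45.00 * 0.85          # 38.25, i.e., "about $38"
total = loaded_hourly_cost(45.00, 0.85)  # 83.25, i.e., "about $83"
```

The exact figures ($38.25 and $83.25) round to the report’s “about $38” and “about $83.”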
As of fiscal year 2019, AOC established a steering committee to determine the Division’s indirect rate. This committee is composed of five members: AOC’s Chief Financial Officer, the Director of the Division, the superintendent of the House Office Buildings jurisdiction, a superintendent from another large jurisdiction, and a superintendent from a small jurisdiction. According to AOC officials, the Division’s indirect rate is now based on projected costs and projects for the current fiscal year, and this rate will be monitored and may be adjusted throughout the year to address potential gaps or overages in funding for the Division’s annual indirect costs. Appendix III: Architect of the Capitol’s Standard Temporary Appointment Letter Appendix IV: Architect of the Capitol’s Acknowledgement Form for the Extension of Temporary Appointment Appendix V: Comments from the Architect of the Capitol Appendix VI: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts above, key contributors to this report included Mary Crenshaw (Assistant Director); Maria Edelstein (Assistant Director); Melissa Bodeau; Sarah Cornetto; Patrick Dibattista; Camille M. Henley; Wesley A. Johnson; Efrain Magallan; Josh Ormond; Cheryl Peterson; Kelly Rubin; and Laurel Voloder.
Why GAO Did This Study AOC is organized into 10 jurisdictions that operate and maintain the buildings and grounds of the U.S. Capitol complex. For projects such as renovations and repairs, the jurisdictions can use their own employees, a contractor, or AOC's Construction Division, which is staffed with trade workers such as electricians and plumbers. Most of the Division's staff are employed on a temporary basis and paid with funds the Division receives from the jurisdictions for projects it executes on their behalf. In March 2017, AOC laid off 30 of the Division's approximately 190 temporary employees, citing a lack of work from the jurisdictions. GAO was asked to review the Division's operations. This report examines the jurisdictions' use of the Division and the Division's management of its workforce, among other issues. GAO analyzed information on projects the Division completed during fiscal years 2014 through 2018, reviewed AOC policies, visited the sites of six projects that are illustrative of the work the Division performs for the jurisdictions, and interviewed AOC staff, including officials from AOC's 10 jurisdictions and five of the employees AOC laid off in 2017. What GAO Found The Architect of the Capitol's (AOC) Construction Division (hereafter the Division) is designed to serve as a flexible option that the 10 operational jurisdictions that comprise AOC can use to meet their facility needs. In their efforts to manage the buildings and grounds of the U.S. Capitol complex, AOC's jurisdictions have used the Division for projects that vary widely in cost, complexity, and duration (see figure). For example, over the last 5 fiscal years, the jurisdictions have used the Division for projects ranging in cost from about $1,000 to about $10 million and in scope from hazardous material testing to multiyear lighting-system upgrades. 
Jurisdiction officials cited the Division's flexibility in adjusting to scope and other changes to keep a project on schedule as one of the reasons they may decide to use the Division instead of an outside contractor. While jurisdiction officials said they were generally satisfied with the Division's services, officials from two jurisdictions suggested that the Division consider changing how it operates—for example, by transferring some positions to its parent organization in an effort to lower what it charges the jurisdictions. According to AOC officials, making changes such as this one to the Division's operations could have varying effects, such as increasing how much funding AOC would require from other sources beyond the jurisdictions. The Division has taken steps to strategically manage its workforce to help ensure that it has the right number and composition of staff to meet the jurisdictions' needs but has not formalized the process it uses for collecting information on the jurisdictions' construction priorities each month. Because the Division's workload is driven by projects the jurisdictions hire it to perform, such things as changes in projects' priorities and work to be performed make determining future workforce needs challenging. The Division's approach to managing its workforce generally aligns with practices that GAO has previously identified that help agencies strategically manage their human capital. This approach includes having strategies to address gaps if the size and composition of an agency's workforce are not aligned with its workload requirements. However, because the Division has not formalized the process it uses to collect information each month on the jurisdictions' construction priorities, it may miss opportunities to obtain information that is critical to making informed decisions. 
The Division also cannot provide reasonable assurance to AOC management and Congress that it is taking the steps necessary to manage its workload and that it is basing its workforce projections on the most current information available. What GAO Recommends GAO recommends that AOC formalize the process the Division uses for collecting information on the jurisdictions' construction priorities each month, such as through developing written procedures. AOC concurred with GAO's recommendation.
Background NNSA’s Missions and Organization NNSA largely executes its missions at eight sites that comprise the nuclear security enterprise and that are managed by seven M&O contractors. These eight sites are three national security laboratories—Lawrence Livermore National Laboratory in California, Los Alamos National Laboratory in New Mexico, and Sandia National Laboratories in New Mexico and other locations; four nuclear weapons production plants—the Pantex Plant in Texas, the Y-12 National Security Complex in Tennessee, the Kansas City National Security Complex in Missouri, and tritium operations at DOE’s Savannah River Site in South Carolina; and the Nevada National Security Site, formerly known as the Nevada Test Site. As shown in figure 1, each of NNSA’s eight sites has specific responsibilities within the nuclear security enterprise. NNSA also executes portions of its missions across several other DOE sites, such as the Pacific Northwest National Laboratory in Washington and the Oak Ridge National Laboratory in Tennessee. At this time, NNSA’s common financial reporting efforts are focused on the eight sites, as required by the National Defense Authorization Act for Fiscal Year 2017. NNSA’s sites are owned by the federal government but managed and operated by M&O contractors. According to DOE, the use of M&O contracts is supported by an underlying principle: the federal government employs highly capable companies and educational institutions to manage and operate government-owned or -controlled scientific, engineering, and production facilities because these companies and educational institutions have greater flexibility than the government in bringing scientific and technical skills to bear. As we previously found, an M&O contract is characterized by, among other things, a close relationship between the government and the contractor for conducting work of a long-term and continuing nature. 
To support its missions, NNSA is organized into program offices that oversee the agency’s numerous programs. For example, the Office of Defense Programs oversees the B61-12 Life Extension Program, and the Office of Defense Nuclear Nonproliferation oversees the Nuclear Smuggling Detection and Deterrence Program. NNSA’s program offices are Defense Programs; Defense Nuclear Nonproliferation; Safety, Infrastructure, and Operations; Defense Nuclear Security; Counterterrorism and Counterproliferation; and Naval Reactors. Mission-related activities are primarily overseen by these program offices, which are responsible for integrating the activities across the multiple sites performing work. NNSA field offices, co-located at the sites, oversee the day-to-day activities of the contractors as well as mission support functions such as safety. Cost Accounting Requirements and Methods of Accounting for and Tracking Costs NNSA is subject to different cost accounting requirements than its seven M&O contractors. NNSA is required to follow Managerial Cost Accounting Standards. The principal purpose of Managerial Cost Accounting Standards is to determine the full cost of delivering a program or output to allow an organization to assess the reasonableness of this cost or to establish a baseline for comparison. The standards state that federal agencies should accumulate and report the costs of their activities on a regular basis for management information purposes. The standards also state that agencies should allow flexibility for agency managers to develop costing methods that are best suited to their operational environment. Such information is important to Congress and to NNSA managers as they make decisions about allocating federal resources, authorizing and modifying programs, and evaluating program performance. Separate standards—referred to as federal Cost Accounting Standards—govern how NNSA’s M&O contractors structure and account for their costs. 
Federal Cost Accounting Standards provide direction for the consistent and equitable distribution of a contractor’s costs to help federal agencies more accurately determine the actual costs of their contracts and the contractor’s costs associated with specific projects and programs. To comply with federal Cost Accounting Standards, M&O contractors classify costs as either direct or indirect when they allocate these costs to programs. Direct costs are assigned to the benefitting program or programs. Indirect costs—costs that cannot be assigned to a particular program, such as costs for administration and site support—are to be accumulated, or grouped, into indirect cost pools. The contractor is to estimate the amount of indirect costs to distribute to each program (accumulated into indirect cost pools) and make adjustments by the end of the fiscal year to reflect actual costs. The contractor is then to distribute these costs proportionally across all programs based on a rate in accordance with the contractor’s cost allocation model. The final program cost is the sum of the total direct costs plus the indirect costs distributed to the program. In implementing these allocation methods, federal Cost Accounting Standards provide contractors with flexibility regarding the extent to which they identify incurred costs directly with a specific program and how they collect similar costs into indirect cost pools and allocate them among programs. Therefore, different contractors may allocate similar costs differently because the contractors’ cost allocation models differ—that is, a cost classified as an indirect cost at one site may be classified as a direct cost at another. Because each contractor can allocate similar indirect costs differently and contractors may change the way they allocate indirect costs over time, it is difficult to compare contractors’ costs among sites and accurately calculate total program costs when work for a program is conducted at multiple sites. 
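As a rough sketch of the distribution method described above, the following pools indirect costs and allocates them to programs in proportion to each program’s direct costs; the program names, dollar amounts, and the choice of direct costs as the allocation base are all hypothetical, since each contractor’s actual cost allocation model differs:

```python
def allocate_program_costs(direct_costs, indirect_pool):
    """Distribute a pooled indirect cost across programs in proportion to
    each program's direct costs, then return each program's total cost
    (direct plus allocated indirect)."""
    total_direct = sum(direct_costs.values())
    return {
        program: direct + indirect_pool * (direct / total_direct)
        for program, direct in direct_costs.items()
    }

# Hypothetical site: two programs sharing a $250,000 indirect cost pool.
totals = allocate_program_costs(
    {"Program A": 600_000.0, "Program B": 400_000.0},
    indirect_pool=250_000.0,
)
# Program A bears 60 percent of the pool ($150,000), Program B 40 percent.
```

Because a different contractor might use a different allocation base (for example, direct labor hours rather than direct dollars), the same indirect pool could be split differently at another site, which is why cross-site cost comparisons are difficult.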
The seven NNSA M&O contractors and NNSA’s program offices account for and track costs differently. We previously found that NNSA’s M&O contractors have historically developed their own processes to manage and track costs for work at each site even when their work contributes to the same program. These processes have generally differed from the ones NNSA program offices have developed to describe the scope of their programs. These differences make it difficult for NNSA and others to track and compare costs for analogous activities across programs, contractors, and sites. For example, in May 2018, we found that NNSA’s work breakdown structure for the B61-12 Life Extension Program and its $7.6 billion cost estimate (at that time) did not include $648 million in activities that were undertaken by other NNSA programs, such as research and development, test and evaluation activities, and infrastructure elements. Leading practices for developing work breakdown structures state that a work breakdown structure should include all activities that contribute to a program’s end product, and should not treat contributing activities separately. DOE’s and NNSA’s financial management and accounting system—the Standard Accounting and Reporting System (STARS)—provides budget execution, financial accounting, and financial reporting capabilities for the department. STARS is also integrated with other agency systems for procurement, funds distribution, travel, and human resources. The M&O contractors’ financial systems must be able to directly provide cost reports to NNSA’s financial management system. The primary source of cost data contained in STARS comes from summary-level cost reports provided by M&O contractors, which they report for NNSA’s appropriations at the budget and reporting code level. Program offices access STARS financial data through the DOE Office of the Chief Financial Officer’s integrated data warehouse.
While financial data collected through STARS represent DOE’s official financial data, the data are not detailed and therefore may not satisfy the information needs of NNSA’s program offices. For example, STARS financial data do not differentiate labor costs from other programmatic costs, nor do they provide detailed information about the costs of activities that contribute to program costs. In addition, according to M&O contractor representatives, if one M&O contractor provides funding to another contractor, such as to conduct testing, NNSA does not have the ability in STARS to identify that funding was transferred. In the absence of an automated managerial cost accounting system that collects data from financial systems and relevant operating systems to consistently and uniformly produce useful cost information, NNSA’s program offices developed various systems, tools, and spreadsheets to track relevant cost information. Specifically, NNSA’s program offices separately collect cost information from M&O contractors that is more detailed than costs reported through STARS. Collecting these data requires M&O contractors to map, or “crosswalk,” their cost data to the work breakdown structures of one or more of NNSA’s program offices. Some program offices collect financial data through ad hoc data calls, rather than regular data calls. Some tools the program offices use include program management systems or spreadsheets designed to meet each program office’s programmatic, budgetary, and project requirements. For example, the Office of Defense Programs built the Enterprise Portfolio Analysis Tool in 2007 to capture financial data from the M&O contractors for its programs. Also, in 2007, officials from the Office of Defense Nuclear Nonproliferation developed a program management system designed to integrate and manage data such as scope, schedule, budget, and cost at the program level with greater detail than the data in STARS. 
The Office of Safety, Infrastructure, and Operations later adopted this system and called it the G2 program management system. M&O contractors use the G2 system to upload crosswalks of financial data for those program offices’ work breakdown structures after the costs were incurred. This process allows M&O contractors to report detailed financial data to the respective program offices every month. The process to track cost information is different for each program office and depends on the tool used and the information collected. However, for all program offices the process to track cost information is in addition to the financial reporting that M&O contractors provide for STARS (see fig. 2).

NNSA’s Approach to Implementing Common Financial Reporting

To implement common financial reporting and standardize financial reporting by the M&O contractors across programs and sites, NNSA is pursuing an approach in which the agency collects M&O contractors’ financial data in a common reporting framework using an NNSA-wide data reporting and analysis tool. M&O contractors produce crosswalks of their financial data and submit the data to NNSA using a data reporting and analysis tool called CostEX. NNSA then stores the reported financial data in the DOE Office of the Chief Financial Officer’s integrated data warehouse. The Office of Defense Programs has used this process to collect financial data from the M&O contractors for its programs since fiscal year 2017. NNSA implemented this process for the broader common financial reporting effort in fiscal year 2018. Figure 3 illustrates NNSA’s data management process for common financial reporting. To implement common financial reporting, NNSA established a common reporting framework using agreed-upon work breakdown structures and common cost elements and definitions.
However, in January 2019, we found that NNSA did not establish a common work breakdown structure for all of the participating program offices, although the agency had established 22 common cost elements and definitions. Specifically, the Offices of Defense Programs, Emergency Operations, Defense Nuclear Security, and Counterterrorism and Counterproliferation used NNSA’s common work breakdown structure, while the Offices of Safety, Infrastructure, and Operations and Defense Nuclear Nonproliferation used their own programmatic work breakdown structures. The M&O contractors crosswalk their internal financial data into a work breakdown structure for each of the participating program offices (either NNSA’s common work breakdown structure or a programmatic work breakdown structure) using common cost elements and definitions. The M&O contractors’ business systems capture their financial data at a more detailed level than is needed for common financial reporting. Each M&O contractor tracks financial data for its site based on how it manages the work using projects, tasks, and expenditure types. For example, M&O contractors collect time and attendance data from their employees based on the number of hours spent working on a project for the pay period. The M&O contractors aggregate this information across multiple employees to report on labor costs for a project. When the M&O contractors prepare their data for common financial reporting, site managers identify the component(s) of the applicable work breakdown structure and cost elements with which the project aligns and crosswalk their financial data to the NNSA structure using professional judgment. Figure 4 shows an example of how an M&O contractor crosswalks its financial data into an NNSA work breakdown structure in CostEX. 
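A contractor crosswalk of the kind the report describes can be sketched as a mapping from site-level project charges to NNSA work breakdown structure components and common cost elements. The project identifiers, WBS codes, and cost elements below are invented for illustration; real crosswalks involve judgment calls about where each project belongs.

```python
# Hypothetical sketch of a contractor "crosswalk": site-level project
# charges are mapped to components of an NNSA work breakdown structure
# (WBS) and common cost elements, then aggregated for reporting.
from collections import defaultdict

# Mapping chosen by site program managers using professional judgment
crosswalk = {
    ("Proj-1234", "Labor"):  ("WBS 1.2.3", "Direct Labor"),
    ("Proj-1234", "Travel"): ("WBS 1.2.3", "Travel"),
    ("Proj-5678", "Labor"):  ("WBS 2.1.1", "Direct Labor"),
}

def crosswalk_costs(site_charges):
    """Aggregate site charges into (WBS element, cost element) buckets."""
    out = defaultdict(float)
    for (project, expense_type), amount in site_charges.items():
        out[crosswalk[(project, expense_type)]] += amount
    return dict(out)

charges = {("Proj-1234", "Labor"): 120_000.0,
           ("Proj-1234", "Travel"): 5_000.0,
           ("Proj-5678", "Labor"): 80_000.0}
reported = crosswalk_costs(charges)
```

The sketch makes the report's concern concrete: the mapping table itself is contractor-specific, so two sites can route similar charges to different WBS components without either one being detectably "wrong."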
After the M&O contractors submit their financial data in CostEX, NNSA performs data quality and accuracy checks of the M&O contractors’ data, referred to as “data validation” and “data reconciliation.” NNSA performs data validation using CostEX, which automatically checks each row for data quality—such as confirming that the correct contractor is entering data for the site—and formatting based on 45 validation checks. CostEX identifies data that do not pass the validation check as errors and rejects them, and the M&O contractor corrects and resubmits the data until they pass the validation check. NNSA performs data reconciliation with STARS using CostEX at the budget and reporting code level. CostEX extracts STARS data for selected budget and reporting codes and compares it with the data the M&O contractors submitted for common financial reporting. CostEX identifies data that differ from the STARS data by more than $1 as an error and rejects the data, and the M&O contractor corrects and resubmits the data until they pass the reconciliation check. According to NNSA officials, it is important for the agency to perform these data validation and reconciliation checks prior to accepting the M&O contractors’ financial data to ensure data quality.

NNSA Made Progress toward Implementing Agency-Wide Common Financial Reporting but Faces Challenges in Fully Implementing the Effort

NNSA has made progress toward implementing common financial reporting across the nuclear security enterprise since our last report in January 2019, but it faces challenges in fully implementing the effort.
We identified seven steps related to NNSA’s efforts to implement common financial reporting in our January 2019 report: (1) identifying an approach and developing a tool to implement common financial reporting, (2) developing a policy, (3) establishing common cost elements and definitions, (4) identifying and reporting costs for programs of record and base capabilities, (5) implementing a common work breakdown structure, (6) collecting financial data from the M&O contractors, and (7) publishing and analyzing data. To date, the agency has completed three steps but has not yet completed four others, as shown in table 1. As required by the National Defense Authorization Act for Fiscal Year 2017, NNSA is to implement common financial reporting by December 23, 2020, to the extent practicable. NNSA’s progress to implement common financial reporting in these seven steps since our January 2019 report is described below:

Identify an approach and develop a tool to implement common financial reporting. NNSA identified an approach and developed a tool to implement common financial reporting prior to our January 2019 report. NNSA continues to use CostEX to collect financial data from the M&O contractors and stores the data in DOE’s integrated data warehouse.

Develop a policy. NNSA developed a policy for common financial reporting. NNSA began developing the policy in October 2016 and approved it in February 2019.

Establish common cost elements and definitions. NNSA established common cost elements and definitions prior to our January 2019 report. An NNSA official said NNSA established the cost elements and definitions based on data that the M&O contractors could readily provide from their business systems. In fiscal year 2019, NNSA used the established cost elements to collect the M&O contractors’ data and added a requirement for the contractors to report data on unpaid commitments.
NNSA officials are considering adding cost elements in the future, such as additional details on labor categories. NNSA is working with the M&O contractors to ensure they can provide the additional data.

Identify and report costs for programs of record and base capabilities. NNSA has not yet identified and reported costs for all programs of record or costs for base capabilities. The National Defense Authorization Act for Fiscal Year 2017 required NNSA to establish definitions and methodologies for identifying and reporting costs for programs of record and base capabilities as part of its efforts to implement common financial reporting. According to the program director for financial integration, NNSA establishes its programs of record in its congressional budget justification and other documents to align with agency appropriations, which include Weapons Activities, Defense Nuclear Nonproliferation, and Federal Salaries and Expenses. Through common financial reporting in fiscal year 2018, NNSA collected financial data from the M&O contractors for $8.9 billion of $13 billion from these appropriations. In May 2018, NNSA issued guidance that identified 25 base capabilities that the M&O contractors used to develop their site strategic plans. We reviewed the M&O contractors’ site strategic plans for 2018 and found that the contractors identified base capabilities for their sites, but did not include information about the costs to maintain each site’s base capabilities. NNSA is working to determine whether or how to collect information on the cost of base capabilities through the M&O contractor site strategic planning process in coordination with the common financial reporting effort. We will continue to monitor NNSA’s progress in addressing this requirement.

Implement a common work breakdown structure.
NNSA has not yet implemented a common work breakdown structure across the program offices in the nuclear security enterprise, but plans to assess the feasibility of implementing a common structure in fiscal year 2020. The National Defense Authorization Act for Fiscal Year 2017 requires NNSA to develop a common work breakdown structure as part of its efforts to implement common financial reporting. In January 2019, we found that NNSA decided not to pursue a common work breakdown structure. Rather, NNSA collected financial data from the M&O contractors using a common work breakdown structure for four program offices and used different, programmatic work breakdown structures for two other program offices. As we found in January 2019, these two offices did not want to change their work breakdown structures to the common structure. For example, the Office of Safety, Infrastructure, and Operations did not want to change its work breakdown structure because it uses the structure for scope, schedule, and risk management, in addition to budget and cost. We recommended that NNSA implement a common work breakdown structure across its participating program offices because without doing so, NNSA could not ensure that its efforts would result in the collection of reliable, enterprise-wide financial data that satisfies the needs of Congress and enables NNSA to report the total costs of its programs. At the time of that report, NNSA neither agreed nor disagreed with the recommendation. The agency stated that it would continue to use its current approach, while focusing on enhancing analysis and reporting to provide comparative data across the enterprise. Once this was completed, NNSA planned to assess the effectiveness of the approach and evaluate what changes, if any, were necessary to the work breakdown structures to meet the overarching objectives of common financial reporting. 
In May 2019, in response to our recommendation, NNSA changed its approach and decided to conduct an assessment in fiscal year 2020 of the feasibility of implementing a common work breakdown structure across all participating program offices. To do so, NNSA plans to collect M&O contractors’ financial data in fiscal year 2020 using both the common work breakdown structure for all program offices and—specifically for the Offices of Safety, Infrastructure, and Operations and Defense Nuclear Nonproliferation—the programmatic work breakdown structures while it assesses the feasibility of a common work breakdown structure. NNSA decided to take this approach to assess the potential benefits while mitigating potential risks to the program offices that use the data collected through the programmatic work breakdown structures to oversee their programs. NNSA officials said that reporting the same data using two different work breakdown structures will require additional resources for the M&O contractors to prepare their data submissions, which NNSA does not view as a long-term solution for common financial reporting. NNSA planned to collect data using these two approaches in parallel starting in November 2019 and make a decision on whether to implement a common work breakdown structure across the nuclear security enterprise in March 2020. NNSA plans to assess the feasibility of implementing a common work breakdown structure using criteria such as (1) whether using a common work breakdown structure reduces burden on the M&O contractors, (2) how much it will cost NNSA to update other program management systems, (3) whether NNSA can collect financial data quickly enough to meet the needs of the program offices, and (4) whether financial data collected using the common work breakdown structure provides program offices with comparable data to support existing program analysis.

Collect financial data from M&O contractors.
Since our January 2019 report, the M&O contractors submitted their financial data for fiscal years 2018 and 2019 for the participating program offices using CostEX. However, NNSA and the M&O contractors faced challenges in collecting accurate and consistent financial data for common financial reporting across the nuclear security enterprise. Specifically, NNSA faced challenges in (1) fully implementing its data validation and reconciliation process, (2) collecting financial data from each M&O contractor for all of the program offices, and (3) communicating information about changes in a timely manner. First, NNSA faced challenges fully implementing its data validation and reconciliation process for fiscal year 2018. NNSA designed CostEX to automatically validate the M&O contractors’ data to check data quality and formatting and perform data reconciliation with STARS. However, according to an NNSA official, for fiscal year 2018, the agency manually reconciled the M&O contractors’ fiscal year 2018 data with STARS to identify and fix issues with the process prior to automation. For example, an NNSA support contractor manually submitted and reconciled data for one M&O contractor that manages two sites because the M&O contractor submits combined data for the two sites into STARS, but NNSA collects financial data for common financial reporting by site. For the fiscal year 2019 data collection effort, NNSA officials said they corrected the submission issue and CostEX was able to automatically reconcile the M&O contractors’ data with STARS. Another M&O contractor’s fiscal year 2018 financial data did not reconcile each month with STARS. NNSA officials and representatives from the M&O contractor said the reconciliation issue was due to timing differences between when the contractor reported data into STARS and CostEX. 
Specifically, M&O contractor representatives for the site said that when NNSA is delayed in collecting data for common financial reporting in CostEX, the relationships between the data reported into STARS and CostEX may have changed, which can result in reconciliation errors. During that time, the site changed how it tracked some of the data, which led to differences in how the data were provided for STARS and common financial reporting, which caused the reconciliation errors. NNSA officials said they resolved the issue with the M&O contractor for fiscal year 2019 and completed data collection in October 2019. Second, NNSA faced challenges in collecting data from each M&O contractor for all of the participating program offices. Specifically, the Office of Defense Nuclear Nonproliferation made ongoing changes to its work breakdown structure templates throughout the fiscal year 2018 data collection effort. This resulted in challenges for the M&O contractors when reporting data for this program office. NNSA did not collect complete fiscal year 2018 financial data for this office, in part because one of the contractors had significant data validation and reconciliation errors, resulting in data that NNSA could not validate and reconcile. Third, NNSA faced challenges in communicating information about changes to the work breakdown structure in a timely manner to M&O contractors. Leading project management practices emphasize the importance of establishing and implementing change control processes, which include reviewing and approving all change requests, documenting the changes, and communicating the decisions. In fiscal years 2018 and 2019, not all NNSA programs consistently ensured that changes to the work breakdown structure were approved, documented, or communicated to the M&O contractors in a timely manner because NNSA had not established and implemented a work breakdown structure change control process.
NNSA established aspects of such a process, in which program offices submitted changes to the work breakdown structures to the financial integration team so the team could upload the changes into CostEX and notify the M&O contractors of the changes prior to their data submissions. However, according to officials with the financial integration team, the federal program managers did not always follow the process. Officials with the financial integration team said that in some instances, the sites’ program managers contacted the M&O contractors directly to request changes to their work breakdown structures. The financial integration team identified issues with the program offices’ work breakdown structures when the M&O contractors’ data could not be validated and reconciled. In such instances, the financial integration team contacted the program managers to request the updated work breakdown structures for CostEX. Further, the existing process does not include some aspects of change control processes that are consistent with leading practices.

Approving changes. Under the existing process, the financial integration team does not check whether changes that federal program managers submit to them have been reviewed and approved, at a minimum, by program office management prior to making changes to the work breakdown structures in CostEX. The program director for financial integration said that they defer to the program offices to ensure that program office management review and approve changes to the work breakdown structure before the program managers submit these changes to the financial integration team.

Documenting changes. NNSA officials said that not all program offices have tracked changes to their work breakdown structures over time. NNSA’s Office of Defense Programs has a process for tracking changes to its work breakdown structure, but that process—or a similar process—was not utilized consistently by all of NNSA’s other program offices.
If the program offices do not track the changes to their work breakdown structures over time, they cannot ensure the data are comparable across fiscal years. According to officials, NNSA built a tool in CostEX to track work breakdown structure changes across fiscal years. NNSA officials said the tool was tested at the end of fiscal year 2019 by the Office of Defense Programs. NNSA plans to test using the tool to track changes for the other program offices in fiscal year 2020.

Communicating decisions. NNSA did not always communicate changes to the work breakdown structure to the M&O contractors in a timely manner. Representatives from the seven M&O contractors stated that they encountered challenges in submitting their data in CostEX on multiple occasions throughout fiscal years 2018 and 2019 because federal program managers in some offices made frequent changes to the work breakdown structures that often were not communicated to the M&O contractors in a timely manner. When work breakdown structures change, representatives from the seven M&O contractors said they have to redo the crosswalk of their financial data to the new work breakdown structures before they submit the data—this takes time and additional resources and may result in delayed data submissions. Representatives from three of the M&O contractors said the frequency of changes to the work breakdown structures decreased for the fiscal year 2019 data collection effort, but representatives from six M&O contractors said they continued to encounter challenges when changes were made to the work breakdown structures. Without establishing and systematically implementing a work breakdown structure change control process, NNSA will not be able to verify that, at a minimum, program office management has approved changes to the work breakdown structure or that these changes have been documented, potentially leading to challenges in ensuring that the data are comparable over time.
Furthermore, NNSA cannot ensure that changes to the work breakdown structures are communicated to the M&O contractors in a timely manner, which results in contractors using additional time and resources to address validation or reconciliation errors.

Publish and analyze data. NNSA has published the M&O contractors’ financial data for fiscal years 2018 and 2019, but NNSA has not conducted agency-wide analysis of the data. The NNSA financial integration team has a website for common financial reporting from which the program offices can download financial data. However, an NNSA official stated that agency-wide analysis of the data was not feasible for fiscal years 2018 or 2019 because NNSA did not use a common work breakdown structure for all participating program offices. In addition, an NNSA official stated that the agency needs to collect at least 3 years of data to produce useful NNSA-wide findings. Some of the NNSA program offices have started to analyze the financial data collected through the common financial reporting effort. For example, the Office of Defense Programs is using financial data collected through common financial reporting for program evaluation and to make budgetary decisions. In addition, an NNSA official from the Office of Counterterrorism and Counterproliferation stated that the office has used financial data from common financial reporting to identify and address accounting issues, such as identifying previously unidentified unspent funds carried over from prior fiscal years and redirecting these funds to support program activities in fiscal year 2019. However, some of the program offices have not used the data collected through common financial reporting for various reasons. For example, officials from the Office of Safety, Infrastructure, and Operations stated that the fiscal year 2018 data were not useful for analysis because they were not collected in a timely manner.
NNSA officials said they completed data validation and reconciliation of the M&O contractors’ fiscal year 2018 financial data in February 2019—nearly halfway through the following fiscal year—making the data late and not useful for that office’s purposes. Additionally, officials from the Office of Defense Nuclear Security stated that they have not used the data collected through the common financial reporting effort because they want to ensure that the data are accurate and consistent before using it for decision-making.

NNSA’s Approach Provides Limited Assurance That Data Collected are Accurate and Consistent to Perform Agency-Wide Data Analysis

NNSA Has Not Verified Whether Contractors Accurately Crosswalk Financial Data to Work Breakdown Structures

As discussed previously, M&O contractors crosswalk their financial data into a reporting framework using work breakdown structures and common cost elements and definitions, and they submit their data to NNSA using CostEX. To help ensure the accuracy of the data, NNSA performs data quality checks of the M&O contractors’ financial data submitted using CostEX. If NNSA cannot validate and reconcile the submitted data using the agency’s processes, it rejects and returns the data to the M&O contractor to correct the errors. NNSA also provides the M&O contractors with error reports from CostEX that they can use to identify and correct errors. Each M&O contractor has established processes to check data quality prior to submitting the data to NNSA in CostEX. For example, representatives from all of the M&O contractors said they reviewed their data for missing information and errors before submitting the data into CostEX. In addition, all of the M&O contractors performed checks to compare their data submissions for common financial reporting with their STARS submissions before submitting the data into CostEX. After the M&O contractors complete their internal data quality checks, they submit their financial data into CostEX.
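A pre-submission check of the kind the contractors describe, comparing common financial reporting totals against STARS at the budget and reporting code level, might look like the following sketch. It applies the $1 reconciliation tolerance described earlier; the budget and reporting codes and dollar amounts are invented for illustration.

```python
# Sketch of the reconciliation rule described in the report: a
# submission fails if its total for a budget and reporting (B&R)
# code differs from the STARS figure by more than $1.
# All codes and amounts are illustrative.

TOLERANCE = 1.00  # dollars

def reconcile(submitted, stars):
    """Return the list of B&R codes whose totals fail the $1 tolerance."""
    errors = []
    for code, amount in submitted.items():
        if abs(amount - stars.get(code, 0.0)) > TOLERANCE:
            errors.append(code)
    return errors

submitted = {"BR-100": 1_000_000.50, "BR-200": 250_000.00}
stars     = {"BR-100": 1_000_000.00, "BR-200": 250_003.00}
failed = reconcile(submitted, stars)  # BR-100 differs by $0.50 (passes);
                                      # BR-200 differs by $3.00 (rejected)
```

Note that a tolerance check of this kind only confirms that totals match STARS; it says nothing about whether the underlying crosswalk allocated costs to the right WBS components, which is the gap the report goes on to discuss.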
At most sites, M&O contractor representatives said the way their site tracks financial data does not align with how NNSA requests the data be reported in the work breakdown structure and cost elements. Officials from NNSA’s Office of Cost Estimating and Program Evaluation said that because the M&O contractors do not track their financial data using NNSA work breakdown structures, the contractors have to make decisions using professional judgment as to how to crosswalk their project costs, raising concerns that each M&O contractor may make different decisions about how to allocate costs. The officials said this may result in data that are not accurate or comparable for conducting agency-wide analysis. We identified several limitations to the approach NNSA uses to collect common financial data that could affect the accuracy and consistency of the data:

NNSA’s data reconciliation process does not ensure M&O contractors’ financial data are accurate. M&O contractors identified potential issues with using STARS for reconciliation to ensure data accuracy. For example, two M&O contractors said that errors can sometimes occur in their monthly STARS reporting. Errors in STARS can be created when a number is mistyped or corrections are made to purchase card or time sheet information. Once the M&O contractor submits its data to STARS, errors cannot be corrected until at least the following month. However, because the common financial reporting data must reconcile with STARS, the M&O contractor has to submit financial data into CostEX that includes the error. The program director for financial integration said a process is in place for the M&O contractors to identify any issues with STARS reporting and correct their reported data in the future. More significantly, some M&O contractors said they make changes to their data before submitting it into CostEX to ensure that the data reconcile.
Specifically, representatives from two M&O contractors said they compare their financial data for common financial reporting with their STARS data submission. If data from the two systems do not match for small dollar amounts, the contractors manually make adjustments to the data for common financial reporting rather than making the corrections in their business systems. The representatives also said they do not notify NNSA officials of the manual changes. NNSA requires that financial data for common financial reporting reconcile with STARS. Specifically, NNSA rejects M&O contractor financial data that differs from the STARS data by more than $1. According to federal standards for internal control, management should define objectives clearly to enable the identification of risks and define risk tolerances. For the fiscal year 2018 data collection effort, NNSA documentation indicated that M&O contractors reported financial data for $8.9 billion of costs and reconciled the data with their STARS cost reporting to a total difference of $5.03. According to an NNSA official, M&O contractors reported financial data for $10.2 billion of costs and reconciled the data with STARS to a total difference of $8.97 for fiscal year 2019. However, NNSA has limited assurance that the financial data provided internally reconcile as required because the agency does not know the extent of changes that M&O contractors made to ensure the data reconcile with STARS or the potential effects of those changes on the accuracy of the data. Assessing the extent to which M&O contractors make manual changes to ensure reconciliation with STARS for common financial reporting and determining the effect of these changes could provide additional assurance that the financial data collected through common financial reporting are accurately reported.

M&O contractors crosswalk site projects and tasks to NNSA work breakdown structures, resulting in the potential for differences in how costs are allocated.
Each M&O contractor tracks financial data for its site based on how it manages the work using projects and tasks, as allowed by federal Cost Accounting Standards. When a site’s projects and tasks do not align with NNSA’s work breakdown structure, site program managers identify the component of the NNSA work breakdown structure with which the project and tasks best align and crosswalk their financial data to the NNSA structure using professional judgment. One site program manager said it is sometimes challenging to identify which of their internal projects and tasks aligns with the NNSA work breakdown structure, especially when internal projects have similar names to describe different project scopes. Another site program manager said the site’s projects and tasks closely align with the NNSA work breakdown structure approximately 30 to 40 percent of the time, and contractor representatives use professional judgment to crosswalk the remaining 60 to 70 percent of their projects and tasks. To create the crosswalk, site program managers consider which NNSA program the project mostly supports. It can be difficult to crosswalk the site data into NNSA’s work breakdown structure, especially for work that benefits multiple weapons programs. For example, a site program manager said that the site’s project to develop inert material for NNSA’s high explosives activities supports multiple weapons programs. The site tracks that work as one project, but NNSA’s work breakdown structure requires that the costs be reported across multiple programs. When M&O contractors make decisions to crosswalk their financial data using professional judgment, the contractors do not provide information to NNSA on how the costs are allocated. By verifying this information, NNSA could ensure that allocation decisions are made consistently across the nuclear security enterprise. M&O contractors provided different financial data for the same projects. 
M&O contractors continue to report financial data for some program offices into multiple systems, including the G2 program management system, WebPMIS, and spreadsheets. For fiscal year 2018, NNSA compared financial data that the M&O contractors reported, for two NNSA program offices, into the G2 program management system and the CostEX tool used for common financial reporting and found differences between the data reported for the same budget and reporting codes and levels of the work breakdown. The program director for financial integration said he worked with the program offices and identified the cause of the differences in the data. NNSA cannot ensure the accuracy of the data submitted for common financial reporting because NNSA does not have an internal process to verify whether M&O contractors crosswalk their financial data accurately from their business systems to the NNSA work breakdown structure. According to federal standards for internal control, management should use quality information to achieve the agency’s objectives. Under the financial integration policy, the program director for financial integration is responsible for executing a plan for NNSA to achieve enterprise-wide financial integration to collect standardized financial management data; increase transparency of financial accountability; and improve cost analysis, comparability, and reporting consistency among programs and M&O contractors. The program director for financial integration said that verifying whether the M&O contractors properly crosswalk their data to the work breakdown structure is an area in which the agency should improve its common financial reporting effort. NNSA officials stated that the common financial reporting effort does not have a process to validate financial data that are more detailed than STARS and indicated that until the agency has assurances the reported data are accurate, NNSA should not use that more detailed data for agency decision-making. 
By developing an internal process for NNSA to verify the M&O contractors’ crosswalks, the agency will have better assurance that the data collected through common financial reporting will produce accurate, enterprise-wide financial data that is comparable across the M&O contractors and that satisfies the needs of Congress and other stakeholders. Further, this would help address long-term issues with NNSA’s ability to report the total costs of its programs, in accordance with Managerial Cost Accounting Standards. NNSA Has Not Verified Whether Contractors Accurately Crosswalk Financial Data to Cost Elements As part of common financial reporting, M&O contractors crosswalk their financial data to NNSA’s cost elements. Cost elements capture discrete costs of a particular activity of work and include direct costs such as labor and equipment and indirect costs such as general and administrative costs. In March 2018, NNSA established 22 cost elements and definitions—including 10 indirect cost elements—that the M&O contractors use to report financial data. As we found in our January 2019 report, NNSA officials said this was a critical step toward implementing common financial reporting because without common cost elements, the agency was limited in its ability to report lower-level costs consistently across programs and sites. In addition, having the M&O contractors report financial data across common cost elements would allow NNSA to improve its management of programs across the enterprise. NNSA developed the cost elements and definitions in consultation with the M&O contractors based on the data they could provide because officials said it is important for the contractors to report accurate financial data using the NNSA cost elements. M&O contractors manage their sites’ financial data using expenditure types to track the costs of their projects.
These expenditure types capture similar costs as the cost elements, but at a more detailed level, and are specific to each individual M&O contractor based on how the contractor manages its expenses. M&O contractors have flexibility to determine how they structure their work and the expenditures they track in their financial systems consistent with Cost Accounting Standards. Based on our review of M&O contractor documents, M&O contractors varied significantly in the number of expenditure types they tracked. For example, the M&O contractor for one of the national laboratories tracked its financial data using over 900 expenditure types, while another national laboratory used around 50 expenditure types. NNSA officials said that the number of expenditure types at the sites varies based on the nature of the work performed at each site. Most of the M&O contractors cannot crosswalk their expenditures to certain NNSA cost elements because of how they track costs in their systems. Specifically, representatives from five of the M&O contractors said they cannot accurately crosswalk their indirect expenditure types to NNSA’s indirect cost elements because their systems do not capture the data in the way that NNSA wants these data reported. M&O contractors have discretion to classify which costs are considered indirect, and costs for similar activities can be allocated differently by each contractor. In fiscal year 2018, NNSA’s M&O contractors reported spending $3.5 billion on indirect activities. Generally, in cases in which the M&O contractors could not crosswalk their indirect costs to specific NNSA cost elements, representatives from one of the M&O contractors said they allocated their indirect costs to NNSA’s cost elements using percentages, while others said they reported data that did not adhere to the NNSA cost elements. 
Below are examples of situations in which M&O contractors were not able to accurately report expenditures into NNSA’s indirect cost elements: Representatives from one M&O contractor said they could not accurately report financial data for the general and administrative cost element and site support from other overhead cost elements because the site did not capture its data in that way. As a result, the M&O contractor allocated its indirect costs using formulas and composite rates, rather than reporting actual cost data to NNSA. Representatives from two M&O contractors said they could not accurately report financial data across the site support and infrastructure support cost elements because the structure of their indirect cost pool did not allow them to track those expenditures separately. As a result, representatives from one of the M&O contractors said they reported all of their infrastructure expenditures to the site support cost element. NNSA officials said they were aware of the M&O contractors’ issues with reporting their expenditure types using the NNSA cost elements. Although M&O contractors are required to provide financial data using NNSA’s cost elements, the program director for financial integration said he was aware that M&O contractors report some indirect costs for separate cost elements to a single cost element in CostEX, meaning that they do not accurately report some indirect costs based on NNSA’s definitions. Additionally, the financial integration team identified differences between indirect cost data collected from the M&O contractors for common financial reporting and data reported to another group in NNSA’s Office of Management and Budget. NNSA plans to conduct a review of the data reported through the two efforts to determine the cause of the differences. 
Officials from the Office of Safety, Infrastructure, and Operations stated that it is important that the common financial reporting effort is able to collect accurate information on M&O contractors’ costs related to infrastructure spending. NNSA is aware of the challenges its M&O contractors have with accurately reporting their expenditure types against the NNSA cost elements. However, NNSA cannot ensure that the agency collects accurate financial data because NNSA does not have a process to verify how the M&O contractors crosswalk their expenditure types to NNSA’s cost elements, consistent with the previously described information quality standard under the federal standards for internal control and NNSA’s financial integration policy. M&O contractors reporting data based on allocated—as opposed to actual—costs is not ideal because NNSA cannot ensure that each M&O contractor is consistently applying the allocation and because the data may not be standardized and comparable across the sites, which affects the quality of the data. By developing an internal process for NNSA to verify how the M&O contractors crosswalk their expenditure types, the agency could better ensure that the data collected through common financial reporting will produce accurate financial data across the nuclear security enterprise that satisfies the needs of Congress and other stakeholders. Further, this would help address long-term issues with NNSA’s ability to report the total costs of its programs. Conclusions NNSA continues to make progress toward implementing agency-wide common financial reporting. However, NNSA faces challenges in fully implementing the effort. For example, NNSA has not consistently ensured that changes to the work breakdown structure are approved, documented, and communicated to the M&O contractors in a timely manner because NNSA has not established and implemented a change control process for the changes. 
Without establishing and fully implementing a work breakdown structure change control process, NNSA will not be able to verify that the changes to the work breakdown structure are approved by program office management, at a minimum; documented and tracked for accurate data analysis and comparison over time; and communicated to the M&O contractors on a timely basis. NNSA’s approach to implementing common financial reporting relies on M&O contractors to crosswalk their internal financial data into a common reporting framework using a work breakdown structure and common cost elements and definitions, with certain quality checks to help ensure the accuracy of the data. However, NNSA has limited assurance that the financial data that the M&O contractors provide for common financial reporting are accurate because the agency does not know the extent of the changes the M&O contractors make to the data so that the data reconcile to the agency’s accounting system or the potential effects of these changes. By determining the extent of these changes and whether these changes affect the accuracy of the data, NNSA could have greater assurance that the financial data collected through common financial reporting are accurate. Additionally, NNSA cannot ensure that M&O contractors accurately crosswalk their financial data to either the NNSA work breakdown structure or the common cost elements because NNSA has not established processes to verify the information. By developing internal processes that would allow NNSA to verify how the M&O contractors crosswalk their data to the work breakdown structure and common cost elements, NNSA could better ensure that the data collected through common financial reporting will produce accurate enterprise-wide financial data that is comparable across the M&O contractors and that satisfies the needs of Congress and other stakeholders. Further, this would help to address long-term issues with NNSA’s ability to report the total costs of its programs. 
Recommendations for Executive Action We are making four recommendations to NNSA: The Program Director for Financial Integration, with input from NNSA’s Office of Management and Budget and respective program offices, should establish and implement a work breakdown structure change control process for common financial reporting that ensures changes are approved by program office management, at a minimum; documented; and communicated to M&O contractors on a timely basis. (Recommendation 1) The Program Director for Financial Integration should assess the extent to which M&O contractors make manual changes to their financial data to reconcile with STARS and determine whether it has an effect on the accuracy of the data collected for common financial reporting. (Recommendation 2) The Program Director for Financial Integration should develop and implement an internal process for NNSA to verify how the M&O contractors crosswalk financial data from their systems to the appropriate NNSA work breakdown structure to ensure the reported data are accurate and consistent. (Recommendation 3) The Program Director for Financial Integration should develop and implement an internal process for NNSA to verify that the M&O contractors are consistently applying common cost element definitions at their sites and across the nuclear security enterprise. (Recommendation 4) Agency Comments We provided a draft of this report to NNSA for comment. In its written comments, which are reproduced in appendix II, NNSA agreed with the report’s four recommendations and described actions it intends to take to address them. NNSA also provided technical comments that we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III. Appendix I: Status of GAO’s January 2019 Recommendations to the National Nuclear Security Administration on Its Common Financial Reporting Effort In our January 2019 report on the National Nuclear Security Administration’s (NNSA) efforts to implement common financial reporting, we made seven recommendations. Table 2 describes NNSA’s progress to implement these recommendations, as of December 2019. Appendix II: Comments from the National Nuclear Security Administration Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, key contributors to this report included Hilary Benedict (Assistant Director), Amanda K. Mullan (Analyst in Charge), Colette Alexander, Antoinette Capaccio, Jennifer Echard, Cindy Gilbert, Michael LaForge, Jason Lee, Holly Sasso, and Sheryl Stein.
Why GAO Did This Study NNSA has long faced challenges in determining and comparing the costs of its programs, which are principally performed by M&O contractors across eight sites. Congress needs this information to provide effective oversight and make budgetary decisions. The National Defense Authorization Act for Fiscal Year 2017 required NNSA to implement a common financial reporting system, to the extent practicable, across all sites by December 2020. NNSA's efforts began in 2016 and are ongoing. The Senate report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 includes a provision for GAO to periodically review NNSA's implementation of common financial reporting. This is GAO's second report on this issue. This report examines (1) the steps NNSA has taken to implement common financial reporting since GAO's January 2019 report, and (2) the extent to which NNSA's approach to data collection aligns with the purpose of common financial reporting, including collecting accurate and consistent data from its M&O contractors. GAO reviewed NNSA documents about implementing common financial reporting, including policy and briefing documents, and interviewed NNSA officials and M&O contractor representatives. What GAO Found The National Nuclear Security Administration (NNSA)—a separately organized agency within the Department of Energy (DOE)—is required to implement common financial reporting, to the extent practicable, across its sites to better understand the total costs of its programs. NNSA has taken additional steps to implement such reporting since January 2019 but faces challenges in fully implementing the effort (see table). For example, for fiscal years 2018 and 2019, NNSA used separate work breakdown structures—a method of dividing a project into successive levels of detail—to collect data for some offices. Without a common work breakdown structure, NNSA cannot ensure that it can collect reliable financial data across its sites. 
NNSA plans to assess the feasibility of implementing a common work breakdown structure, in response to GAO's January 2019 recommendation. In fiscal years 2018 and 2019, NNSA also faced challenges in collecting financial data from management and operating (M&O) contractors, including collecting complete data for all program offices. NNSA is working to resolve these issues. NNSA's approach to data collection provides limited assurance that the data collected for common financial reporting are accurate and consistent across the M&O contractors. At most sites, the M&O contractors track their financial data in a way that does not align with how NNSA requests the contractors report the data. M&O contractors use professional judgment to crosswalk, or map, the financial data from their business systems to the NNSA structures to report the data. NNSA's data quality checks on the M&O contractors' financial data focus on data formatting and ensuring the data match the agency's accounting system. NNSA does not have a process to verify whether the contractors accurately crosswalk their financial data. Under NNSA's financial integration policy, the program director for financial integration is to, among other things, execute a plan to improve cost analysis, comparability, and reporting consistency among programs and M&O contractors. By developing an internal process for NNSA to verify how the M&O contractors crosswalk their financial data to the work breakdown structures, NNSA will have better assurance that it is collecting accurate financial data that are comparable across the M&O contractors, that satisfy the needs of Congress and other stakeholders, and that address long-term issues with its ability to report the total costs of its programs. What GAO Recommends GAO is making four recommendations, including that NNSA implement an internal process to verify the M&O contractors' crosswalks of their financial data to NNSA's work breakdown structures for reporting information. 
NNSA agreed with the four recommendations.
Congress and Executive Branch Agencies Continue to Address Actions Identified over the Last 10 Years across the Federal Government, Resulting in Significant Benefits Congress and executive branch agencies have made consistent progress in addressing many of the actions we have identified since 2011, as shown in figure 2 and table 4. As of March 2020, Congress and executive branch agencies had fully or partially addressed nearly 80 percent of the actions we identified from 2011 to 2019. See GAO’s online Action Tracker for the status of all actions. Actions Taken by Congress and Executive Branch Agencies Led to Billions in Financial Benefits As a result of steps Congress and executive branch agencies have taken to address our open actions, we have identified approximately $429 billion in total financial benefits, including $166 billion identified since our last report. About $393 billion of the total benefits accrued between 2010 and 2019, while approximately $36 billion are projected to accrue in 2020 or later, as shown in figure 3. Since our first annual report in 2011, these benefits have contributed to missions across the federal government, as shown in figure 4. Table 5 highlights examples of results achieved in addressing actions we identified over the past 10 years. Other Benefits Resulting from Actions Taken by Congress and Executive Branch Agencies Our suggested actions, when implemented, often result in benefits such as strengthened program oversight; improvements in major government programs or agencies; more effective and equitable government; and increased international security. The following recent examples illustrate these types of benefits. Housing Assistance (2012-28): The federal government and state and local entities provide both rental assistance and affordable housing through a wide variety of programs. In February 2012, we found instances of fragmentation and overlap among federal rental assistance programs.
We recommended that the Secretary of the Department of Housing and Urban Development (HUD) work with states and localities to develop an approach for compiling and reporting on the collective performance of federal, state, and local rental assistance programs. In 2019, Executive Order 13878 established the White House Council on Eliminating Regulatory Barriers to Affordable Housing. The establishment of the council and the actions taken by HUD are positive steps for reaching out to states and localities and allowing Congress, decision makers, and stakeholders to evaluate collective performance data and provide mechanisms for setting priorities, allocating resources, and restructuring efforts, as needed, to achieve long-term housing goals. Military and Veterans Health Care (2012-15): The Departments of Defense (DOD) and Veterans Affairs (VA) play key roles in offering support to servicemembers and veterans through various programs and activities. In 2012, we found that the departments needed to improve integration across care coordination and case management programs to reduce duplication and better assist servicemembers, veterans, and their families. We recommended that the Secretaries of Defense and Veterans Affairs develop and implement a plan to strengthen functional integration across all DOD and VA care coordination and case management. The departments took several steps between 2012 and 2019 to address this, including establishing a Care Coordination Business Line within their joint Health Executive Committee. This function is intended to develop mechanisms for making joint policy decisions, involve the appropriate decision makers for timely implementation of policy, and ensure that outcomes and goals are identified and achieved, among other things. By taking these steps, DOD and VA strengthened their oversight and more closely integrated care coordination efforts.
Tax Policies and Enforcement (2015-17): Since 1980, partnerships’ and S corporations’ share of business receipts has increased greatly. These entities generally do not pay income taxes; instead, income or losses (hundreds of billions of dollars annually) flow through to partners and shareholders on their personal income tax returns. In 2014, we found that the full extent of partnership and S corporation income misreporting is unknown. Electronically filed (e-filed) tax returns provide the Internal Revenue Service (IRS) with digital information to improve enforcement operations and service to taxpayers. We recommended that Congress consider expanding the mandate that partnerships and corporations e-file their tax returns to cover a greater share of filed returns. In 2018, Congress passed and the President signed legislation lowering the e-file threshold for partnership and corporation returns. Requiring greater e-filing of tax return information will help IRS identify which partnership and corporation tax returns would be most productive to examine, and could reduce the number of compliant taxpayers selected for examination. Further, expanded e-filing will reduce IRS’s tax return processing costs. Coordination of Overseas Stabilization Efforts (2019-12): The United States has a national security interest in promoting stability in countries affected by violent conflict. We looked at how three federal agencies and an independent institute support conflict prevention, mitigation, and stabilization efforts, such as removing explosives hidden near homes. In 2019, we found that although these entities have worked together in Iraq, Nigeria, and Syria, they had not documented their agreement on key areas of collaboration, such as clarifying roles and responsibilities for stabilization efforts. We recommended that the Departments of State and Defense and the U.S. Agency for International Development should document their agreement to coordinate U.S. stabilization efforts.
In 2019, the agencies took several steps to address this, such as publishing a directive with the agreed-upon definition of stabilization, description of agency roles and responsibilities, and related policies and guidance. Articulating their agreement in formal documents should help strengthen the agencies’ coordination of U.S. stabilization efforts and mitigate the risks associated with fragmentation, overlap, and duplication. Action on Remaining and New Areas Could Yield Significant Additional Benefits Congress and executive branch agencies have made progress toward addressing the 1,076 actions we have identified since 2011. However, further efforts are needed to fully address the 467 actions that are partially addressed, not addressed, or new. We estimate that at least tens of billions of dollars in additional financial benefits could be realized should Congress and executive branch agencies fully address open actions, and other improvements can be achieved as well. Open Areas Directed to Congress and Executive Branch Agencies with Potential Financial Benefits In our 2011 to 2020 annual reports, we directed 110 actions to Congress, including the three new congressional actions we identified in 2020. Of the 110 actions, 58 (about 53 percent) remained open as of March 2020. Appendix V has a full list of all open congressional actions. We also directed 966 actions to executive branch agencies, including 165 new actions identified in 2020. As shown in figure 5, these actions span the government and are directed to dozens of federal agencies. Six of these agencies—DOD, IRS, OMB, VA, and the Departments of Health and Human Services (HHS) and Homeland Security—have more than 20 open actions. Of the 966 actions, 409 (42 percent) remained open as of March 2020.
A significant number of open actions are directed to four agencies that made up about 79 percent of federal outlays in fiscal year 2019—HHS, the Social Security Administration, the Department of the Treasury (Treasury), and DOD. Figure 6 highlights agencies with open actions as well as their fiscal year 2019 share of federal outlays. We identified potential financial benefits associated with many open areas with actions directed to Congress and the executive branch. These benefits range from millions of dollars to tens of billions of dollars. For example, DOD could potentially save hundreds of millions of dollars annually by accurately measuring and reducing excess funded, unfinished work at military depots. In another example, IRS should establish a formal collaborative mechanism with the Department of Labor to better manage fragmented efforts and enhance compliance for certain individual retirement accounts that engaged in prohibited transactions, and thereby potentially increase revenues by millions of dollars. Table 6 highlights examples of areas where additional action could potentially result in financial benefits of $1 billion or more. Open Areas with Other Benefits Directed to Congress and Executive Branch Agencies Table 7 shows selected areas where Congress and executive branch agencies can take action to achieve other benefits, such as increased public safety, and more effective delivery of services. This report was prepared under the coordination of Jessica Lucas-Judy, Director, Strategic Issues, who may be reached at (202) 512-9110 or lucasjudyj@gao.gov, and J. Christopher Mihm, Managing Director, Strategic Issues, who may be reached at (202) 512-6806 or mihmj@gao.gov. Specific questions about individual issues may be directed to the area contact listed at the end of each summary. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Appendix I: Objectives, Scope, and Methodology Section 21 of Public Law 111-139, enacted in February 2010, requires us to conduct routine investigations to identify federal programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide. This provision also requires us to report annually to Congress on our findings, including the cost of such duplication, with recommendations for consolidation and elimination to reduce duplication and specific rescissions (legislation canceling previously enacted budget authority) that Congress may wish to consider. Our objectives in this report are to (1) identify potentially significant areas of fragmentation, overlap, and duplication and opportunities for cost savings and enhanced revenues that exist across the federal government; (2) assess the extent to which Congress and executive branch agencies have addressed actions in our 2011 to 2019 annual reports; and (3) highlight examples of open actions directed to Congress or key executive branch agencies. For the purposes of our analysis, we used the term “fragmentation” to refer to circumstances in which more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need. We used the term “overlap” when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We considered “duplication” to occur when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. While fragmentation, overlap, and duplication are associated with a range of potential costs and benefits, we include them in this report only if there may be opportunities to improve how the government delivers these services.
This report presents 18 new areas of fragmentation, overlap, or duplication where greater efficiencies or effectiveness in providing government services may be achievable. The report also highlights 11 other new opportunities for potential cost savings or revenue enhancements. In addition to these 29 new areas, we identified 88 new actions related to 10 existing areas presented in our 2011 to 2019 annual reports. To identify what actions, if any, exist to address fragmentation, overlap, and duplication and take advantage of opportunities for cost savings and enhanced revenues, we reviewed and updated our prior work and recommendations to identify what additional actions Congress may wish to consider and agencies may need to take. For example, we used our prior work identifying leading practices that could help agencies address challenges associated with interagency coordination and collaboration and with evaluating performance and results in achieving efficiencies. To identify the potential financial and other benefits that might result from actions addressing fragmentation, overlap, or duplication, or taking advantage of other opportunities for cost savings and enhanced revenues, we collected and analyzed data on costs and potential savings to the extent they were available. Estimating the benefits that could result from addressing these actions was not possible in some cases because information about the extent and impact of fragmentation, overlap, and duplication among certain programs was not available. Further, the financial benefits that can be achieved from addressing fragmentation, overlap, or duplication or taking advantage of other opportunities for cost savings and enhanced revenues were not always quantifiable in advance of congressional and executive branch decision- making. 
In addition, the needed information was not readily available on, among other things, program performance, the level of funding devoted to duplicative programs, or the implementation costs and time frames that might be associated with program consolidations or terminations. Where possible, we used partial data and conservative assumptions to provide rough estimates of the magnitude of potential savings when more precise estimates were not possible. Appendix VI provides additional information on the federal programs or other activities related to the new areas of fragmentation, overlap, duplication, and cost savings or revenue enhancement discussed in this report, including budgetary information when available. We assessed the reliability of any computer-processed data that materially affected our findings, including cost savings and revenue enhancement estimates. The steps that we take to assess the reliability of data vary but are chosen to accomplish the auditing requirement that the data be sufficiently reliable given the purposes for which they are used in our products. We review published documentation about the data system and inspector general or other reviews of the data. We may interview agency or outside officials to better understand system controls and to assure ourselves that we understand how the data are produced and any limitations associated with the data. We may also electronically test the data to see whether values in the data conform to agency testimony and documentation regarding valid values, or we may compare data to source documents. In addition to these steps, we often compare data with other sources as a way to corroborate our findings. For each new area in this report, specific information on data reliability is located in the related products. We provided drafts of our new area summaries to the relevant agencies for their review and incorporated these comments as appropriate.
Assessing the Status of Previously Identified Actions To examine the extent to which Congress and executive branch agencies have made progress in implementing the 908 actions in the approximately 325 areas we have reported on in previous annual reports on fragmentation, overlap, and duplication, we reviewed relevant legislation and agency documents such as budgets, policies, strategic and implementation plans, guidance, and other information between April 2019 and March 2020. We also analyzed, to the extent possible, whether financial or other benefits have been attained, and included this information as appropriate (see discussion below on the methodology we used to estimate financial benefits). In addition, we discussed the implementation status of the actions with officials at the relevant agencies. Throughout this report, we present our counts as of March 2020 because that is when we received our last updates. The progress statements and updates are published on GAO’s Action Tracker. In assessing actions suggested for Congress, we applied the following criteria: “addressed” means relevant legislation has been enacted and addresses all aspects of the action needed; “partially addressed” means a relevant bill has passed a committee, the House of Representatives, or the Senate during the current congressional session, or relevant legislation has been enacted but only addressed part of the action needed; and “not addressed” means a bill may have been introduced but did not pass out of a committee, or no relevant legislation has been introduced. Actions suggested for Congress may also move to “addressed” or “partially addressed” with or without relevant legislation if an executive branch agency takes steps that address all or part of the action needed. At the beginning of a new congressional session, we reapply the criteria.
As a result, the status of an action may move from partially addressed to not addressed if relevant legislation is not reintroduced from the prior congressional session. In assessing actions suggested for the executive branch, we applied the following criteria: “addressed” means implementation of the action needed has been completed; “partially addressed” means the action needed is in development or started but not yet completed; and “not addressed” means the administration, the agencies, or both have made minimal or no progress toward implementing the action needed. Since 2011, we have categorized 80 actions as “other” and are no longer assessing these actions. We categorized 48 “other” actions as “consolidated or other.” In most cases, “consolidated or other” actions were replaced or subsumed by new actions based on additional audit work or other relevant information. We also categorized 32 of the “other” actions as “closed-not addressed.” Actions are generally “closed-not addressed” when the action is no longer relevant because of changing circumstances. Methodology for Generating Total Financial Benefits Estimates To calculate the total financial benefits resulting from actions already taken (addressed or partially addressed) and potential financial benefits from actions that are not fully addressed, we compiled available estimates for all of the actions from GAO’s Action Tracker, from 2011 through 2019, and from reports identified for inclusion in the 2020 annual report, and linked supporting documentation to those estimates. Each estimate was reviewed by one of our technical specialists to ensure that estimates were based on reasonably sound methodologies. The financial benefits estimates came from a variety of sources, including our analysis, Congressional Budget Office estimates, individual agencies, the Joint Committee on Taxation, and others. 
Because of differences in time frames, underlying assumptions, quality of data, and methodologies among these individual estimates, any attempt to generate a total will be associated with uncertainty that limits the precision of this calculation. As a result, our totals represent a rough estimate of financial benefits, rather than an exact total. For actions that have already been taken, individual estimates of realized financial benefits covered a range of time periods stretching from 2010 through 2029. To calculate the total amount of realized financial benefits that have already accrued and those that are expected to accrue, we separated those that accrued from 2010 through 2019 and those expected to accrue between 2020 and 2029. For individual estimates that span both periods, we assumed that financial benefits were distributed evenly over the period of the estimate. For each category, we summed the individual estimates to generate a total. To account for uncertainty and imprecision resulting from the differences in individual estimates, we present these realized savings to the nearest billion dollars, rounded down. There is a higher level of uncertainty for estimates of potential financial benefits that could accrue from actions not yet taken because these estimates are dependent on whether, how, and when agencies and Congress take our recommended actions, or due to lack of sufficiently detailed data to make reliable forecasts. As a result, many estimates of potential savings are notionally stated using terms like millions, tens of millions, or billions, to demonstrate a rough magnitude without providing a more precise estimate. Further, many of these estimates are not tied to specific time frames for the same reason. To calculate a total for potential savings, with a conservative approach, we used the minimum number associated with each term.
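The totaling approach described above (even per-year allocation of estimates that span both periods, conservative minimums for notionally stated potential savings, and downward rounding of totals) can be sketched in code. This is an illustrative reconstruction only, not GAO's actual calculation; the function names and the dollar minimums assigned to each notional term are assumptions made for the example.

```python
# Illustrative sketch of the totaling methodology described above.
# The mapping of notional terms to conservative minimum dollar values
# is an assumption for illustration, not GAO's published mapping.
NOTIONAL_MINIMUMS = {
    "millions": 1_000_000,
    "tens of millions": 10_000_000,
    "hundreds of millions": 100_000_000,
    "billions": 1_000_000_000,
}

def allocate_evenly(total, start_year, end_year, cutoff=2019):
    """Distribute an estimate evenly per year, then split it into the
    portion realized through the cutoff year and the portion expected
    to accrue after it."""
    years = end_year - start_year + 1
    per_year = total / years
    realized_years = max(0, min(end_year, cutoff) - start_year + 1)
    realized = per_year * realized_years
    return realized, total - realized

def round_down(amount, unit):
    """Round a total down to the nearest unit (e.g., $1 billion for
    realized savings, $10 billion for potential savings)."""
    return (int(amount) // unit) * unit

# Example: a $5 billion estimate covering 2015 through 2024 allocates
# $0.5 billion per year; five years fall in the realized period.
realized, expected = allocate_evenly(5_000_000_000, 2015, 2024)
assert realized == 2_500_000_000   # 2015-2019
assert expected == 2_500_000_000   # 2020-2024

# Potential savings stated only as "tens of millions" enter the total
# at the conservative minimum for that term.
potential_total = NOTIONAL_MINIMUMS["tens of millions"] * 3
print(round_down(potential_total, 10_000_000))  # prints 30000000
```

The downward rounding and minimum-value conventions mean the reported totals understate, rather than overstate, the benefits, which is consistent with the conservative approach the appendix describes.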
To account for the increased uncertainty of potential estimates and the imprecision resulting from differences among individual estimates, we calculated potential financial benefits to the nearest $10 billion, rounded down, and presented our results using a notional term. This report is based upon work we previously conducted in accordance with generally accepted government auditing standards. Generally accepted government auditing standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: New Areas in Which GAO Has Identified Fragmentation, Overlap, or Duplication This appendix presents 18 new areas in which we found evidence of fragmentation, overlap, or duplication among federal government programs. 1. Army Small Business Engagement Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DOD for review and comment. In its response, DOD stated that Army Futures Command is taking the steps necessary to implement GAO’s recommendations, as reflected above. Related GAO Product Army Modernization: Army Futures Command Should Take Steps to Improve Small Business Engagement for Research and Development. GAO-19-511. Washington, D.C.: July 17, 2019. 2. DOD Privatization of Utility Services Agency Comments and GAO’s Evaluation GAO provided DOD with a draft of this report section for comment. DOD provided technical comments, which GAO incorporated as appropriate. Related GAO Product DOD Utilities Privatization: Improved Data Collection and Lessons Learned Archive Could Help Reduce Time to Award Contracts. GAO-20-104. Washington, D.C.: April 2, 2020. 3.
SBA’s Microloan Program Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to SBA for review and comment. SBA stated it plans to continue to explore opportunities for collaboration with USDA and Treasury. SBA also provided technical comments, which GAO incorporated as appropriate. Related GAO Product SBA Microloan Program: Opportunities Exist to Strengthen Program Performance Measurement, Collaboration, and Reporting. GAO-20-49. Washington, D.C.: November 19, 2019. 4. Bank Secrecy Act Implementation Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to FinCEN for review and comment. In its comments, FinCEN continued to disagree with the recommendations and stated that no futures industry association had applied for BSA advisory group membership and that it advised CFTC staff on the areas that the National Futures Association should include as part of a request for direct BSA data access. GAO maintains that the recommendations are both valid, believes that FinCEN advising CFTC is a good first step, and will continue to monitor the implementation of these recommendations. Related GAO Product Bank Secrecy Act: Agencies and Financial Institutions Share Information but Metrics and Feedback Not Regularly Provided. GAO-19-582. Washington, D.C.: August 27, 2019. 5. DATA Act Data Governance Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to OMB for review and comment. OMB did not provide comments on this report section. Related GAO Products DATA Act: Quality of Data Submissions Has Improved but Further Action Is Needed to Disclose Known Data Limitations. GAO-20-75. Washington, D.C.: November 8, 2019. DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016. DATA Act: Progress Made in Initial Implementation but Challenges Must be Addressed as Efforts Proceed. GAO-15-752T. 
Washington, D.C.: July 29, 2015. 6. Federal Agencies’ Evidence-Building Activities Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to the Office of Management and Budget (OMB), CNCS, Education, HHS, DOL, and USAID for review and comment. In its response, HHS provided documentation in February 2020 about the actions it plans to take—as part of its implementation of the Foundations for Evidence-Based Policymaking Act—to address the two recommendations directed to it. GAO will monitor HHS’s actions, which GAO believes would likely address its recommendations, if effectively implemented. CNCS and DOL informed GAO they had no comments on this report section. USAID provided technical comments, which GAO incorporated, as appropriate. OMB and Education did not provide comments. Related GAO Product Evidence-Based Policymaking: Selected Agencies Coordinate Activities, but Could Enhance Collaboration. GAO-20-119. Washington, D.C.: December 4, 2019. 7. Individual Retirement Accounts Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to IRS and DOL for review and comment. In their March 2020 responses, IRS and DOL stated that they agreed to formalize collaboration on IRA prohibited transactions. The new information sharing process will be documented in forthcoming DOL procedures. In addition, IRS provided technical comments, which GAO incorporated as appropriate. Related GAO Products Individual Retirement Accounts: IRS Could Better Inform Taxpayers About and Detect Noncompliance Related to Unconventional Assets. GAO-20-210. Washington, D.C.: January 27, 2020. Individual Retirement Accounts: Formalizing Labor’s and IRS’s Collaborative Efforts Could Strengthen Oversight of Prohibited Transactions. GAO-19-495. Washington, D.C.: June 7, 2019. 8. IRS Third Party Cybersecurity Practices Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to IRS for review and comment.
IRS did not provide comments on this report section. Related GAO Product Taxpayer Information: IRS Needs to Improve Oversight of Third Party Cybersecurity Practices. GAO-19-340. Washington, D.C.: May 9, 2019. 9. Tax-Exempt Entities Compliance Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to IRS for review and comment. In February 2020, IRS sent GAO a response saying the agency is working to eliminate conditions that inhibit the agency’s ability to identify abusive tax schemes by evaluating existing database project codes to link data across audit divisions and improve the analysis of data monitoring and mining. Related GAO Product Tax-Law Enforcement: IRS Could Better Leverage Existing Data to Identify Abusive Schemes Involving Tax-Exempt Entities. GAO-19-491. Washington, D.C.: September 5, 2019. 10. Public Health and Medical Emergency Response Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to the Department of Health and Human Services (HHS) for review and comment. HHS commented that ASPR will explore funding opportunities to support an exercise of its federal patient movement framework with its support agencies. In addition, HHS officials stated that ASPR would continue to support interagency liaison officers to provide updates on available resources. While GAO agrees that HHS should continue this practice, the misalignment GAO identified underscores that this was not adequate during the response to Hurricanes Irma and Maria in the U.S. Virgin Islands and Puerto Rico. Moreover, ASPR officials acknowledged that more needs to be done to better understand the resources available. Finally, HHS commented that ASPR has implemented air transportation contracts to begin decreasing its reliance on DOD. GAO will continue to monitor the implementation of these recommendations. Related GAO Product Disaster Response: HHS Should Address Deficiencies Highlighted by Recent Hurricanes in the U.S. 
Virgin Islands and Puerto Rico. GAO-19-592. Washington, D.C.: September 20, 2019. 11. VA Long-Term Care Fragmentation Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to VA for review and comment. VA provided technical comments, which GAO incorporated as appropriate. Related GAO Product VA Health Care: Veterans’ Use of Long-Term Care Is Increasing, and VA Faces Challenges in Meeting the Demand. GAO-20-284. Washington, D.C.: February 19, 2020. 12. Coast Guard Specialized Forces Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to Coast Guard, through DHS, for review and comment. The Coast Guard, through DHS, provided technical comments, which GAO incorporated as appropriate. The Coast Guard did not agree with the recommendation in its November 2019 response to GAO’s draft report. At that time, DHS further stated that GAO’s conclusions illustrate a fundamental misunderstanding of the corresponding missions of Specialized Forces units. GAO continues to maintain that overlapping capabilities among units could indicate inefficiencies in how units are used as well as missed opportunities for use in others. In its technical comments provided in March 2020, the Coast Guard indicated that as of February 2020 it had not conducted the analysis necessary to fully identify potential overlap among the units. The Coast Guard stated that it is planning to begin analyzing the units this fiscal year. In line with GAO’s recommendation to analyze potential overlap in capabilities, the Coast Guard should include the cost savings of shutting down a unit from each Specialized Force type and explain the impacts. Related GAO Product Coast Guard: Assessing Deployable Specialized Forces’ Workforce Needs Could Improve Efficiency and Reduce Potential Overlap or Gaps in Capabilities. GAO-20-33. Washington, D.C.: November 21, 2019. 13.
DHS’s Processes for Apprehended Families Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DHS for review and comment. DHS provided technical comments, which GAO incorporated as appropriate. Related GAO Product Southwest Border: Actions Needed to Address Fragmentation in DHS’s Processes for Apprehended Family Members. GAO-20-274. Washington, D.C.: February 19, 2020. 14. National Strategy for Transportation Security Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DHS for review and comment. TSA provided technical comments, which GAO incorporated as appropriate. Related GAO Product Transportation Security: DHS Should Communicate the National Strategy’s Alignment with Related Strategies to Guide Federal Efforts. GAO-20-88. Washington, D.C.: November 19, 2019. 15. Surface Transportation Security Training Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DHS for review and comment. DHS officials said DHS has taken initial actions to address GAO’s recommendation, including updating the related Standard Operating Procedure. GAO believes this is a good first step and will continue to monitor the implementation of this recommendation. Related GAO Product Surface Transportation: TSA Should Improve Coordination Procedures for Its Security Training Program. GAO-20-185. Washington, D.C.: November 20, 2019. 16. U.S. Assistance to Central America Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to State for review and comment. In its February 2020 response, State did not comment specifically on whether it agreed with GAO’s revised recommendation. However, it reiterated its disagreement with aspects of the underlying GAO report’s objectives, scope, and methodology. GAO addressed this disagreement in detail in its report and maintains that GAO’s approach provided a reliable and reasonably comprehensive review of the results of U.S.
assistance to the Northern Triangle toward achieving key U.S. objectives set forth in the Strategy. Related GAO Product U.S. Assistance to Central America: Department of State Should Establish a Comprehensive Plan to Assess Progress toward Prosperity, Governance, and Security. GAO-19-590. Washington, D.C.: September 26, 2019. 17. Public Access to Federally Funded Research Results Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to the Departments of Defense, Energy, Commerce (for the National Oceanic and Atmospheric Administration), and Health and Human Services (for the National Institutes of Health), as well as to the National Science Foundation and OSTP for review and comment. The National Institutes of Health and the National Science Foundation provided technical comments, which GAO incorporated as appropriate. OSTP said it had no further comments, and the National Oceanic and Atmospheric Administration, Defense, and Energy did not provide comments. Related GAO Product Federal Research: Additional Actions Needed to Improve Public Access to Research Results. GAO-20-81. Washington, D.C.: November 21, 2019. 18. USDA’s Nutrition Education Efforts Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to USDA for review and comment. USDA officials took issue with the characterization of their nutrition education efforts as fragmented, stating that coordination must consider the legislative authority each program has to deliver nutrition education to meet the needs of program target populations and audiences. GAO agrees that a consideration of each program’s legislative authority is important. However, GAO believes that USDA could address the fragmentation GAO identified, which refers to the involvement of multiple USDA agencies and programs in administering the department’s nutrition education efforts, consistent with a consideration of program authority.
USDA officials continue to agree that the department needs to improve coordination of its nutrition education efforts. USDA officials described initial actions the department has taken to address GAO’s recommendation, including establishing a nutrition education working group that represents agencies across the department and planning an intradepartmental workshop that will include a focus on nutrition education. In addition, USDA issued the USDA Science Blueprint to outline the department’s nutrition science implementation strategies and nutrition and health promotion objectives. GAO will continue to monitor implementation of this recommendation. Further, GAO will monitor the role of the nutrition education working group going forward and consider the extent to which it provides cross-department leadership for USDA’s nutrition education efforts. Related GAO Product Nutrition Education: USDA Actions Needed to Assess Effectiveness, Coordinate Programs, and Leverage Expertise. GAO-19-572. Washington, D.C.: July 25, 2019. Appendix III: New Areas in Which GAO Has Identified Other Cost Savings or Revenue Enhancement Opportunities This appendix summarizes 11 new areas for Congress or executive branch agencies to consider taking action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. 19. Defense Agencies and DOD Field Activities Reform Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DOD for review and comment. DOD commented that the department will continue to work on improving its monitoring and evaluation of its efficiency and reform initiatives. Related GAO Product DOD Needs to Address Inefficiencies and Implement Reform across Its Defense Agencies and DOD Field Activities. GAO-18-592. Washington, D.C.: September 6, 2018. 20.
DOD Maintenance Depot Funding Agency Comments and GAO’s Evaluation GAO provided a draft of this section to DOD for review and comment. DOD did not provide comments on this report section. Related GAO Product Depot Maintenance: DOD Should Adopt a Metric That Provides Quality Information on Funded Unfinished Work. GAO-19-452. Washington, D.C.: July 26, 2019. 21. Ginnie Mae’s Mortgage-Backed Securities Program Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to Ginnie Mae for review and comment. Ginnie Mae did not have any comments on the draft but noted that it is working diligently on the recommendations. Related GAO Product Ginnie Mae: Risk Management and Staffing-Related Challenges Need to Be Addressed. GAO-19-191. Washington, D.C.: April 3, 2019. 22. IRS Tax Debt Collection Contracts Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to IRS for review and comment. IRS’s technical comments were incorporated above. Related GAO Product Tax Debt Collection Contracts: IRS Analysis Could Help Improve Program Results and Better Protect Taxpayers. GAO-19-193. Washington, D.C.: March 29, 2019. 23. Virtual Currency Tax Information Reporting Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to IRS for review and comment. In its response, IRS stated that it is working with Treasury on guidance to address third-party reporting on certain taxable transactions involving virtual currency. GAO will review this guidance when it is available. Related GAO Product Virtual Currencies: Additional Information Reporting and Clarified Guidance Could Improve Tax Compliance. GAO-20-188. Washington, D.C: February 12, 2020. 24. Medicaid Provider Enrollment Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to CMS for review and comment. In its written comments, CMS provided an update on its actions to address the first recommendation, which GAO incorporated.
CMS did not provide information on the second recommendation. GAO will continue to monitor CMS’s implementation of these recommendations. Related GAO Product Medicaid Providers: CMS Oversight Should Ensure State Implementation of Screening and Enrollment Requirements. GAO-20-8. Washington, D.C.: October 10, 2019. 25. VA Allocation of Health Care Funding Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to VA for review and comment. VA did not provide comments on this report section. Related GAO Product Veterans Health Care: VA Needs to Improve Its Allocation and Monitoring of Funding. GAO-19-670. Washington, D.C.: September 23, 2019. 26. Open Source Software Program Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DOD for review and comment. DOD explained that current policy allows and encourages the use of open source software where it meets agency needs. In addition, DOD stated that GAO’s recommendations focus on DOD's role as a producer, rather than as a consumer, of open source software. A DOD official explained that it is not reasonable to conclude that the projected savings will result from the implementation of GAO’s recommendations. However, DOD entities can consume open source software that other DOD entities produce. GAO maintains that this very consumption of open source software developed elsewhere in DOD could reduce development costs and potentially produce overall cost savings. Related GAO Product Information Technology: DOD Needs to Fully Implement Program for Piloting Open Source Software. GAO-19-457. Washington, D.C.: September 10, 2019. 27. DOD Oversight of Foreign Reimbursements Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to DOD for review and comment. DOD did not provide comments on this report section. 
Related GAO Product Defense Logistics Agreement: DOD Should Improve Oversight and Seek Payment from Foreign Partners for Thousands of Orders It Identifies as Overdue. GAO-20-309. Washington, D.C.: March 4, 2020. 28. Drawback Program Modernization Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to CBP for review and comment. CBP provided technical comments, which GAO incorporated, as appropriate. Related GAO Product Customs and Border Protection: Risk Management for Tariff Refunds Should Be Improved. GAO-20-182. Washington, D.C.: December 17, 2019. 29. Student Loan Income-Driven Repayment Plans Agency Comments and GAO’s Evaluation GAO provided a draft of this report section to Education for review and comment. Education provided technical comments, which GAO incorporated. Related GAO Product Federal Student Loans: Education Needs to Verify Borrowers’ Information for Income-Driven Repayment Plans. GAO-19-347. Washington, D.C.: June 25, 2019. Appendix IV: New Actions Added to Existing Areas We are adding 88 new actions based on GAO reports that fall within the scope of 10 existing areas identified in prior annual reports. Navy Shipbuilding In March 2020, GAO identified 12 actions for the Navy to improve its acquisition practices and ensure ships can be efficiently sustained, potentially saving billions of dollars. See the Action Tracker for more information. GAO reported in March 2020 on challenges identifying, evaluating, and mitigating ship sustainment risks during the acquisition process for every new warship class—such as aircraft carriers and submarines—that, if fixed, could save billions of dollars. GAO found 150 examples of systemic maintenance problems, such as failed engines and non-functional plumbing. To correct just 30 percent of these problems, GAO found that it would cost the Navy $4.2 billion.
Many of these problems could have been prevented with some attention to future maintenance concerns when designing and building the ships. GAO also found that the Navy underestimated the costs to maintain some ships by $130 billion. GAO made 11 recommendations to help the Navy focus on maintenance earlier and one suggestion to Congress to enhance oversight. New Actions: GAO recommended in March 2020 that the Department of Defense (DOD) improve its policy for setting sustainment requirements and the Navy then revisit its requirements to comply with the new policy. GAO also recommended that DOD and the Navy take steps to improve sustainment in the acquisition process. GAO also asked Congress to consider developing an oversight mechanism for evaluating shipbuilding programs’ sustainment cost estimate growth during the acquisition process. While GAO cannot precisely estimate the financial benefits from these actions, if the Navy could eliminate some of the sustainment problems and even 1 percent of the maintenance cost growth GAO identified, it could amount to billions of dollars in savings. Agency Comments and GAO’s Evaluation: DOD agreed with eight and partially agreed with three recommendations. GAO provided a draft of this report section to DOD for comment. DOD provided technical comments, which GAO incorporated as appropriate. In September 2019, GAO identified two new actions to improve the Department of the Interior’s valuations of offshore oil and gas resources, each of which could increase the amount of revenue collected by tens of millions of dollars annually. See the Action Tracker for more information. Production of oil and gas in federal waters generated about $90 billion in revenue from 2006 through 2018, including from industry bids for leasing rights.
However, GAO found that the Department of the Interior’s (Interior) Bureau of Ocean Energy Management (BOEM) undervalues federal offshore oil and gas resources, leading it to collect less bid revenue than it otherwise would. Specifically, the bureau (1) forecast unreasonably high levels of depreciation on lease value between lease sales, which lowered bid revenue by about $873 million from March 2000 through June 2018; and (2) adjusted some valuations downward to justify accepting bids, which lowered bid revenue by about $567 million over the same time period.

New Actions: The bureau Director should (1) enlist an independent third party to examine the extent to which the bureau's depreciation forecasts assure the receipt of fair market value, and make changes as appropriate; and (2) take steps to ensure that the bureau’s bid valuation process is not biased toward adjusting valuations downward. In its comments on the report, Interior disagreed with the first recommendation and partially agreed with the second, disagreeing with GAO’s characterization of BOEM’s delayed valuations and valuation process, respectively. GAO maintains that taking each of the recommended actions would better ensure a fair return on the sale of offshore oil and gas leases by better ensuring BOEM’s thresholds for accepting bids are sound and unbiased.

Agency Comments and GAO’s Evaluation: GAO provided a draft of this report section to Interior for review and comment. In its March 2020 response, Interior indicated that (1) although it disagrees with the first recommendation, it will conduct an in-house review and have it peer-reviewed; and (2) it now agrees with the second recommendation.

In April 2019, GAO identified two actions the U.S. Army Corps of Engineers and U.S. Coast Guard can take to improve fragmented interagency coordination of lessons learned following disasters.

and two actions have been partially addressed. See the Action Tracker for more information.

GAO found that the U.S.
Army Corps of Engineers (USACE) and U.S. Coast Guard (USCG) had fragmented approaches to identifying interagency challenges and lessons learned related to disaster contracting, resulting in these findings not being communicated to the Federal Emergency Management Agency’s (FEMA) Emergency Support Function Leadership Group—the group tasked with identifying interagency lessons learned following disasters. FEMA officials stated that it is up to each agency to elevate issues to the group; however, GAO found that neither USACE nor USCG had formal processes for doing so. Identifying and communicating lessons learned would help better manage fragmentation and enhance agencies’ abilities to address weaknesses in disaster response.

New Actions: To help address fragmentation and ensure that challenges are communicated across departments, GAO recommended in April 2019 that the Secretary of the Army should direct the Commanding General of USACE to, and that the Commandant of USCG should, establish formal processes to solicit input from officials directly involved in the agencies' response and recovery following a disaster and to share that input with the Emergency Support Function Leadership Group.

Agency Comments and GAO’s Evaluation: USACE and USCG concurred with GAO’s recommendations and planned to implement them this year. GAO provided a draft of this report section to USACE and USCG for review and comment. USCG said it is reviewing lessons learned and the after-action reporting process to update its policy. USACE indicated that it had updated its guidance to incorporate specific steps to communicate lessons learned with FEMA’s Emergency Support Function Leadership Group and that the guidance would be finalized in spring 2020. GAO will continue to monitor the implementation of these recommendations.
In March 2020, GAO identified a new action to improve the Department of Housing and Urban Development’s working capital fund and better position it to achieve over $1 million in previously identified potential annual savings.

Three actions have not been addressed, and one action has been partially addressed. See the Action Tracker for more information.

The Department of Housing and Urban Development’s (HUD) Working Capital Fund (WCF) provides a mechanism to centralize and fund federal shared services used across offices and agencies within HUD. One of the WCF’s goals is to support the efficient delivery of goods and services. GAO found that HUD does not assess the results of the WCF’s business process analyses, which are used to identify opportunities for efficiencies. For example, these analyses identified actionable ways to reduce high volumes of transactions for certain services, such as calls to help desks to manually reset passwords, which contribute to increased costs. Assessing the results of these analyses would help HUD better understand how the WCF’s efforts contribute to its goal.

New Action: GAO recommended that the Secretary of HUD, in conjunction with the Office of the Chief Financial Officer, should ensure that the results of the business process analyses are assessed to better determine how these analyses contribute to the WCF’s goal of efficient delivery of goods and services. While GAO cannot estimate the potential savings that would result, taking this action could help the WCF achieve over $1 million in potential annual savings already identified by WCF recommendations and identify additional potential savings.

Agency Comments and GAO’s Evaluation: GAO provided a draft of this report section to HUD for review and comment. HUD agreed and said it would address this recommendation in 2020, including adding the results of the business process analyses to its performance measures.
In June 2019, GAO identified a new action that could improve oversight of disaster relief funds and address long-standing problems of improper payments, which could result in significant cost savings.

consolidated, one action has been partially addressed, and two actions have not been addressed. See the Action Tracker for more information.

Agencies must distribute disaster relief aid quickly following hurricanes, wildfires, or other natural disasters, but quickly spending billions of dollars can increase the risk of improper payments. In June 2019, GAO reported that one of six selected agencies did not submit required internal control plans to Congress for funds appropriated following the 2017 disasters. Of the five agencies that did submit the required plans, four were not timely and all lacked necessary information, such as how they met Office of Management and Budget (OMB) guidance and federal internal control standards. These issues were caused, in part, because OMB lacked an effective strategy for helping agencies develop internal control plans for overseeing these funds.

New Action: GAO recommended in June 2019 that the Director of OMB, after consulting with key stakeholders, should develop a strategy for ensuring that agencies communicate sufficient and timely internal control plans for effective oversight of disaster relief funds.

Agency Comments and GAO’s Evaluation: OMB disagreed with this recommendation and stated that it does not believe the timeliness and sufficiency of internal control plans present material issues that warranted OMB action; however, GAO continues to believe that future internal control plans could serve as a critical transparency tool for controls over disaster funds. GAO provided a draft of this report section to OMB for review and comment. In its response, OMB continued to disagree that this recommendation is needed. GAO believes this action is needed for oversight of disaster funds.
In January 2020, GAO identified three new actions to help the Internal Revenue Service prevent refund fraud associated with identity theft. If implemented, these actions could potentially save millions of dollars.

See the Action Tracker for more information.

Business identity theft refund fraud (business IDT) occurs when thieves create, use, or try to use a business’ identifying information to claim a tax refund. Between January 2017 and August 2019, the Internal Revenue Service’s (IRS) fraud detection tools helped prevent $384 million from being paid to fraudsters. However, GAO found IRS could do more to combat business IDT. In January 2020, GAO found that, inconsistent with leading practices, IRS had not designated an entity to design and oversee business IDT fraud risk management efforts, conducted a fraud risk assessment, or developed a fraud risk profile to document the results of its risk assessment. Addressing these issues could help IRS identify and implement more effective controls to detect and prevent business IDT. While GAO cannot precisely estimate the financial benefits associated with these actions, even a 1 percent increase in fraud prevention could amount to millions in financial benefits.

New Actions: In January 2020, GAO recommended that, consistent with leading practices, IRS (1) designate a dedicated entity to oversee agency-wide business IDT efforts; (2) develop a fraud risk profile for business IDT; and (3) document and implement a strategy for addressing fraud risks identified in its fraud risk profile.

Agency Comments and GAO’s Evaluation: GAO provided a draft of the January 2020 report to IRS for review and comment. IRS generally agreed but did not provide details on the actions it plans to take to address these recommendations. IRS also did not provide comments on this report section.

In January 2020, GAO identified a new action to help the Department of Veterans Affairs assess duplication in its medical supply program.

partially addressed.
See the Action Tracker for more information.

In January 2020, GAO found that the Medical Surgical Prime Vendor program duplicates parts of the Department of Veterans Affairs (VA) Federal Supply Schedule program. VA spends billions of dollars annually on procurement of medical supplies to support care for veterans at its 170 medical centers but has not assessed whether its efforts are duplicative. VA procures medical supplies through both its own Medical Surgical Prime Vendor program and through the Federal Supply Schedule program—a government-wide program, parts of which the General Services Administration has long delegated to VA. However, VA has not assessed whether duplication across these programs is necessary or if efficiencies could be gained. GAO cannot estimate the savings that might be associated with this action because such savings will depend on whether, when, and how VA takes action.

New Action: GAO recommended that the Secretary of Veterans Affairs should take steps to assess duplication between VA’s Medical-Surgical Prime Vendor and Federal Supply Schedule programs to determine if this duplication is necessary or if efficiencies can be gained.

Agency Comments and GAO’s Evaluation: VA agreed with this recommendation. GAO provided a draft of this report section to VA for review and comment. VA provided technical comments, which GAO incorporated as appropriate.

In November 2019, GAO identified two new actions to help reduce the risk of duplicate funding in emergency relief assistance for transit agencies.

addressed, two actions have been partially addressed, and one action has not been addressed. See the Action Tracker for more information.

In 2017, Hurricanes Harvey, Irma, and Maria caused hundreds of millions of dollars in damage to U.S. public transit facilities.
Both the Federal Transit Administration (FTA) and the Federal Emergency Management Agency (FEMA) have the authority to provide disaster assistance funding to transit agencies, but FTA has primary responsibility if it receives an appropriation from Congress for its Public Transportation Emergency Relief program. FTA did not receive an appropriation until roughly 6 months after the first hurricane’s landfall, so transit agencies could initially apply to FEMA for assistance. In November 2019, GAO found that although FTA and FEMA coordinated efforts, both agencies still approved about $35,000 to one applicant for the same expenses in 2019. While the amount of funding in question was relatively small, without addressing the challenge of identifying transit expenses in FEMA applications, FTA and FEMA will continue to face the risk that both agencies will approve funding for the same expense in the future.

New Actions: GAO recommended in November 2019 that FTA and FEMA identify and develop controls, such as methods to more easily identify transit expenses within applications FEMA receives, to address the risk of duplicate funding.

Agency Comments and GAO’s Evaluation: The Department of Transportation (DOT) and the Department of Homeland Security (DHS) agreed with this recommendation and outlined steps they plan to take to address it. GAO provided a draft of this report section to DOT and DHS for review and comment. DOT said it did not have comments on this report section. DHS provided technical comments, which GAO incorporated as appropriate.

In April 2019, GAO identified 28 new actions to help agencies save millions of dollars through better planning and implementation of cloud-based computing solutions.

Two actions have been addressed. See the Action Tracker for more information, including applicable agencies.
Beginning in 2012, federal agencies were required to assess all IT investments for cloud computing services, and from 2014 to 2018, agencies reported $291 million in cloud-related savings. For example, agencies reported saving as much as $15 million migrating email systems to cloud services. However, GAO reported that 12 of the 16 agencies reviewed had not completed their assessments and that savings data were unavailable for 84 percent of the 488 cloud investments reviewed. Improving the assessment of investments for cloud services and tracking related savings can help agencies make better decisions regarding cloud acquisitions and potentially save millions of dollars from implementing cloud services.

New Actions: GAO made 28 recommendations in April 2019 to all 16 agencies, including that (1) 12 agencies should complete an assessment of all of their IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance; and (2) 16 agencies should ensure that their respective Chief Information Officers establish a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. Fourteen agencies agreed with all recommendations, the Department of the Treasury neither agreed nor disagreed, and the Department of Defense agreed with the recommendation on completing assessments but not with the recommendation on tracking savings.

Agency Comments and GAO’s Evaluation: GAO provided a draft of this report section to the 16 agencies for review and comment. One agency agreed, 13 agencies had no comments, and two neither agreed nor disagreed. Additionally, seven of the 16 agencies are taking actions to address GAO’s recommendations.

In April 2019, GAO identified 36 new actions to help federal agencies meet the Office of Management and Budget’s data center consolidation and optimization goals, potentially resulting in hundreds of millions of dollars in savings.
original two actions in this area. See the Action Tracker for more information.

Federal agencies operate thousands of data centers and, since 2010, have been required to close unneeded facilities and improve the performance of the remaining centers. This effort is currently known as the Data Center Optimization Initiative (DCOI). Since 2010, agencies have closed 6,250 centers and reported $4.2 billion in savings. However, only two of 24 agencies in GAO’s review planned to fully meet the Office of Management and Budget’s (OMB) September 2018 government-wide optimization goals, such as determining how much time data servers sit unused and how effectively data centers use power.

New Actions: GAO made 36 recommendations in April 2019 to 22 of the 24 agencies in its review, including that (1) 11 agencies should meet DCOI’s data center closure targets; (2) four agencies should meet DCOI’s data center-related cost savings targets and one should identify additional cost savings opportunities; and (3) 20 agencies should meet DCOI’s data center optimization metric targets. While GAO cannot precisely estimate the potential savings of taking these actions, combined estimates from agencies for similar prior actions exceeded $100 million per year, suggesting potential for hundreds of millions of dollars in additional savings over time. In June 2019, OMB significantly revised DCOI’s goals and performance measures, and GAO continues to monitor agencies’ progress against these new targets.

Agency Comments and GAO’s Evaluation: GAO provided a draft of this report section to 22 agencies for review and comment. Two agencies agreed, seven neither agreed nor disagreed, and 13 agencies had no comments. Additionally, two agencies have taken action to fully address the recommendations and the remaining 20 agencies are taking actions to address GAO’s recommendations.

In our 2011 to 2020 annual reports, we directed 110 actions to Congress, of which 58 remain open.
Thirty-five have been addressed and 17 were closed as not addressed or consolidated. Of the 58 open congressional actions, 15 are partially addressed and 43 are not addressed, as of March 2020 (see figure 10). The tables below have more information on the 58 open congressional actions. Our Action Tracker downloadable spreadsheet (available in XLSX or CSV formats) has information on all actions.

Appendix VI: Additional Information on Programs Identified

This appendix provides additional information on the federal programs or other activities related to the new areas of fragmentation, overlap, duplication, cost savings, or revenue enhancement discussed in this report, including budgetary information when available. “Programs” may include grants, initiatives, centers, loans, and other types of assistance or projects. This information can provide useful context for the issues we identified, but its limitations should be noted. It is not always possible to report budgetary information at the specific program or activity level because agency budgets are not organized by programs, but rather by appropriations accounts. In those instances, we reported the most reliable and available data for the most recent fiscal year or we did not report budgetary information. Further, because this report discusses various programs or activities, each table may report different types of budgetary information, such as obligations, collections, or outlays. Because of the limitations described above, the budgetary information reported in this appendix should not be totaled and does not represent potential cost savings for all programs.
Why GAO Did This Study

The federal government has made an unprecedented financial response to the COVID-19 pandemic. At the same time, opportunities exist for achieving billions of dollars in financial savings and improving the efficiency and effectiveness of a wide range of federal programs in other areas. Congress included a provision in statute for GAO to identify and report on federal programs, agencies, offices, and initiatives—either within departments or government-wide—that have duplicative goals or activities. GAO also identifies areas that are fragmented or overlapping and additional opportunities to achieve cost savings or enhance revenue collection. This report discusses the new areas identified in GAO’s 2020 annual report—the 10th report in this series; the progress made in addressing actions GAO identified in its 2011 to 2019 reports; and examples of open actions directed to Congress or executive branch agencies. To identify what actions exist to address these issues, GAO reviewed and updated prior work, including matters for congressional consideration and recommendations for executive action.

What GAO Found

GAO’s 2020 annual report identifies 168 new actions for Congress or executive branch agencies to improve the efficiency and effectiveness of government in 29 new mission areas and 10 existing areas. For example:

The Department of Defense could potentially save hundreds of millions of dollars annually by accurately measuring and reducing excess funded, unfinished work at military depots.

The Centers for Medicare & Medicaid Services could better ensure that states implement Medicaid provider screening and enrollment requirements, which could potentially save tens of millions of dollars annually.

The Government National Mortgage Association could enhance the efficiency and effectiveness of its operations and risk management and reduce costs or enhance federal revenue by tens of millions of dollars annually.
The Internal Revenue Service should establish a formal collaborative mechanism with the Department of Labor to better manage fragmented efforts and enhance compliance for certain individual retirement accounts that engaged in prohibited transactions, and thereby potentially increase revenues by millions of dollars.

Improved coordination and communication between the Department of Health and Human Services’ Office of the Assistant Secretary for Preparedness and Response and its emergency support agencies—including the Federal Emergency Management Agency and the Departments of Defense and Veterans Affairs—could help address fragmentation and ensure the effective provision of public health and medical services during a public health emergency.

The Department of Education should analyze data and use it to verify borrowers’ income and family size information on Income-Driven Repayment plans to safeguard the hundreds of billions of dollars in federal investment in student loans and potentially save more than $2 billion.

The Internal Revenue Service could increase coordination among its offices to better manage fragmented efforts to ensure the security of taxpayer information held by third-party providers.

GAO identified 88 new actions related to 10 existing areas presented in 2011 through 2019 annual reports. For example:

The Department of the Navy could achieve billions of dollars in cost savings by improving its acquisition practices and ensuring that ships can be efficiently sustained.

The Office of Management and Budget could improve oversight of disaster relief funds and address government-wide improper payments, which could result in significant cost savings.

The U.S. Army Corps of Engineers and the U.S. Coast Guard could better identify and communicate lessons learned in contracting following a disaster to improve fragmented interagency coordination.
Significant progress has been made in addressing many of the 908 actions that GAO identified from 2011 to 2019 to reduce costs, increase revenues, and improve agencies’ operating effectiveness. As of March 2020, Congress and executive branch agencies have fully or partially addressed 79 percent of all actions (721 of 908 actions)—57 percent (519 actions) fully addressed and 22 percent (202 actions) partially addressed. This has resulted in approximately $429 billion in financial benefits. About $393 billion of these benefits accrued between 2010 and 2019, and $36 billion are projected to accrue in future years. This is an increase of $166 billion from GAO’s 2019 annual report. These are rough estimates based on a variety of sources that considered different time periods and used different data sources, assumptions, and methodologies.

While Congress and executive branch agencies have made progress toward addressing actions that GAO has identified since 2011, further steps are needed. GAO estimates that tens of billions of additional dollars could be saved should Congress and executive branch agencies fully address the remaining 467 open actions, including the new ones identified in 2020. Addressing the remaining actions could also lead to other benefits, such as increased public safety and more effective delivery of services. For example:
Background

Student Visa and School Certification Process

Foreign students interested in studying in the United States must first be admitted to an SEVP-certified school or university before applying for a nonimmigrant visa at a U.S. embassy or consulate overseas to authorize travel to the United States. A visa holder must present himself or herself for inspection at a U.S. port of entry by an officer with DHS’s U.S. Customs and Border Protection to determine admissibility. Nonimmigrants, including foreign students, are permitted to enter the United States for an authorized period of stay.

Schools seeking to enroll foreign students on F and M visas must pay an application fee and petition for SEVP certification by submitting an electronic certification petition and supporting documentation to ICE through SEVIS. Among other things, SEVIS assists ICE in tracking and providing oversight of foreign students—while they are approved to study in SEVP-certified U.S. educational institutions—and their accompanying dependents. Figure 1 outlines the steps required for schools seeking to obtain and maintain SEVP certification and the process for foreign nationals to pursue a course of study in the United States.

More specifically, during the initial certification process, a school must provide ICE, specifically SEVP’s School Certification Unit (Certification Unit), with evidence of the school’s legitimacy (or bona fides) and eligibility.
Such evidence includes the following: proof of any requisite licensure or approval by an appropriate state-level licensing or approving agency; proof of accreditation by an accrediting agency recognized by the Department of Education, if accreditation is required or otherwise claimed; the DSO’s attestation statement that he or she is familiar, and intends to comply, with program rules and regulations for admission under, and maintenance and change of, nonimmigrant student status; confirmation by the school that it is eligible for certification, among other things (willful misstatements in a school certification petition may constitute perjury); and DSOs’ proof of U.S. citizenship or lawful permanent residency.

In addition, petitioning schools must generally submit a school catalog or written statement including certain information with respect to the qualifications of teaching staff, and attendance and grading policies, among other things. However, the requirement for a school catalog or written statement is not applicable to a public school or school system, a school accredited by a Department of Education–recognized accrediting agency, or a secondary school operated by or as part of such an accredited school. Moreover, an institution of higher education that is not a public educational institution or system, or not accredited by a recognized accrediting body, must provide evidence “in lieu of” meeting those criteria. Such evidence must show either that the school of higher learning confers recognized degrees upon its graduates or that its credits have been and are unconditionally accepted by at least three public or accredited institutions of higher education.

Schools nominate individuals to serve as DSOs, who act as liaisons between foreign students, the DSOs’ employing school, and federal government agencies.
DSOs support school compliance with record-keeping, reporting, and other requirements, and provide recommendations to foreign students regarding the maintenance of their immigration status. In addition to entering and maintaining complete information on students in SEVIS in a timely manner, DSOs are responsible for using SEVIS to submit their school’s certification petition and update the information, as necessary. To demonstrate eligibility, DSOs must, among other things, provide to ICE statements certifying their familiarity and intent to comply with the program rules and regulations relating to the requirements for nonimmigrant students’ admission, maintenance of status, and change of status, and the requirements for school approval. ICE’s regulations provide that willful misstatements in certification and recertification submissions may constitute perjury.

Once ICE has received a complete petition from a school seeking SEVP certification, staff from SEVP’s Field Representative Unit are to conduct a site visit to the school, including each instructional site foreign students will attend, to interview school officials and review the facilities. After receiving all necessary evidence and a site-visit report from the field representatives, ICE staff in the Certification Unit analyze the documentation, determine the school’s eligibility, and certify those schools that they determine meet all of the program’s requirements.

Further, DHS is required to conduct a review, every 2 years, of certified schools’ continued eligibility and compliance with the program’s requirements. To be eligible for recertification, an SEVP-certified school must demonstrate at the time of filing that it remains eligible for certification and has complied during its previous period of certification or recertification with record-keeping, retention, reporting, and other program requirements.
During the recertification process, the Certification Unit requires schools to submit the same type of evidence that was required for certification, including, among other things, proof of state licensing and accreditation and DSO attestation statements and citizenship documentation. The Certification Unit also evaluates how the school has ensured that its foreign-student records are accurate and in compliance with statutory record-keeping requirements. However, site visits are not required for recertification.

The Enhanced Border Security and Visa Entry Reform Act of 2002 states that a material failure of an SEVP-certified school to comply with the record-keeping and reporting requirements to receive foreign students shall result in the suspension for at least 1 year, or termination, of the school’s approval to receive such students. SEVP’s Analysis and Operations Center (Compliance Unit) conducts ongoing monitoring of SEVP-certified schools for compliance with these regulatory record-keeping and reporting requirements, as well as schools’ continued eligibility for certification. Under federal regulation, SEVP can deny an SEVP-certified school’s recertification petition or, subsequent to out-of-cycle review, withdraw certification if the school or its programs are no longer eligible for certification. Denial of recertification or withdrawal on notice as a result of out-of-cycle review may be for any valid and substantive reason, including failure to comply with record-keeping and reporting requirements, willful issuance by a DSO of a false statement, or not operating as a bona fide institution of learning, among other bases.

Fraud Risk-Management Leading Practices and Requirements

According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks.
Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that, as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level. In July 2015, we issued the Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt, as depicted in figure 2. The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires the Office of Management and Budget (OMB), in consultation with the Comptroller General of the United States, to establish guidelines for federal agencies to create controls to identify and assess fraud risks and design and implement antifraud control activities. The act further requires OMB to incorporate the leading practices from the Fraud Risk Framework in the guidelines. In July 2016, OMB published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. Further, the act requires federal agencies to submit to Congress a progress report each year for 3 consecutive years on the implementation of the controls established under OMB guidelines, among other things. 
ICE Has Strengthened Fraud Risk Management for SEVP but Has Not Fully Developed a Fraud Risk Profile or Employed Certain Data Tools That Can Help Guide Its Efforts

ICE Has Taken Steps to Enhance Fraud Risk Management

ICE developed a risk-assessment framework and other tools to assist in its efforts to manage fraud risks to SEVP. For example, in 2014, ICE began developing an SEVP Risk Assessment Model and Framework, which provides an overview of how SEVP identifies, assesses, responds to, and reports on identified internal and external risks to the program. Specifically, SEVP’s Risk Assessment Model and Framework—which was updated several times between 2014 and 2017—discusses categories of fraud risks to the program, including fraud associated with schools, DSOs, and students. Moreover, in 2014, ICE developed a Risk Assessment Tool for SEVP that uses data from SEVIS records to identify potential fraud and other noncompliance issues among certified schools. The tool prioritizes different risk indicators—such as the proportion of the school that consists of foreign students—and ranks schools by risk level. SEVP officials stated that schools identified as high risk receive additional administrative review by the Compliance Unit. According to SEVP officials and documentation we reviewed, ICE has continued to update and refine the tool since 2014 to improve its effectiveness in helping to identify program risks, including fraud risks. Through these and its oversight efforts, ICE has identified various fraud risks in SEVP; such risks may take various forms, including immigration benefit fraud, which involves the willful or knowing misrepresentation of material facts for the purpose of obtaining an immigration benefit, such as a nonimmigrant student status, without lawful entitlement.
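The approach the report attributes to the Risk Assessment Tool, weighting risk indicators and ranking schools by the resulting score, can be sketched as a simple scoring routine. The indicator names, weights, and values below are hypothetical illustrations, not ICE's actual model or data:

```python
# Hypothetical weights for a few illustrative indicators (not ICE's model).
WEIGHTS = {
    "foreign_student_share": 0.5,    # proportion of enrollment that is foreign
    "recent_material_changes": 0.3,  # e.g., ownership or location changes
    "data_entry_anomalies": 0.2,     # anomalies flagged in SEVIS records
}

def risk_score(indicators: dict) -> float:
    """Weighted sum of normalized (0-1) indicator values."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

def rank_schools(schools: dict) -> list:
    """Return school IDs ordered from highest to lowest risk score."""
    return sorted(schools, key=lambda s: risk_score(schools[s]), reverse=True)

# Invented example records; in practice the inputs would come from SEVIS data.
schools = {
    "school_a": {"foreign_student_share": 0.9, "recent_material_changes": 1.0},
    "school_b": {"foreign_student_share": 0.1, "data_entry_anomalies": 0.2},
}
ranking = rank_schools(schools)  # highest-risk schools first
```

Under this sketch, the highest-scoring schools would be the ones routed to the Compliance Unit for additional administrative review.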
According to ICE documentation we reviewed and officials we spoke to, the fraud risks to the program generally fall into four broad categories: schools, students, DSOs, and third-party brokers, who are individuals engaged in the fee- or commission-based recruitment of foreign students, among other activities. Figure 3 illustrates the types of fraud that may occur in these four categories during different stages of a certified school’s involvement in the program, as we identified in ICE documentation and through our interviews with ICE officials. For specific examples of fraud risks that ICE has identified in SEVP, see figure 4. ICE has also taken steps since 2012 to strengthen its fraud risk-management efforts in response to our prior recommendations. For example, in our 2012 report on SEVP risks, we found that, among other things, ICE did not have a process to assess risks in SEVP and did not consistently implement existing internal controls for determining school eligibility. To address this and other findings, we made eight recommendations to enhance ICE’s ability to assess program risks, prevent and detect school certification fraud, and improve the controls over SEVP. ICE took action that addressed these eight recommendations and has developed various tools designed to strengthen its fraud risk-management efforts (see app. I). Further, ICE has taken steps to improve collaboration and coordination to enhance fraud risk management between SEVP and CTCEU, the unit within ICE responsible for managing criminal investigations. More specifically, ICE has embedded agents within SEVP’s Compliance Unit, and these agents help provide law-enforcement expertise within the unit and act as liaisons with ICE agents located in the field to provide information and support ongoing criminal investigations. According to a senior ICE official with CTCEU, the embedded agents have helped streamline processes and provide expertise to aid administrative and investigative efforts.
Figure 5 shows the process for coordination between CTCEU and SEVP. Further, ICE officials with CTCEU stated they have acquired specialized software tools to manage fraud tips and to conduct open-source and related research on certified schools suspected of acting fraudulently. To help identify and prioritize leads, ICE officials stated that they use a software tool to efficiently review and prioritize tips received through ICE’s tip line, which gathers tips from the general public on suspicious or potential criminal activity. To aid investigations of schools, ICE explored the use of another specialized software tool to aid the review of online social media associated with schools or individuals, among other things. In addition, changes to SEVIS have aided ICE’s efforts to manage fraud risks in the program. In 2008, ICE initiated an effort to modernize SEVIS to address identified system vulnerabilities, such as the inability to capture detailed school data that would allow the detection of patterns and anomalies that could indicate fraud. Although SEVIS modernization is not yet complete, changes made in the system have helped to improve system usability and the ability to identify suspected fraud in the program, according to program officials. For example, system edit checks implemented in 2015 and 2016 to verify user-entered names and addresses have enhanced data quality by helping to identify and prevent likely data-entry errors. SEVP officials also stated that improved data quality can help make it easier to distinguish potential fraud from unintentional data-entry errors. According to ICE officials we spoke to and related documentation we reviewed, SEVIS modernization efforts may include additional functionality, such as the ability to create person-centric records for each student.
ICE Does Not Have Some Components of a Fraud Risk Profile Needed to Fully Assess and Manage Fraud Risks

Although ICE has developed a Risk Assessment Model and Framework and taken other action to improve fraud risk management in SEVP, ICE has not fully developed and implemented a “fraud risk profile” that would help guide its efforts. According to our Fraud Risk Framework, an effective antifraud entity tailors the approach for carrying out fraud risk assessments to its programs. This approach allows an agency to, among other things, develop a fraud risk profile that identifies the inherent fraud risks affecting the program; assesses the likelihood and effect of each type of inherent fraud risk; determines the agency’s tolerance for certain types or levels of fraud risks in the program; examines the suitability of existing controls for each fraud risk; and documents the program’s fraud risk profile. Effective managers of fraud risks use this profile to help decide how to allocate resources to respond to fraud risks. Further, Federal Internal Control Standards require managers to respond to identified risks. Appendix III provides additional information on the key elements in the fraud risk-assessment process, including the development of a fraud risk profile. Our assessment of SEVP’s Risk Assessment Model and Framework found that while it describes the program’s approach for managing fraud risks, it does not include all of the key elements of a fraud risk profile: First, SEVP’s Risk Assessment Model and Framework identifies three broad categories of inherent fraud risks that affect the program (those posed by schools, DSOs, and students) but does not include all risks that the program or its stakeholders have identified, such as the risk of third-party brokers. As noted previously, ICE agents and program officials identified brokers as a risk to the program because brokers have helped facilitate school and student fraud and misused or stolen student funds in the past.
However, according to ICE officials with SEVP, SEVP’s Risk Assessment Model and Framework was not designed to define all of the risks posed to SEVP. Second, while SEVP’s Risk Assessment Model and Framework assesses the potential effect of the risks posed by students, schools, and DSOs, it does not discuss the likelihood of those risks’ occurrence. For example, the Risk Assessment Model and Framework contains a narrative outlining the potential negative consequences of each of the three broad risk categories but does not address the likelihood of those risks occurring. According to SEVP officials, SEVP’s Risk Register helps identify and determine the likelihood of identified program risks. However, our review of the Risk Register found that it is used to track program-wide risks and does not identify or discuss specific fraud risks. Further, these officials stated that many of the components in a fraud risk profile are included in SEVP’s Risk Assessment Tool, but this tool was developed to prioritize the review of SEVP-certified schools that have potential compliance issues and was not designed to address all SEVP fraud risks, such as the risks posed by students or brokers. Using information on the likelihood of risk occurrence can help managers decide how to allocate resources. For example, managers can use this information to allocate resources to addressing fraud risks that are most likely to occur or have relatively high impact. Third, SEVP’s Risk Assessment Model and Framework does not assess the agency’s tolerance for all fraud risks to the program. For example, while SEVP officials stated that students represent a significant risk to the program, they have not fully assessed the extent of risks associated with student fraud or the agency’s tolerance for it. In October 2017, the SEVP Director stated that SEVP was just beginning to get a better understanding of student risks but had not done an assessment of their likelihood and tolerance.
However, SEVP officials acknowledged the importance of fully assessing student risks because of the challenges that can be associated with detecting, preventing, and responding to student fraud. Fourth, SEVP’s Risk Assessment Model and Framework does not examine the suitability of existing fraud controls or prioritize all residual risks that remain after inherent risks have been mitigated by existing control activities. We found that, while the Risk Assessment Model and Framework discusses different internal controls and tools used to prioritize and address risks in the school certification and recertification process, such as the Risk Assessment Tool, it does not explicitly identify any internal controls or tools used to prioritize or address student risks. In addition, the Risk Assessment Model and Framework does not identify and prioritize residual fraud risks that ICE has flagged as being vulnerabilities to the program. According to ICE agents in four field offices and officials in the Compliance Unit, limitations to SEVP’s ability to prevent some schools that present fraud risks from obtaining certification or continuing to participate in the program after fraud risks have been identified represent residual risks to the program. For example, officials in the Compliance Unit stated that certified schools that have been accredited through an accrediting body recognized by the Department of Education generally represent a lower fraud risk, but ICE has still experienced noncompliance and cases of fraud with these schools. At one point several fraud cases were tied to the same accrediting body. In another example of a potential residual risk to the program, ICE field agents stated that potentially fraudulent schools may continue to operate during criminal investigations, which can take several years to investigate and prosecute. 
During the investigation, schools may remain in operation and continue to enroll foreign students, provided their certification is not withdrawn through other administrative actions. As one example, ICE’s investigation into Prodee University—a case that involved hundreds of students—began in 2011, but warrants were not issued until 2015. The school continued to operate and accept foreign students during the 4-year investigation, creating residual risk to the program during these years. According to SEVP’s Director, the program has not developed a fraud risk profile that fully addresses all identified risks because the program has not yet developed the maturity needed to manage its risks in this way, but she noted that doing so could be a good next step in the process. Without a fraud risk profile consistent with leading practices—which identifies all fraud risks, discusses the likelihood of those risks, assesses the agency’s risk tolerance, and determines the suitability of related controls—ICE cannot ensure it has taken all necessary actions to address SEVP risks.

ICE Is Exploring the Use of Data Analytics to Aid Fraud Detection in SEVP

ICE is exploring the use of better data analytics to help detect fraud in SEVP but has not yet employed techniques, like network analysis, to help detect and prevent fraud prior to certification. ICE officials with SEVP stated that they are exploring the use of additional data-analytics tools to help mitigate fraud in the program, including tools that can perform network analysis. However, these efforts are in their early stages and have been limited to conversations between program staff. While previously noted efforts to improve SEVIS may also include additional data analytics to mitigate fraud, these efforts have remained underway since 2008.
Agency officials told us they recognize that better analytic tools can help them detect and prevent fraud in the certification process and are seeking additional resources to support this effort. According to agency documentation, SEVP awarded a contract in September 2018 to help establish a data-governance framework within SEVP. Among other things, the contract will examine the tools, skill sets, and number of people needed to support the data-related needs for SEVP, to include operational data and analytics. According to agency officials, SEVP plans to award a contract in the first quarter of fiscal year 2019 to provide better data-analytics support. Data-analytics approaches, such as network analysis, have the potential to enable ICE to identify high-risk schools prior to initial certification, thus allowing SEVP to apply increased oversight, as needed, during the adjudication process. Network analysis involves a quantitative approach for analyzing, summarizing, and graphically representing complex patterns of relationships among individuals or organizations. Such a technique is useful for identifying associations, such as between schools with current or past administrative and criminal concerns and those schools seeking certification. Information about the connections and relationships among schools—developed through network analysis—may then provide leads in reviews and investigations in the certification and recertification processes, which are important controls for preventing fraudulent schools from entering and remaining in the SEVP program. ICE field agents with two of five field offices we visited stated that it can be challenging to identify fraudulent schools as compared to legitimate ones during the initial certification of schools. 
For example, agents familiar with one investigation stated that after ICE began investigating a school for suspected fraud, the owner tried to establish another school, which was only identified because of a lead provided through interviews conducted during the investigation. Further, because tools such as the Risk Assessment Tool use data analytics, but rely on information collected from current SEVP-certified schools, it can be difficult to identify schools with fraud concerns before they are certified to participate in the program. Using a network approach in our analysis of 2,439 SEVP-certified schools, we identified 11 connections that could raise fraud concerns. Specifically, we conducted a network analysis utilizing both public and proprietary information associated with certified schools as of September 2017. We obtained basic information on these schools from ICE, such as school names and addresses. We also used public records associated with these schools related to businesses and people, such as past and current executives. Using this information and freely available public software, we identified relationships among certified schools that ICE had previously identified as having potential compliance or fraud concerns and other certified schools that did not have such concerns. For example, in 11 connections, we identified instances in which an executive appeared to have been employed by a school under active criminal investigation or administrative review who was either previously or later employed by a different school not under investigation or review. Moreover, for 2 of the 11 connections, we found additional derogatory information associated with executives tied to SEVP-certified schools that could raise fraud concerns. For instance, one executive had employment terminated from a previous school and was under investigation for misappropriating school funds for personal use. 
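The network approach used in our analysis can be sketched in a few lines: map each executive to the schools they have been associated with, then flag any certified school that shares an executive with a school under active criminal investigation or administrative review. All school and executive names below are invented for illustration:

```python
from collections import defaultdict

# (executive, school) employment associations, e.g., from public records.
# These records are invented for illustration.
records = [
    ("exec_1", "school_under_investigation"),
    ("exec_1", "newly_certified_school"),  # same executive at a second school
    ("exec_2", "unrelated_school"),
]
under_review = {"school_under_investigation"}

def flag_connections(records, under_review):
    """Return (executive, school) pairs where the executive is also tied to a
    school under investigation or administrative review."""
    schools_by_exec = defaultdict(set)
    for exec_id, school in records:
        schools_by_exec[exec_id].add(school)
    flags = []
    for exec_id, schools in schools_by_exec.items():
        if schools & under_review:                 # exec linked to a flagged school
            for school in schools - under_review:  # other schools they touch
                flags.append((exec_id, school))
    return flags

flags = flag_connections(records, under_review)
```

Connections surfaced this way do not establish fraud; they would serve only as leads for prioritizing administrative review, consistent with how the report describes using such associations.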
While these connections do not prove fraud or noncompliance, they do provide information about potential risks, which can inform the prioritization of administrative and investigative resources during certification. ICE currently has limited ability to identify associations among schools with potential fraud concerns before they are certified to participate in the program. According to our Fraud Risk Framework, federal managers should design and implement specific control activities to prevent and detect fraud. These control activities can include data analytics, among other things, and should emphasize fraud prevention to the extent possible. A network approach provides the capability to better prevent and detect fraud by identifying potentially fraudulent schools before they are certified by SEVP and by detecting associations that pose a fraud risk among those already certified.

ICE Has Processes for School Certification and Ongoing Compliance Monitoring, but Long-Standing Delays in Recertifying Schools Pose Fraud Risks

ICE has processes in place for school certification, recertification, and ongoing compliance monitoring, and has taken steps to improve school certification controls since our 2012 report. We also found that ICE followed its established procedures and specifically identified GAO’s fraudulent petitions or otherwise took appropriate steps to prevent the petitions from moving forward in the process during our three independent covert tests of SEVP internal controls over the school certification process. However, the agency continues to face long-standing delays in conducting recertification reviews every 2 years to ensure that SEVP-certified schools continue to meet program requirements—one of its important fraud risk controls. As a result of these delays, ICE has a queue of recertification petitions awaiting adjudication, which creates additional fraud risks to the program if higher-risk schools continue to operate pending recertification.
However, the agency has not assessed the magnitude of these risks.

ICE Assesses Schools’ Initial and Continued Eligibility to Enroll Foreign Students through the Certification and Recertification Processes and Ongoing Compliance Monitoring

ICE’s certification and recertification processes are designed to assess schools’ initial and continued eligibility to enroll foreign students and, as previously discussed, once a school is certified, ICE is to monitor its continued program eligibility. SEVP-certified schools are to undergo recertification reviews every 2 years (see fig. 6). Initial certification: As previously discussed, to be eligible for SEVP certification, a petitioning school must establish at the time of filing that it is a bona fide institution of learning or other recognized place of study that possesses the necessary facilities, personnel, and finances to conduct, and is in fact engaged in, instruction in recognized courses. SEVP officials stated that they address potential fraud risks during the initial certification process by verifying the schools’ information and documentation through web-based research and a site visit to interview the school’s DSO and observe the school’s facilities. According to SEVP officials and guidance, as of October 2016, field representatives are responsible for conducting and documenting site visits for certifications. When conducting the visits, field representatives are to gather evidence on school eligibility for certification, review the facilities, and interview personnel nominated on the petition to become DSOs. They may also report back any anomalies or areas of concern they may notice for further vetting by the Compliance Unit. SEVP received approximately 2,000 certification petitions from fiscal years 2013 through 2017. See figure 7 for details on the number of approved and denied petitions during this period.
ICE has implemented several controls to address fraud risks in the school certification process since our 2012 report on SEVP program risks, but long-standing delays in the recertification process create additional fraud risks. In particular, ICE strengthened its processes for verifying and monitoring schools’ accreditation and states’ licensing statuses. For example, since December 2012, SEVP adjudicators are to verify all “in lieu of” letters during the school’s initial-certification and recertification processes. In May 2015, SEVP developed a continuous process for verifying schools’ state licensing and accreditation status and updated its Adjudicator’s Manual with specific actions adjudicators must take to consistently verify evidence provided by schools, including “in lieu of” letters and states’ licensing documentation. In addition, SEVP took steps to ensure that all flight schools had the appropriate Federal Aviation Administration certification. Recertification: To be eligible for recertification, an SEVP-certified school must demonstrate at the time of filing that it remains eligible for certification and has complied during its previous period of certification or recertification with record-keeping, retention, reporting, and other program requirements. SEVP received approximately 14,000 recertification petitions from fiscal years 2013 through 2017. See figure 8 for details on the number of approved and denied petitions during this period. The recertification process is an important fraud risk control, according to ICE officials, since they may determine that some certified schools are potentially noncompliant during the recertification process. For example, SEVP denied 105 recertification petitions from fiscal year 2013 through fiscal year 2017. On the basis of our review of recertification denial data, the majority of denials were due to the school’s abandoning its petition for recertification by not responding to SEVP’s request for further information. 
Appendix IV provides additional details on the withdrawal and denial of certification and recertification petitions as outlined in federal statute and regulation. For the remaining schools, SEVP issued a formal recertification denial notice for a variety of reasons, including those that highlight fraud risks in the program, such as: improper issuance of Forms I-20, including the issuance of forms to foreign students who will not be enrolled in or carry a full course of study; DSO conduct that did not comply with program regulations; willful issuance by a DSO of a false statement; failure to timely report school or course-of-study information, including material changes; and failure to maintain the accreditation or licensing necessary to qualify graduates as represented in the school’s Form I-17. Ongoing compliance monitoring: The Enhanced Border Security and Visa Entry Reform Act of 2002 provides that SEVP-certified schools are to comply with record-keeping and reporting requirements to enroll nonimmigrant students. Between schools’ initial certifications and their subsequent recertification reviews, ICE uses a variety of mechanisms to monitor ongoing compliance with program requirements and mitigate fraud risks. For example: SEVP deployed its first group of field representatives in 2014. As of June 2018, ICE had 57 field representatives across 60 different geographic areas of responsibility nationwide. According to SEVP guidance, field representatives are to act as direct liaisons between SEVP and certified schools and are to try to meet with all certified schools in their territory at least once per year if the school has foreign students enrolled, or once every 2 years if no foreign students are enrolled. According to SEVP officials, the field representatives are to have a customer-service focus and assist DSOs in adhering to program requirements and, as a result, do not have law-enforcement or investigative responsibilities.
However, if field representatives learn of potential fraud while visiting a school, they are to document and send this information to SEVP headquarters. All of the eight field representatives we interviewed reported that they primarily have a customer-service role but have also identified and reported suspected fraud to SEVP headquarters. For instance, one representative stated that she reported a language school because its stated level of student enrollment did not appear to correspond with the number of students in class during her visits to the school. SEVP adjudicators are to verify and adjudicate changes that occur at an SEVP-certified school that require an update to the school’s Form I-17 petition information in SEVIS. These changes include the school’s name, location, or new areas of study offered, among others. According to Certification Unit officials, adjudicators review information from both SEVP’s risk tools and field-representative school-visit reports when adjudicating updates to identify any indications of noncompliance or fraud that need to be further reviewed and researched by the Compliance Unit. Compliance Unit staff are to vet tips provided by external parties (such as DSOs from other schools) or internal stakeholders (such as field representatives or Certification Unit adjudicators) to determine whether they indicate the need to open an administrative or criminal investigation on the school. Compliance Unit staff may also identify schools for additional monitoring. The Compliance Unit is also responsible for extracting and analyzing data from SEVIS on an ongoing basis, including data related to certified schools and foreign students suspected of noncompliance and fraud, among other things. According to ICE officials, staff are responsible for researching schools with high-risk scores provided by the Risk Assessment Tool. 
ICE may conduct an out-of-cycle review of a school at any time to help determine whether the school is complying with its reporting and record-keeping requirements and to ensure the school’s continued eligibility for SEVP certification. ICE may initiate an out-of-cycle review as a result of receiving information regarding potential noncompliance or fraud. The out-of-cycle review process may include a review of student records, a request for the submission of documentation to verify accreditation, a request for proof of state licensure, or a request for any other required evidence that establishes a school’s continued eligibility for SEVP certification. ICE officials stated that they may, pending the result of this review, issue a remedial action plan to the school describing the areas of noncompliance, such as correcting student records, that the school is required to address to maintain its program eligibility. If, upon completion of an out-of-cycle review, SEVP determines that a certified school has failed to sustain eligibility or has failed to comply with the record-keeping, retention, reporting, and other requirements, SEVP will institute withdrawal proceedings by serving the school a notice of intent to withdraw SEVP certification. At the conclusion of withdrawal proceedings, a school found to be ineligible for continued SEVP certification as a result of an out-of-cycle review will receive a notice of withdrawal (see app. IV for additional information on the withdrawal process).

ICE Followed Its Procedures during Three GAO Covert Tests of ICE’s School Certification Controls

ICE followed established procedures during our three covert tests of the internal controls over the SEVP school certification process by either successfully identifying GAO’s fraudulent petitions or by taking appropriate steps to prevent the petitions from moving forward in the process. Therefore, we did not identify any significant deficiencies during our testing of these controls.
We submitted certification petitions and conducted other covert investigative work for three fictitious schools, all of which have differing certification requirements. Using these schools, GAO agents applied for SEVP certification. For one of the fictitious schools, we tested SEVP certification controls that require schools to submit complete documentation by submitting an application for the school that was missing several of the required documents. Consistent with its procedures, ICE flagged our petition as incomplete and sent us a notification stating that our petition was canceled because we failed to submit all supporting evidence as outlined in the regulations. For our second school, we tested SEVP controls requiring schools to schedule and complete a site visit conducted by an SEVP field representative by submitting a completed petition but avoiding the site visit and requesting that our paperwork move forward without it. SEVP’s field representative subsequently notified us that our petition would not move forward until a site visit was performed. For our third fictitious school, we submitted an application and participated in a site visit with SEVP officials. We tested SEVP controls related to verifying application documentation and whether SEVP site-visit officials followed established procedures for the site visit. The field representative toured the facilities and interviewed GAO agents posing as school officials. During its review of our petition, ICE took steps to verify our school’s information and discovered that documentation we submitted was fictitious. As a result, SEVP officials subsequently referred our school to ICE agents for further investigation, consistent with ICE policies and procedures. Upon learning that ICE followed its documented internal control processes, we concluded our covert testing.
Long-Standing Delays in Recertifying Schools Create Additional Fraud Risks in SEVP

ICE faces long-standing challenges in conducting school recertification on a 2-year basis consistent with statute and regulation, which may allow potentially fraudulent schools to operate for a longer period without detection. The Enhanced Border Security and Visa Entry Reform Act of 2002 states that DHS must conduct compliance reviews every 2 years, during which ICE reviews a school’s records to verify that it continues to comply with program-eligibility requirements. ICE began the first recertification cycle in May 2010—8 years after the enactment of the statutory requirement for periodic review of SEVP-certified schools. As of March 2012—nearly 10 years after statutory enactment—ICE reported that it had recertified approximately 19 percent of certified schools. In October 2016, ICE reported that it had completed its first round of recertification (in other words, all existing certified schools had been recertified at least one time) and had used recertification to address a number of issues, including gathering missing data for some school records. ICE has continued to recertify schools. However, Certification Unit officials told us that, while recertification should be conducted every 2 years, ICE has been unable to meet a 2-year time frame for all certified schools. ICE has been extending schools’ certification expiration dates since officials began recertifying schools in 2010, according to Certification Unit officials, to provide additional time for adjudicating recertification petitions. According to ICE regulations, schools should be notified 180 days before their certification expiration date and must file a completed petition for recertification by such date, which is 2 years from the date of their previous SEVP certification or the recertification expiration date.
However, as described in figure 9, SEVP has been extending schools’ certification expiration dates by 180 days beyond the 2-year mark as defined in ICE’s regulation. Under this process, schools must submit their complete petition and supporting documentation to SEVP within 180 days after the 2-year mark. Extending certification expiration dates increases the period between each recertification review, resulting in a decrease in the number of recertification reviews conducted in a given time frame, as shown in the hypothetical example of two schools in figure 10. For instance, if SEVP initially certified a school in January 2016, by providing an extension SEVP is setting the school’s certification expiration date to July 2018—2 years and 180 days after the initial certification—as opposed to 2 years after the initial certification, which would be consistent with ICE regulations. After receiving the school’s documentation, Certification Unit staff need time to review and adjudicate the petition. If this school submits a complete petition to SEVP in June 2018—1 month before its revised expiration date—SEVP staff may take additional time beyond the revised expiration date to adjudicate the petition, depending on the facts and circumstances of the specific petition. SEVP officials stated that, if necessary, they can further extend the certification expiration date to accommodate the time needed for their review. For instance, SEVP may not adjudicate this school’s petition until December 2018. Once SEVP completes its adjudication in December 2018, the school’s new certification expiration date would be June 2021 (2 years and 180 days after December 2018). Thus, rather than potentially being able to complete two rounds of recertification during this 5-year period consistent with ICE regulation, SEVP would recertify the school only once. 
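The hypothetical timeline above reduces to simple date arithmetic. The sketch below (Python; the specific days of the month and the helper name are our illustrative assumptions, not ICE's) shows how the 180-day extension, plus adjudication time, stretches the effective cycle well beyond 2 years:

```python
from datetime import date, timedelta

EXTENSION = timedelta(days=180)  # SEVP's extension beyond the 2-year mark

def add_years(d: date, years: int) -> date:
    # Add whole calendar years (none of these dates fall on Feb 29).
    return d.replace(year=d.year + years)

initial_cert = date(2016, 1, 15)                         # certified January 2016
regulatory_expiration = add_years(initial_cert, 2)       # January 2018 under regulation
extended_expiration = regulatory_expiration + EXTENSION  # mid-2018 as extended by SEVP

# Adjudication can run past even the extended date, e.g., to December 2018.
adjudicated = date(2018, 12, 15)
next_expiration = add_years(adjudicated, 2) + EXTENSION  # roughly June 2021

print(extended_expiration)  # 2018-07-14
print(next_expiration)      # 2021-06-13
```

Under these assumed dates, the school is reviewed only once between January 2016 and January 2021 (the December 2018 adjudication), rather than the two reviews a strict 2-year cycle would produce.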
As we reported in 2012, according to SEVP officials, ICE delayed the recertification process until after SEVIS was deployed in 2003 and the program fee was increased in 2008 to support hiring additional staff. Further, with regard to resources, ICE officials stated that they are cross-training adjudicative staff across all of their program areas to help address the recertification workload, and creating regional adjudication teams with assigned territories similar to the field representatives’ territories to allow the adjudicators to work with the same schools throughout the school’s participation in the program. In addition, in February 2018, SEVP’s Director stated that ICE was expecting to hire additional adjudicators for a total of 10. In July 2018, ICE identified the need to increase initial certification fees and add a new recertification fee to, among other things, hire additional adjudicators to address longer recertification processing times. Specifically, ICE stated that, at present staffing levels, SEVP is able to process 1,939, or 44 percent, of the required annual projected 4,400 recertification cases. ICE’s actions to allocate additional resources to the recertification process are a step in the right direction toward addressing its recertification delays. However, it is unclear whether these actions alone will be adequate to address the delays. As of June 2018, ICE officials told us that there were 3,281 recertification petitions that needed to be adjudicated. As previously discussed, recertification reviews are an important fraud risk control because they are one of ICE’s primary means of reviewing each school’s data and identifying potential school noncompliance and fraud, especially since an out-of-cycle review may not be conducted for each school. 
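The staffing figures ICE reported imply a widening gap between required and completed reviews. A back-of-the-envelope projection (the reported figures come from ICE; the assumption of unchanged staffing and a constant annual caseload is ours, for illustration only):

```python
# Figures ICE reported as of 2018; the multi-year projection is our illustration.
annual_required = 4_400   # projected recertification cases per year
annual_capacity = 1_939   # cases SEVP can process at current staffing
backlog = 3_281           # petitions awaiting adjudication, June 2018

shortfall_per_year = annual_required - annual_capacity  # 2,461 cases per year
share_processed = annual_capacity / annual_required     # ~0.44, the 44 percent ICE cites

# With unchanged staffing, the queue grows by the shortfall each year.
for year in range(1, 4):
    backlog += shortfall_per_year
    print(f"year {year}: backlog {backlog:,}")
```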
As Federal Internal Control Standards state, management should: (1) establish and operate activities to monitor the internal control system and evaluate the results, and (2) identify, analyze, and respond to risks related to achieving the defined objectives. By not requiring schools to submit their petitions within the 180-day period prior to the 2-year expiration date, as required by regulation, ICE has limited assurance it is leveraging the recertification process effectively to identify and respond to potential fraud risks to the program, including those risks associated with allowing a fraudulent school to operate for a longer period. ICE’s plan to increase the number of SEVP adjudicators may help it meet the 2-year recertification requirement, but without monitoring and evaluating the efficacy of these actions, ICE will not have reasonable assurance it can effectively manage the recertification process and associated fraud risks. ICE Does Not Assess Residual Risk Posed by Schools in Its Recertification Queue As previously discussed, ICE’s queue of recertification petitions awaiting adjudication creates additional fraud risks to the program if higher-risk schools continue to operate pending recertification. However, ICE has not assessed the magnitude of such risks. As of June 26, 2018, ICE had 3,281 recertification petitions in a queue for review, according to SEVP officials, petitions that ICE adjudicates in the order in which they were filed. As discussed, ICE uses a variety of mechanisms to monitor schools’ ongoing compliance with program requirements and mitigate fraud risks. In addition, ICE assesses and considers schools’ risks during the adjudication process for recertification. Specifically, according to SEVP’s recertification standard operating procedures, case analysts in the Certification Unit are to review the recertification packages once submitted to determine whether they are complete and prepare them for adjudication. 
Further, SEVP officials stated that the Certification Unit staff use an assessment of the school’s risk to help prioritize further analysis and review efforts. When adjudicating recertification petitions, adjudicators are to confirm that they have assessed the school’s risk and whether any identified risks have previously led to any further action, according to Certification Unit officials. If case analysts determine that compliance issues are present (e.g., the school has closed or the school has made updates to the Form I-17 that are awaiting adjudication), they are to notify their supervisors. For higher-risk schools, Certification Unit officials stated that adjudicators may request more detailed evidence from schools as part of recertification, consistent with their standard operating procedures, than they would for lower-risk schools to help make more efficient use of the resources in this unit. These processes have helped SEVP consider and address potential risks during the recertification process. However, SEVP has not determined risks posed by schools in its recertification queue and, according to Certification Unit officials, does not prioritize the review of schools’ recertification petitions in its queue based on risk. As previously noted, ICE is required to conduct periodic reviews every 2 years to determine SEVP-certified schools’ continued program eligibility and compliance. The statute governing recertification does not, by its terms, preclude ICE from considering a school’s relative risk as part of the compliance review process. However, SEVP’s Director and Certification Unit officials stated that a recertification process that prioritizes reviews based on school risk would not be particularly helpful or add value in addressing school compliance concerns because the officials already have a number of mechanisms they can use, as previously discussed, to address potential noncompliance, including conducting out-of-cycle reviews of high-risk schools. 
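To illustrate the alternative GAO describes, a recertification queue can be ordered by risk rather than strictly by filing date. This sketch uses hypothetical schools and an invented 0-to-1 risk score; it is not SEVP's actual Risk Assessment Tool output:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Petition:
    school: str
    filed: date
    risk_score: float  # hypothetical 0-to-1 score from a risk assessment

queue = [
    Petition("School A", date(2018, 1, 5), risk_score=0.2),
    Petition("School B", date(2018, 2, 9), risk_score=0.9),
    Petition("School C", date(2018, 3, 1), risk_score=0.6),
]

# Current practice: adjudicate in the order petitions were filed.
fifo_order = sorted(queue, key=lambda p: p.filed)

# Alternative: highest residual risk first, filing date as the tiebreaker.
risk_order = sorted(queue, key=lambda p: (-p.risk_score, p.filed))

print([p.school for p in risk_order])  # ['School B', 'School C', 'School A']
```

Under risk ordering, the highest-risk school is reviewed first instead of waiting behind earlier-filed, lower-risk petitions.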
Although ICE considers schools’ risk-related information during the adjudication process and may identify noncompliant or potentially fraudulent schools through ongoing monitoring activities, ICE has not determined the extent to which there are residual fraud risks posed by schools in the recertification queue that ICE has identified as higher-risk than other schools awaiting recertification. According to GAO’s Fraud Risk Framework, managers should rank residual fraud risks in order of priority, using the likelihood and impact analysis, as well as risk tolerance, to help decide how to allocate resources to respond to residual fraud risks, all of which is documented in a fraud risk profile. As previously discussed, a fraud risk profile (1) identifies the inherent fraud risks affecting the program, (2) assesses the likelihood and effect of each type of fraud risk that it has identified, (3) determines the agency’s tolerance for certain types or levels of fraud risks in the program, and (4) examines the suitability of existing controls for each fraud risk. Given SEVP’s long-standing delays in recertifying schools, without an assessment of residual risks posed by the recertification queue—as part of its fraud risk profile, as previously noted—ICE cannot ensure that it is effectively addressing the risks posed by higher-risk schools awaiting recertification, limiting its ability to further strengthen its fraud risk-management efforts in SEVP. ICE Has Implemented Controls That Mitigate Fraud Risks Related to the Eligibility, Suitability, and Training of DSOs, but Weaknesses Exist ICE has identified fraud risks related to DSOs and implemented controls to mitigate these risks, but weaknesses exist in four key areas: (1) verification of information provided by DSOs in support of their eligibility, (2) background checks, (3) mandatory compliance training, and (4) fraud-risk training. 
Prior to approval of schools’ nomination of individuals to serve as DSOs, these nominees must meet eligibility requirements and pass a criminal-background check, but weaknesses exist in both of these controls. In addition, once ICE approves prospective DSOs, it has controls for oversight and training; however, this training is not mandatory and does not address fraud risks. ICE Does Not Routinely Verify DSO-Submitted Eligibility Information in Support of Their Immigration or Citizenship Status ICE has eligibility requirements for school employees seeking to serve as DSOs at SEVP-certified schools, as discussed earlier, but does not routinely verify DSO-submitted eligibility information in support of their immigration or citizenship status. According to ICE regulations, to be eligible to participate as a DSO, an individual must be a regularly employed member of the school administration whose office is located at the school and must meet two primary eligibility criteria. First, a DSO’s compensation may not include commissions for recruitment of foreign students. To verify that requirement, a field representative is to interview a school’s principal DSO during an initial certification site visit, and ask whether any prospective DSOs receive compensation from commissions for recruitment of foreign students. In addition, a field representative is to review the school’s website for recruitment-related activities and evaluate the DSO’s job title and position description, according to ICE officials. Second, DSOs must be U.S. citizens or lawful permanent residents, but the Certification Unit does not routinely verify the evidence provided to meet this eligibility requirement. Specifically, DSOs are to submit documentation during the school’s certification or recertification process—such as a passport, birth certificate, Permanent Resident Card or Alien Registration Receipt Card, or copy of naturalization/citizenship certificate—as evidence of their U.S. 
citizenship or lawful permanent resident status. The Certification Unit is to review this documentation to verify that the biographic details match the information provided on the school’s Form I-17. According to ICE officials, if the Certification Unit suspects that a prospective DSO’s documentation may not be valid, it will send the information to the Compliance Unit for additional review. However, neither the Certification Unit nor the Compliance Unit routinely verifies the information reported by DSOs in support of their immigration or citizenship status because they do not have access to the type of information needed to independently verify this information for all prospective DSOs, according to ICE officials. Certification Unit officials told us that verifying information on naturalized U.S. citizens and lawful permanent residents would be beneficial. They said that they have previously asked for access to information, such as other DHS databases that contain information on naturalized U.S. citizens or lawful permanent residents, to strengthen their process for determining the eligibility of prospective DSOs. However, they have yet to receive access to this information. In addition, verifying eligibility information for U.S.-born citizens would also be valuable, but is more difficult than for naturalized U.S. citizens or lawful permanent residents, according to ICE officials. This is because ICE does not collect DSOs’ Social Security numbers—key information necessary to verify U.S. citizenship—in part because SEVIS does not have the necessary security features needed to collect and house those data, and adding those features would be costly. In June 2018, ICE management officials stated that they were reviewing databases that may be useful to verify DSOs’ self-reported eligibility information but did not provide any additional support or documentation of those plans or a time frame for completing this review. 
As outlined in our Fraud Risk Framework, as part of an effective antifraud strategy, managers should take steps to verify reported information, particularly self-reported data. Specifically, managers can benefit from conducting data matching to verify key information, including self-reported data and information necessary to determine eligibility, using government or third-party sources to verify data electronically. Until ICE routinely verifies the eligibility information submitted by prospective DSOs in support of their immigration or citizenship status, particularly for naturalized U.S. citizens and lawful permanent residents, ICE will not be able to ensure that it is preventing ineligible individuals, including those who represent a fraud risk, from becoming DSOs and providing them with access to SEVIS to maintain student records. ICE Plans for More-Comprehensive Vetting of Prospective DSOs’ Suitability for the Position Remain Incomplete ICE has taken some initial steps to strengthen the process for vetting prospective DSOs but has not implemented comprehensive background checks on DSO nominees prior to approving them to carry out the DSOs’ reporting, record-keeping, and other functions. ICE officials told us that they have been working since December 2016 to develop a plan to conduct comprehensive background checks on prospective DSOs to address past concerns about DSO vetting. Specifically, in 2011, ICE expressed concerns that DSOs, who were not required to undergo background checks, were responsible for maintaining updated information of foreign students in SEVIS. According to ICE officials, they have taken initial steps to address these concerns by implementing criminal-background checks on prospective DSOs. Specifically, in May 2017, ICE started conducting background checks on all school employees nominated to be DSOs at the time of petitioning for initial SEVP certification or whenever a school requests to add a new DSO. 
For these types of checks, ICE officials within CTCEU are to review the prospective DSO’s biographic information from both the Form I-17 and the proof of U.S. citizenship or immigration status documentation received by the school. After ICE officials in CTCEU complete this check, they are to forward the findings to SEVP for review. If SEVP determines that a prospective DSO is unsuitable for participation in the program, ICE officials in SEVP are to send a notice of rejection to the nominating school. From April 2017 to March 2018, ICE screened approximately 4,750 prospective DSOs and identified 68 individuals with a criminal history. ICE rejected the nomination of 15 of these prospective DSOs, because, for example, they had criminal histories that included instances of identity theft, fraud in obtaining U.S. citizenship, and conspiracy, among other crimes. ICE officials stated that certain crimes will not necessarily disqualify a candidate, such as misdemeanors, traffic-related infractions, or other lesser crimes. As of June 2018, ICE officials told us that they are developing a more-comprehensive background-check process to screen prospective DSOs against additional government data sources. Specifically, ICE officials told us that they are seeking to partner with DHS’s Transportation Security Administration (TSA) to collect biometric information (e.g., fingerprints) on prospective DSOs at TSA’s enrollment provider locations nationwide during the school certification process. ICE officials stated that they intend to provide the biometric information they collect through TSA’s enrollment provider to ICE’s Office of Professional Responsibility (OPR), and OPR officials will review such information to determine DSOs’ suitability. According to agency documentation, ICE’s OPR would vet such information against data sources to screen these individuals for prior criminal histories such as sexual misconduct, terrorist activities, and immigration violations. 
According to ICE officials, they also intend to use this process to periodically review the suitability of incumbent DSOs. While ICE officials have told us they intend to expand the screening of prospective DSOs, ICE does not have a documented implementation plan that outlines how the project will be executed. The Project Management Institute’s A Guide to the Project Management Body of Knowledge (PMBOK® Guide) identifies standards related to project-management processes, including the need to have documented implementation plans describing how the project will be executed, monitored, and controlled, as well as requirements and techniques for communication and establishing agreements among stakeholders. In addition, GAO’s Schedule Assessment Guide identifies best practices associated with developing and maintaining a reliable, high-quality schedule. ICE provided us with a draft of its revised background-check policy, talking points on its plans for these checks, and draft requirements it shared with TSA in December 2016. However, these documents do not provide a detailed project-implementation plan to guide ICE’s effort. As of June 2018, ICE and TSA officials have met twice in the last 2 years, and ICE officials do not have any documents or other written details on their planned coordination with TSA. SEVP’s Director acknowledged that SEVP will need to develop a project plan to help guide its coordination with TSA and ICE’s OPR. Without a documented implementation plan for this effort that outlines how the project will be executed, monitored, and controlled, ICE does not have reasonable assurance that it will be able to implement a more-comprehensive DSO background-check process. 
ICE Has Mechanisms to Monitor and Support DSOs but Does Not Have Mandatory Training for Them ICE has established mechanisms for monitoring SEVIS usage by approved DSOs and providing support to DSOs to help them ensure their schools comply with SEVP requirements but does not mandate training for DSOs. Once DSOs are approved by SEVP, they are authorized to make changes to student records in SEVIS and to create Forms I-20, which enable students to apply for nonimmigrant student status. To detect noncompliance and fraud that may be committed by DSOs during this process, ICE has established mechanisms to monitor information entered and identify data for computers used by DSOs through SEVIS compliance checks, among other things. For example, according to agency officials, ICE monitors DSO actions in SEVIS to help prevent noncompliance and fraud. In addition to monitoring DSOs’ use of SEVIS, ICE provides support and training to DSOs to help ensure they can effectively update and maintain student records in SEVIS and provide recommendations to students regarding the maintenance of their status, according to our review of ICE documentation and interviews with ICE and school officials. According to program rules, DSOs are responsible for understanding SEVP regulations related to the requirements for foreign students’ admission, maintenance of status, and change of status and requirements for school approval. To assist them, ICE officials and DSOs that we interviewed told us that SEVP uses its field representatives to provide DSOs with a point of contact for questions related to the program. According to SEVP’s internal guidance, field representatives are expected to visit the schools within their areas of responsibility at least once a year to provide in-person guidance and training to DSOs. DSOs at 15 of the 17 schools we visited stated that the field representatives were helpful, including with providing guidance on how to comply with SEVP rules and regulations. 
In addition, SEVP internal guidance encourages DSOs to take its web-based training course on the responsibilities and obligations for both DSOs and foreign students in SEVIS. However, this course is voluntary. According to ICE officials and field representatives, the extent to which DSOs take the voluntary training varies—some DSOs receive additional training beyond the voluntary SEVP training, but other DSOs do not complete any training. ICE officials noted that the voluntary online training may be perceived as cumbersome and that, since it is not required, many DSOs instead reach out to field representatives or call the SEVP Response Center to get answers to questions that are covered by existing training materials. ICE officials also stated that they do not know the extent to which DSOs have completed the online training because they do not track this information. Further, the officials acknowledged that since training is voluntary, some DSOs may not complete it before assuming their responsibilities and gaining access to SEVIS. ICE officials we interviewed told us they encounter problems with DSOs complying with record-keeping requirements; however, they believe most of these issues are a result of DSOs not understanding program rules or their own responsibilities within the program. According to agency documentation, in 2014 SEVP found that some DSOs were inconsistently reporting school information in several SEVIS data fields. In addition, SEVP’s Risk Assessment Tool includes a number of high-risk indicators that may stem from DSO record-keeping errors within SEVIS, including students listed as enrolled in an academic program not available at that school (e.g., doctoral students at schools without doctorate degrees available) and students listed as active who have long exceeded their program’s end date or authorized employment’s end date. 
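Indicators like the two just described are straightforward to express as automated record checks. The sketch below uses invented field names and records, not SEVIS's actual schema:

```python
from datetime import date

# Hypothetical, simplified records; field names are ours, not SEVIS's schema.
school_programs = {"bachelor", "master"}  # degree levels this school offers
students = [
    {"id": "S1", "program_level": "doctorate", "status": "active",
     "program_end": date(2019, 5, 1)},
    {"id": "S2", "program_level": "master", "status": "active",
     "program_end": date(2017, 6, 1)},
    {"id": "S3", "program_level": "bachelor", "status": "active",
     "program_end": date(2019, 12, 1)},
]

def flag_indicators(records, offered, today):
    """Flag the two risk indicators described above."""
    flags = []
    for s in records:
        if s["program_level"] not in offered:
            flags.append((s["id"], "program not offered at school"))
        if s["status"] == "active" and s["program_end"] < today:
            flags.append((s["id"], "active past program end date"))
    return flags

print(flag_indicators(students, school_programs, today=date(2018, 6, 1)))
# [('S1', 'program not offered at school'), ('S2', 'active past program end date')]
```

Here the doctoral student at a school with no doctorate program and the student still listed as active a year past the program end date are both flagged for review.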
Errors such as these make it difficult for ICE officials to know whether the information in SEVIS is inaccurate due to unintentional mistakes by the DSO or whether the school or its employees may be engaged in potential fraud. For additional examples of potential noncompliance or fraud, see the box below. Potential Designated School Official Noncompliance or Fraud Student and Exchange Visitor Program officials cited the following examples of potential noncompliance or fraud that they have encountered, among others: the reported foreign-student enrollment listed in the Student and Exchange Visitor Information System (SEVIS) does not seem to correspond with the number of students attending class or the size of the school’s physical space, all enrolled foreign students listed in SEVIS are living at the same address, and students repeatedly transfer to several different schools. Field representatives at one location we visited noted that DSOs with multiple job responsibilities may not have time to keep up with SEVP rules and policy updates. Similarly, DSOs at 7 of the 17 schools we spoke with mentioned that they have multiple job responsibilities beyond their DSO duties. In addition, SEVP officials indicated that DSOs have a high rate of turnover, especially at small schools, and may lack the expertise to effectively follow program requirements. SEVP officials acknowledged that mandatory training could help reduce the number of unintentional violations by DSOs who may not adequately understand the program’s regulations, thus allowing SEVP staff to focus their monitoring efforts on schools and individuals who may be engaged in intentional noncompliance and fraud. In June 2018, ICE officials told us that they recently received internal agreement to require all new DSOs to complete training prior to gaining full access to SEVIS once the officials release a new version of their DSO training program. 
However, SEVP officials could not provide documentation on their plans, including time frames for completing the revised DSO-training program, whether to require DSO training, or how they will track DSO compliance. Federal Internal Control Standards calls for agencies to demonstrate a commitment to competence, including recruiting, developing, and retaining competent individuals. Further, it recommends that agencies establish expectations of competence for key roles, including possessing the necessary knowledge, skills, and abilities, and training individuals appropriately. Without mandatory training and a process to verify that training is completed, SEVP does not have reasonable assurance that DSOs are familiar with, and understand, their roles and responsibilities as outlined in program regulation. Most DSOs Do Not Receive Fraud Training SEVP’s voluntary DSO training emphasizes student and school compliance with program rules and the DSOs’ responsibilities to enter and maintain complete and accurate information in SEVIS in a timely manner but does not address fraud risks to the program, including previously identified fraud schemes or trends. According to ICE officials, some DSOs may receive fraud-specific training from ICE agents through the Project Campus Sentinel initiative; however, these visits are limited to a small portion of certified schools each year. During a Project Campus Sentinel visit, ICE guidance states that an ICE agent will meet with DSOs and provide information on how to detect potential fraud, including student visa exploitation and national security vulnerabilities. In addition, ICE guidance encourages ICE agents to remind DSOs to contact them when they encounter these instances. In fiscal year 2017, ICE officials reported that ICE agents visited 400 of the more than 18,000 SEVP-certified school campuses in existence at that time. 
According to ICE officials, the agency can only conduct a limited number of Project Campus Sentinel visits to schools each year due to competing investigative priorities. The DSOs we spoke with varied in their understanding of the role they should play in identifying and reporting fraud to SEVP. Specifically, DSOs at 8 of 17 schools told us they did not receive training on SEVP-related fraud risks or could not identify SEVP-provided, fraud-specific training. For example, DSOs at one school told us that there is confusion among DSOs about their role to prevent and report fraud and that this issue has been discussed at past training events and conferences. Specifically, they stated that there is some confusion over the difference between fraud and noncompliance. According to these DSOs, they are responsible for addressing issues of noncompliance, but they do not actively look for SEVP-related fraud. A DSO from another school told us she interprets the DSO role as providing program oversight, including oversight related to fraud, and that she previously reported an instance of potential student fraud to ICE when she encountered suspicious immigration paperwork. In addition, DSOs at another school told us that they were not aware of any training related to fraud risks within SEVP but noted that guidance about fraud trends or potential red-flag indicators could be useful. The Fraud Risk Framework identifies training as one way of demonstrating an agency’s commitment to combating fraud. Training and education intended to increase fraud awareness among stakeholders, managers, and employees serves as a preventive measure to help create a culture of integrity and compliance within the agency. Specifically, the Fraud Risk Framework discusses leading practices for training and education, including communicating responsibilities for implementing fraud controls and details on how and where to report fraud. 
In addition, increasing awareness of fraud schemes, including red flags and risk indicators, through training and education can serve a preventive purpose by helping create a culture of integrity and compliance within the program and can enable managers, employees, and stakeholders with responsibility for implementing aspects of the program to better detect potential fraud. According to ICE officials, DSOs can serve as the front line against SEVP-related fraud, and they provide a significant portion, if not the majority, of fraud-related tips. In June 2018, ICE officials told us that, in response to discussions that we had during our review, they plan to incorporate fraud training into the revised DSO training. However, because ICE officials just recently made that decision, they had not yet developed documented plans for this training or timelines for when it would be completed. While agreeing to incorporate fraud training into the revised DSO training is a good first step, the development and execution of those plans will be needed to strengthen fraud controls. Until ICE develops and implements a plan for fraud-specific DSO training, ICE will not have reasonable assurance that this training will be delivered and DSOs will have the information they need to address fraud within the program. Conclusions Through SEVP, ICE oversees over 1.2 million foreign students at nearly 9,000 SEVP-certified schools across more than 18,000 campuses. Past instances of fraud and noncompliance in the program have resulted in ICE taking some steps to address fraud risks in the program, such as developing a Risk Assessment Model and Framework. However, ICE does not have a fraud risk profile that identifies all of SEVP’s fraud risks, discusses the likelihood of those risks, assesses related controls, and identifies the agency’s tolerance for risk. 
Such a fraud risk profile would help ICE more effectively assess whether additional internal controls or changes to policies or regulations are needed. Moreover, ICE has not yet fully employed the use of data analytics, such as network analysis, to help it identify potentially fraudulent schools before they become certified to enroll foreign students and help it better use its administrative and investigative resources. ICE has also made improvements to its processes for certifying and recertifying SEVP schools and monitoring DSOs—all of which can help reduce the risk of fraud in the program. However, ICE continues to delay the recertification process by initiating the school recertification reviews after the 2-year certification expiration date, which is not consistent with ICE regulations. Further, ICE has not included an assessment of residual risks posed by the current recertification queue—as a part of the fraud risk profile previously noted—and as a result does not have a full understanding of the risks associated with schools awaiting recertification. Although DSOs play an important role in helping ICE oversee students in the program, ICE has recognized they can pose fraud risks to the program. However, ICE does not routinely verify DSO-submitted eligibility information and DSO suitability for participation in SEVP, and therefore does not have reasonable assurance that only eligible and suitable DSOs are participating in the program. Finally, ICE has not developed or implemented mandatory and fraud-specific training to improve DSOs’ compliance with program requirements and aid its efforts to detect fraud in the program. 
Recommendations for Executive Action We are making the following seven recommendations to ICE: The Director of ICE should develop a fraud risk profile that identifies inherent fraud risks affecting the program, assesses the likelihood and impact of those risks, determines fraud risk tolerance, examines the suitability of existing fraud controls, and prioritizes residual fraud risks, including residual risks posed by the recertification queue. (Recommendation 1) The Director of ICE should build on existing efforts to use data analytics by employing techniques, such as network analysis, to identify potential fraud indicators in schools petitioning for certification. (Recommendation 2) As ICE works to complete its efforts to hire additional SEVP adjudicators, the Director of ICE should begin notifying certified schools 180 days prior to, and requiring submission of complete recertification petitions by, the 2-year certification expiration date, consistent with regulation, and evaluate whether additional resources are needed. (Recommendation 3) The Director of ICE should, as practicable, verify the eligibility information provided to establish the immigration or citizenship status of lawful permanent residents and naturalized U.S. citizens, as well as U.S.-born citizens, who have been nominated or renominated to serve as DSOs. (Recommendation 4) The Director of ICE should develop an implementation plan for the project aimed at strengthening background checks for DSOs; that plan should outline how the project will be executed, monitored, and controlled. (Recommendation 5) The Director of ICE should implement mandatory DSO training and verify that the training is completed. (Recommendation 6) The Director of ICE should complete the development and implementation of its plans for mandatory fraud-specific training for DSOs. (Recommendation 7) Agency Comments and Our Evaluation We provided a draft of this report to DHS for its review and comment.
In its written comments, reproduced in appendix V, DHS concurred with our recommendations and described specific steps it plans to take in response to all seven of our recommendations. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Rebecca Shea at (202) 512-6722 or shear@gao.gov or Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: GAO’s 2012 Recommendations on the Student and Exchange Visitor Program, and the Agency’s Response Table 1 contains information on the eight recommendations that we made to U.S. Immigration and Customs Enforcement (ICE) in our 2012 report, and ICE’s actions to address them. We closed each of these recommendations as implemented. Appendix II: Objectives, Scope, and Methodology This report is a public version of a sensitive report that we issued on November 20, 2018, which examined the efforts that U.S. 
Immigration and Customs Enforcement (ICE) has taken since our 2012 report to address fraud risks, including the extent to which ICE has (1) taken steps to strengthen its management of fraud risks in the Student and Exchange Visitor Program (SEVP), (2) implemented controls to address fraud risks in the school certification and recertification processes, and (3) implemented fraud risk controls related to the eligibility, suitability, and training of Designated School Officials (DSO). The sensitive report included information related to SEVP internal controls used to help prevent and identify noncompliance or fraud in the program. The sensitive report also discussed some planned actions to improve these internal controls, some of which the Department of Homeland Security (DHS) deemed to be sensitive and that must be protected from public disclosure. This public report omits the information that DHS deemed to be sensitive, including details associated with (1) the oversight of schools during the certification and recertification process, (2) our covert testing of SEVP certification internal controls, and (3) current and planned actions to oversee DSOs. Although the information provided in this report is more limited, it addresses the same objectives and uses the same methodology as the sensitive report. For our first objective, to evaluate the extent to which ICE has taken steps to strengthen its management of fraud risks in SEVP, we assessed actions ICE, particularly SEVP and the Counterterrorism and Criminal Exploitation Unit (CTCEU), has taken since 2012 to design and implement controls to address fraud in the postsecondary, vocational, and English language school certification and recertification process. We reviewed documents, including regulations, processes and procedures, and guidance related to fraud risk management, school certification, and recertification processes, and the role of DSOs.
We evaluated the extent to which ICE’s practices were consistent with Standards for Internal Control in the Federal Government and GAO’s A Framework for Managing Fraud Risks in Federal Programs. In particular, we analyzed ICE documentation, such as standard operating procedures, policy statements, and guidance for adjudicators to determine how ICE’s processes and systems identify and assess risk in SEVP, including the SEVP Risk Assessment Model and Framework, Risk Assessment Tool, Risk Register, and other internal guidance. In addition, we reviewed information from ICE’s current SEVP administrative, watch, and criminal investigative cases and analyzed information on past cases of SEVP fraud, including indictments. Also, we interviewed ICE officials within SEVP to evaluate the extent to which the program has taken steps to strengthen its management of fraud risks since 2012. We met with senior officials from SEVP, including SEVP’s Director and management of the Risk Management Support Team, School Certification Unit (Certification Unit), Analysis and Operations Center (Compliance Unit), Policy Team, and Field Representative Unit. We interviewed officials from ICE’s Office of the Principal Legal Advisor to discuss regulatory priorities and legal authorities related to fraud prevention and detection. We also interviewed officials from ICE’s Identity Benefit Fraud Unit and Domestic Operations to discuss their roles in SEVP-related fraud prevention. In addition, we met with officials from CTCEU headquarters, including the Student and Exchange Visitor Information System Exploitation Section and criminal investigators from 5 of the 26 ICE field offices to discuss past cases of SEVP-related fraud and steps taken to identify and prioritize fraud risk. We visited ICE field offices in Washington, D.C.; Los Angeles and San Francisco, California; Newark, New Jersey; and New York, New York. 
We selected these locations based on a mix of criteria, including the following characteristics: (1) number of ongoing investigations of certified schools; (2) reported previous and current experience investigating SEVP-related fraud; (3) number of field representatives assigned to or located near the field office; and (4) number of schools that were located proximate to the field office and that were either pending recertification, as of July 2017, or had been recertified since August 2016. As we did not select a probability sample of ICE field offices to interview, the results of these interviews cannot be generalized to all of ICE’s 26 field offices. However, the interviews provided us with perspectives of ICE officials responsible for conducting school fraud investigations, including their views on the process SEVP has established for certifying and monitoring schools; fraud and national security vulnerabilities related to foreign students; and any challenges field offices have faced in their investigations. We conducted a network analysis utilizing both public and proprietary information associated with currently certified schools to determine the potential for additional data analytics to aid fraud risk-management efforts in SEVP. To develop this analysis, we identified a list of schools that, as of July 2017, had been identified by ICE as either being under active criminal investigation or subject to additional oversight or administrative action due to compliance concerns. We also selected a list of SEVP-certified postsecondary schools without such identified concerns as of September 2017. We restricted our set of schools to those with at least 20 foreign students as of September 2017. In total, we analyzed 2,439 schools: 170 with concerns and 2,269 without such concerns.
We then used an outside vendor to provide public and proprietary descriptive information associated with these schools, including addresses, businesses, and past executives. Using these data, we applied network-analysis techniques to identify connections between those schools with criminal or compliance concerns and schools without such identified concerns. We determined whether each of the postsecondary schools without compliance concerns was linked to any of those with compliance concerns via executive employment. Specifically, we identified instances in which an official associated with a school with criminal or compliance concerns was also associated with another school not identified as having those concerns. The underlying logic behind this focus was that a school associated with an official linked to a school of concern may warrant further review for possible criminal or compliance concerns. To further validate this information, we conducted additional research using investigative databases and the Internet to try to verify the instances identified in our analysis, such as by ensuring that the time frames of the connection appeared relevant or by verifying the identity of individuals and schools involved. While such connections are not proof of criminal or compliance problems, they may potentially be indicative of them. This is a diagnostic that has been used in other fraud-related network research. For our second objective, to evaluate the extent to which ICE has implemented controls to address fraud risks in the school certification and recertification processes, we assessed documentation describing SEVP’s school certification and recertification controls, interviewed headquarters and selected field-office ICE officials, and analyzed agency-provided recertification data.
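The executive-overlap diagnostic described above (flagging a school for further review when it shares an official with a school of concern) can be sketched in a few lines of Python. This is an illustrative reconstruction, not GAO's actual analysis code, and all school and official names below are invented.

```python
# Illustrative sketch of the executive-overlap diagnostic.
# All school and official names are hypothetical; the real analysis
# used vendor-supplied public and proprietary records.

def flag_linked_schools(officials_by_school, schools_of_concern):
    """Return schools not already of concern that share at least one
    official with a school of concern, mapped to the shared officials."""
    # Collect every official associated with any school of concern.
    concern_officials = set()
    for school in schools_of_concern:
        concern_officials.update(officials_by_school.get(school, ()))

    flagged = {}
    for school, officials in officials_by_school.items():
        if school in schools_of_concern:
            continue  # only examine schools without identified concerns
        shared = concern_officials & set(officials)
        if shared:
            flagged[school] = sorted(shared)  # officials driving the link
    return flagged

# Example with invented data: School B shares an official with School A.
officials = {
    "School A": ["J. Doe", "K. Lee"],
    "School B": ["J. Doe", "M. Chen"],
    "School C": ["P. Patel"],
}
print(flag_linked_schools(officials, {"School A"}))
```

As the report notes, a flagged connection is only a lead for manual verification (for example, checking that the employment time frames overlap), not evidence of wrongdoing.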
Specifically, we assessed SEVP’s standard operating procedures, including its Adjudicator’s Manual, training materials, and other guidance to determine whether the certification and recertification controls described in these documents addressed the high-risk indicators ICE identified in its Risk Assessment Tool. We used this analysis to determine any potential noncompliance and fraud vulnerabilities in these controls. We also assessed SEVP’s controls in these areas against Standards for Internal Control in the Federal Government related to risk management, as well as principles of GAO’s A Framework for Managing Fraud Risks in Federal Programs. Additionally, we interviewed ICE officials in SEVP’s Certification Unit, which is responsible for adjudicating certification and recertification petitions, and the Compliance Unit, which is charged with monitoring schools for ongoing compliance with regulatory record-keeping and reporting requirements. To understand how ICE Homeland Security Investigations agents in the field offices work with officials in SEVP and the CTCEU to investigate school fraud, we conducted semistructured interviews with ICE agents in five field offices. We also interviewed ICE officials from SEVP’s Field Representative Unit as well as eight field representatives assigned to or located near the selected field offices to gather information on the representatives’ roles and activities in identifying and reporting potential school fraud. Further, we conducted covert testing of SEVP’s internal control activities related to the school certification process. Specifically, we submitted certification petitions and conducted other covert investigative work for three fictitious schools, each of which is subject to particular petition requirements.
For one of the fictitious schools, we tested SEVP certification controls that require schools to submit complete documentation by submitting a petition for the school that was missing several of the required documents. For our second school, we tested SEVP controls requiring schools to schedule and complete a site visit conducted by an SEVP field representative by submitting a completed petition for the accredited business school but avoiding the site visit and requesting that our paperwork move forward without it. For our third fictitious school, we submitted a petition and participated in a site visit with SEVP officials, using a rented space as a fictitious school location. We tested SEVP controls related to verifying petition documentation, and whether SEVP site-visit officials followed established procedures for the site visit. For all three petitions, we used publicly available information to construct our scenarios. We then documented any actions taken by SEVP on the submitted petitions, such as completeness checks, investigative steps, adjudication decisions, or requests to provide additional supporting documentation, among other things. Results for all three covert tests, while illustrative, cannot be generalized to the full population of petitions. For our third objective, to determine the extent to which ICE implemented fraud risk controls related to the eligibility and suitability of DSOs, we assessed guidance, training, and policies related to DSOs. Specifically, we reviewed regulations for DSO eligibility and SEVP guidance and standard operating procedures to determine whether supporting evidence provided to meet these requirements is being verified, including the Field Representative Unit’s Site Visit Standard Operating Procedure and the Certification Unit’s Adjudicator’s Manual. We evaluated the extent to which ICE’s practices for verifying eligibility were consistent with A Framework for Managing Fraud Risks in Federal Programs.
In addition, we reviewed the current and planned documentation and procedures on ICE’s existing and planned background checks, including the existing documentation for DSO vetting against relevant databases, initial requirements for planned biometric screening, and a draft policy document for the planned checks. To gather additional perspectives, we interviewed ICE officials in headquarters and selected field offices. We also interviewed selected DSOs in the field. We identified leading practices for project planning in the Project Management Institute’s A Guide to the Project Management Body of Knowledge. In addition, we reviewed the best practices associated with developing and maintaining a reliable, high-quality schedule in the GAO Schedule Assessment Guide. In assessing current training and oversight for DSOs, we examined guidance, policies, and procedures for the SEVP Field Representative Unit and CTCEU’s Project Campus Sentinel. We assessed the implementation of these controls against criteria in Standards for Internal Control in the Federal Government and A Framework for Managing Fraud Risks in Federal Programs. We reviewed DSO training materials, including the Online Training for DSOs and the Study in the States website. To determine how ICE identifies fraud risk associated with DSOs, the controls in place for addressing and mitigating these risks, and its efforts to identify potential vulnerabilities in its controls, we met with ICE officials at headquarters and five selected field offices, as discussed above. To identify the extent to which they have DSO training and antifraud responsibilities and requirements, we interviewed selected field representatives. Furthermore, we interviewed DSOs at 17 selected certified postsecondary schools on their roles and responsibilities and training resources. 
We selected these officials because, as of September 2017, they constituted a group of representatives from certified schools of various types and sizes and were located in proximity to our previously selected ICE field-office locations. As we did not select a probability sample of DSOs to interview, the information we obtained from these school officials cannot be generalized. These interviews provided us with the perspectives of DSOs on their roles and responsibilities, training, and fraud risks within the program. Further, we interviewed officials from NAFSA, an association of international educators, to discuss the organization’s views on fraud risks within SEVP, and we reviewed an extract from the NAFSA Advisor’s Manual of federal regulations affecting foreign students and scholars. Appendix III: Key Elements of the Fraud Risk-Assessment Process GAO’s A Framework for Managing Fraud Risks in Federal Programs states that, in planning the fraud risk assessment, effective managers tailor the assessment to the program by, among other things, identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process (see fig. 11). Appendix IV: Withdrawal or Denial of Certification or Recertification On the basis of our analysis of U.S. Immigration and Customs Enforcement (ICE) data, the Student and Exchange Visitor Program (SEVP) withdrew certification for approximately 2,600 schools during the period of fiscal years 2013 through 2017 (see fig. 12). The Enhanced Border Security and Visa Entry Reform Act of 2002 states that a material failure of an SEVP-certified school to comply with the record-keeping and reporting requirements to receive nonimmigrant students shall result in the suspension for at least 1 year, or termination, of the school’s approval to receive such students. 
Under federal regulation, SEVP can deny an SEVP-certified school’s recertification petition or, as a result of a subsequent out-of-cycle review, can withdraw certification, if the school or its programs are no longer eligible for certification. Denial of recertification or withdrawal on notice as a result of out-of-cycle review may be for any valid and substantive reason, including failure to comply with record-keeping and reporting requirements, willful issuance by a DSO of a false statement, or not operating as a legitimate institution, among other bases. According to SEVP officials, denials resulting from recertification reviews are often based on historical discrepancies in the DSO’s data entry, record-maintenance and Form I-20 issuance issues, or a negative change in the school’s operating status, such as a loss of state licensure. By regulation, an appeal of a notice of denial or withdrawal must be made within 15 days after service of the decision. Schools denied recertification must, according to regulations, wait at least 1 calendar year from the date of denial of recertification or withdrawal notice before being eligible to petition again for certification. If, upon the completion of an out-of-cycle review, SEVP determines that a school has failed to sustain eligibility or has failed to comply with the record-keeping, retention, reporting, or other requirements, SEVP will institute withdrawal proceedings by serving the school a notice of intent to withdraw SEVP certification. Failure of a school to respond to a notice of intent to withdraw within 30 days will result in an unappealable withdrawal of the school’s certification. At the conclusion of withdrawal proceedings, a school found to be ineligible for continued SEVP certification as a result of an out-of-cycle review will receive a notice of withdrawal. SEVP withdrew on notice approximately 211 certifications from fiscal years 2013 through 2017 (see fig. 12). 
If SEVP staff identify an issue during an out-of-cycle review that seems to be an error not warranting withdrawal, SEVP could issue a Remedial Action Plan to the school describing the issues it needs to address to retain its program eligibility. According to SEVP officials, once they have gathered enough evidence and made the decision to withdraw the school’s certification, SEVP can temporarily terminate the school’s ability to issue Forms I-20 to students. For example, SEVP officials explained that if a school that is otherwise in compliance lets its accreditation lapse, SEVP may revoke its authority to issue Forms I-20 until it renews its accreditation. Regarding automatic withdrawals, SEVP will serve a notice of intent to withdraw SEVP certification to the school 30 days prior to its certification expiration date if, up until that point, the school has failed to file a complete petition for recertification. From fiscal year 2013 through fiscal year 2017, SEVP automatically withdrew 1,763 certifications (see fig. 12). SEVP will not accept a petition for recertification and the school will be automatically withdrawn immediately if such school has effectively relinquished its SEVP certification by not petitioning for recertification, abandoning its petition, or not submitting a complete recertification package by the certification expiration date. Certified schools can also voluntarily withdraw their certification at any time. Appendix V: Comments from the Department of Homeland Security Appendix VI: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, Latesha Love (Assistant Director), Kathryn Bernet (Assistant Director), Nick Weeks (Analyst-in-Charge), David Aja, David Dornisch, Gabrielle Fagan, April Gamble, Gina Hoover, Lauren Kirkpatrick, Kirsten Lauber, Barbara Lewis, Sasan J.
“Jon” Najmi, Robin Nye, George Ogilvie, Ramon Rodriguez, Constance Satchell, Sabrina Streagle, Shana Wallace, and Helina Wong made key contributions to this report.
Why GAO Did This Study As of March 2018, more than 1.2 million foreign students in the United States were enrolled in 8,774 schools certified by SEVP. ICE is responsible for managing SEVP, which certifies schools to enroll foreign students. Various ICE offices have a role in preventing, detecting, and responding to potential fraud in the program. GAO was asked to review potential vulnerabilities to fraud in SEVP. GAO examined, among other things, the extent to which ICE (1) implemented controls to address fraud risks in the school certification and recertification processes and (2) implemented fraud risk controls related to DSO training. GAO analyzed ICE policies and documentation, including fraud risk guidance and procedures for school certification and recertification; analyzed 2013 through 2017 recertification data; and interviewed officials from five ICE field offices that GAO selected based on their experience investigating program fraud. GAO also interviewed officials from 17 selected schools located near these ICE field offices. This is a public version of a sensitive report that GAO issued in November 2018. Information that DHS deemed sensitive has been omitted. What GAO Found The Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement (ICE) has identified several fraud risks to the Student and Exchange Visitor Program (SEVP). As shown in the figure below, these include risks associated with school owners and designated school officials (DSO) who help ICE oversee students in the program. These fraud risks may occur as schools apply to become SEVP-certified, accept foreign students, and apply for recertification every 2 years. ICE has implemented controls to address fraud risks related to school certification, but long-standing delays in recertifying these schools exacerbate fraud risks. 
By statute and regulation, ICE must conduct recertification reviews every 2 years to ensure that schools continue to meet program requirements—an important fraud risk control. Between 2013 and 2017, ICE recertified about 12,900 schools. However, according to ICE officials, they have been unable to meet the 2-year time frame and, as of June 2018, had 3,281 recertification petitions waiting for review. To help manage its queue, ICE has lengthened the period between recertification reviews by extending schools' certification expiration dates by 180 days, which is inconsistent with its regulation and may allow fraudulent schools to operate longer without detection. Although ICE is taking steps to increase resources for recertification, it is unclear whether these steps will ensure recertification is conducted consistently with ICE regulations. ICE relies on DSOs to, among other things, update and maintain foreign-student data in ICE's foreign-student information system and report suspected fraud to ICE. However, ICE does not provide DSOs with training that addresses fraud risks to the program. In June 2018, ICE officials stated that they plan to develop this fraud training for DSOs, but do not have documented plans or timelines for when it would be completed. By developing these plans, the agency would be better positioned to ensure that DSOs receive the training needed to address potential fraud in the program. What GAO Recommends GAO is making seven recommendations, including that ICE (1) notify schools 180 days prior to the 2-year certification expiration date, as required, and evaluate whether additional resources for recertification are needed, and (2) develop a plan to implement fraud-specific training for DSOs. ICE concurred with all of GAO's recommendations.
Background Brief History of Military Commissions DOD describes military commissions as a form of military tribunal convened to try individuals for unlawful conduct associated with war. According to DOD, military commissions—as they came to be known in the 19th century—were preceded by military tribunals during previous conflicts, beginning with the Revolutionary War. After the September 11, 2001, terrorist attacks on the United States, the President issued an order directing the Secretary of Defense to establish commissions to try certain individuals for violations of the laws of war and other offenses. In 2006, the United States Supreme Court invalidated the military commissions established under the President’s order. In response to the court’s ruling, Congress passed the Military Commissions Act of 2006. In 2009, the President ordered a review of military commissions and detention at Naval Station Guantanamo Bay (NSGB), which led to a halt in all pending military commissions’ proceedings. Later in 2009, Congress passed the Military Commissions Act of 2009, which replaced the Military Commissions Act of 2006 and led to the reinstatement of criminal proceedings against certain detainees. Held at NSGB, Cuba, current commissions’ proceedings involve alleged terrorists accused of engaging in attacks against the United States, such as the USS Cole attack, in which 17 people were killed, and the September 11, 2001, attack, in which 2,976 people were killed. Military Commissions’ Legal Framework The Military Commissions Act of 2009 establishes procedures governing the use of military commissions to try alien unprivileged enemy belligerents engaged in hostilities against the United States for violations of the law of war and other offenses triable by military commission.
The Act defines an alien unprivileged enemy belligerent as a person who has engaged in hostilities against the United States or its coalition partners; has purposefully and materially supported hostilities against the United States or its coalition partners; or was a part of al Qaeda at the time of the alleged offense. While the Military Commissions Act of 2009 also provides protections for the accused individuals undergoing trial (the accused) similar to rights afforded to defendants in a federal criminal trial, the Act is more closely aligned with military court-martial practice. For example, the Act states that procedures for military commissions are based upon the procedures for trial by general courts-martial under the Uniform Code of Military Justice, Chapter 47 of the U.S. Code, except for certain provisions such as provisions related to speedy trial and pretrial investigations. Article 36 of the Uniform Code of Military Justice states that the President may prescribe regulations for pretrial, trial, and post-trial procedures for cases triable in courts-martial and military commissions, which shall, so far as the President considers practicable, apply the principles of law and the rules of evidence generally recognized in the trial of criminal cases in the United States district courts, but which may not be contrary to other provisions of the Uniform Code of Military Justice. Article 36 also states that all rules and regulations prescribed by the President or the Secretary of Defense as his designee shall be uniform insofar as practicable. In addition to relevant law, commissions’ proceedings are conducted in accordance with certain DOD manuals and regulations and rulings by military judges who preside over the proceedings. Roles and Responsibilities within DOD for Military Commissions There are a number of DOD organizations responsible for conducting the commissions’ proceedings included in the scope of our review.
Each has separate functions and responsibilities, as shown in figure 1. The Convening Authority is responsible for the overall management of the commissions’ process and is empowered to convene the commissions, refer charges to trial, negotiate pre-trial agreements, review records of trial, and maintain the public website, among other responsibilities. The Office of the Chief Prosecutor includes attorneys, paralegals, and support staff from each branch of the United States Armed Forces, DOD, and attorneys from the Department of Justice. These attorneys coordinate investigative efforts, prepare charges, and represent the United States government in commissions’ proceedings. Located in the Office of the Chief Prosecutor, DOD’s Victim and Witness Assistance Program provides services to approximately 2,000 victims and their family members. The Military Commissions Defense Organization maintains a structure separate from the structure of the Office of Military Commissions (OMC), to help ensure fairness and independence of the commissions’ legal system. Defense attorneys representing the accused can be military or civilian attorneys employed by DOD, civilian attorneys retained by the accused at their own expense, or both. These attorneys are appointed by the Chief Defense Counsel to represent the accused. In capital cases, i.e., those cases in which the United States government is seeking the death penalty for the accused, the Military Commissions’ Defense Organization will also appoint a “learned counsel”—that is, an attorney with specialized training and experience in trials involving the death penalty. The Military Commissions’ Trial Judiciary consists of military judges nominated by the Judge Advocate Generals of the military departments to preside over trials. The Trial Judiciary also includes the judges’ support staff that, among other responsibilities, manages court documents—such as legal motions and judges’ rulings—that are part of the commissions’ process.
According to OMC officials, the Trial Judiciary has also established certain practices—followed by OMC—for the review of these documents before they are posted on OMC’s public website. NSGB Expeditionary Legal Complex The Expeditionary Legal Complex at NSGB was completed in January 2008 and consists of various facilities, including a courtroom in which classified and unclassified proceedings may be conducted, office space and equipment for court administration employees as well as the prosecution and defense legal teams, and expeditionary lodging capable of housing up to 300 personnel, according to an OMC official. Key elements of this complex are highlighted below. The Courtroom The courtroom, shown in figure 2, is a multi-defendant courtroom capable of trying up to six defendants jointly. The courtroom can accommodate a case with the possibility of the death penalty and has unique features that permit the use of highly classified information at the Top Secret/Sensitive Compartmented Information level or below during closed proceedings. The courtroom within the Expeditionary Legal Complex has a viewing gallery (gallery), as shown in figure 3, where selected members of the public may view commissions’ proceedings through soundproof glass. This is because the gallery was designed to permit public viewing of the proceedings even in the event that classified information is inadvertently disclosed. Specifically, according to a DOD official, the gallery has video display monitors that play a closed-circuit television feed of the proceedings, on a 40-second delay between live action in the courtroom and the video transmitted to the gallery. This system provides United States government officials with time to prevent any inadvertent disclosure of classified information from being disseminated to the public. If victims or family members are present in the gallery, they enter last and are seated nearest to the exit.
A curtain is available to separate the victims and family members from other members of the public, if they desire privacy. Commissions’ proceedings that are open to the public are transmitted by closed-circuit television to the media operations center located outside of, but near, the Expeditionary Legal Complex courtroom. The media operations center, shown in figure 4, also includes telephone and computer support, which enables up to 60 members of the media to simultaneously watch the proceedings, with the 40-second delay to prevent the inadvertent disclosure of classified information, while they work. The center also has a room for conducting press briefings. DOD Uses a Variety of Methods to Facilitate Public Access to Commissions’ Proceedings DOD has taken various steps to facilitate public access to commissions’ proceedings, using four primary methods to do so. Rule 806 of DOD’s Manual for Military Commissions specifies that, except in certain instances, such as to protect national security, military commissions shall be publicly held. In accordance with this guidance, DOD facilitates public access to commissions’ proceedings by (1) communicating directly with victims and their family members about the status of scheduled hearings and other administrative matters; (2) enabling selected members of the public to view proceedings in-person at NSGB; (3) providing CCTV sites within the United States for viewing proceedings remotely; and (4) making information, such as court documents that will be used during proceedings, available to the public on the commissions’ website. In figure 5, we summarize key DOD efforts to facilitate public access to commissions’ proceedings, followed by a description of each method.
Direct Communication With Victims and Their Family Members According to officials, DOD established its Victim and Witness Assistance Program in June 2004 to provide support services to the approximately 2,000 victims and their family members who opted to participate in the program. The program, which falls within the Office of the Chief Prosecutor, provides updates to victims and their family members on pending military commission cases, notifies them of scheduled hearings, and assists with the logistics associated with viewing proceedings at NSGB or a CCTV site. In our survey of victims and family members, we asked about their perspectives on communication originating from the prosecution team and found that a majority of those who responded (72 percent) were satisfied or very satisfied with DOD’s efforts. In-person Viewing of Proceedings at NSGB Due to space limitations, DOD is currently able to allot 52 seats for selected members of the public to view “open” commissions’ proceedings in-person from the courtroom gallery on NSGB. DOD is responsible for selecting these individuals, who generally fall into three categories: (1) victims and their family members, (2) non-government stakeholders, and (3) the general public. DOD provides air transportation to and from NSGB for all individuals approved to view the proceedings in-person. Further details about DOD’s selection process and seating allocation, by category, are provided below. Victims and their family members: Per DOD policy, up to 5 victims or their family members and the persons accompanying them to provide support are allotted seating in the courtroom gallery. There are also seats reserved for a grief counselor and an escort from the Victim and Witness Assistance Program, for a total of 12 seats.
Due to the limited number of total seats and lodging currently available, DOD asks the approximately 1,140 victims and family members who have expressed an interest in attending proceedings to identify the proceedings they would prefer to attend. DOD then uses these preferences to select victims and family members to travel to NSGB for each week that proceedings are held. According to DOD officials, this procedure works better than the lottery system that the Victim and Witness Assistance Program previously used because it provides victims and their family members more flexibility with their travel dates. Non-government stakeholders: This category includes individuals who represent 25 non-governmental organizations pre-approved by DOD to view proceedings in-person, as well as members of the media. DOD currently allots 12 seats in the courtroom gallery to representatives of approved non-governmental and civic organizations and 10 seats to the media. All individuals within this category who are approved for travel to NSGB are required to sign a list of “ground rules” developed by DOD and to be escorted by military personnel while on the base. General public: The remaining 18 seats are filled on a “first come, first served” basis by members of the public who live on NSGB or who have been cleared by the Navy to visit the base. Remote Viewing of Proceedings at CCTV Sites In 2012, DOD established five CCTV sites on the East Coast of the United States where individuals may view commissions’ proceedings remotely. Specifically, four CCTV sites are reserved for victims and their family members, and are located at Fort Hamilton, New York; Fort Devens, Massachusetts; Joint Base Dix/McGuire/Lakehurst, New Jersey; and Naval Station Norfolk, Virginia. The fifth CCTV site is located at Fort Meade, Maryland, and is open to victims and their family members, non-government stakeholders, and the general public.
According to officials, at these sites, large video display monitors show the same video feed that appears on monitors in the viewing gallery at NSGB, with the same 40-second delay to prevent the inadvertent disclosure of classified information. This feed is delivered to CCTV sites by both fiber optic cable and satellite transmission. According to court documents, these sites are the result of DOD acknowledging both the importance of the public’s physical access to proceedings held at NSGB and the general public’s limited ability to attend them. According to our analysis of available data from DOD on attendance at NSGB and the CCTV sites, there have been a total of 2,304 recorded visitors, beginning in 2011. It is important to note that DOD did not record the number of visitors from the general public at NSGB until approximately September 2018. Also, according to officials, DOD did not begin recording visitors from the general public at the Fort Meade CCTV site until September 2018, and did not record data on non-government stakeholder visitors to the Fort Meade CCTV site from 2012 to 2015. However, our review of available data indicates that of the recorded visitors, the majority—64 percent—were non-government stakeholders, while victims and family members made up 34 percent of attendees, and the general public made up 2 percent. Table 1 summarizes available DOD data on attendance at NSGB and CCTV sites, from November 2011 to September 2018. Providing Information Through the Commissions’ Public Website According to a DOD official, DOD established the Office of Military Commissions’ website as an online resource for the public in March 2005 to provide a variety of information about OMC’s organization, its facilities and services on NSGB, active and inactive cases, and court documents approved for public dissemination, among other things.
Court documents may include legal motions (motions) filed by the prosecution and defense, docket-related documents (e.g., documents that list motions to be argued during a specific hearing), judges’ rulings on motions, and transcripts of hearings. According to officials, DOD updated the website in 2011 and 2014, which government and non-government stakeholders told us made it easier to use and provided additional information, thereby facilitating public access to information about the commissions’ proceedings. In addition, DOD officials told us the website has the only official, public calendar of scheduled hearings. The Public Faces a Variety of Challenges Accessing or Obtaining Information on Commissions’ Proceedings Public Access Challenges Created by Factors Outside DOD’s Control The public faces a number of challenges in gaining access to commissions’ proceedings or obtaining information about them. These challenges can be categorized into two groups: (1) those that DOD has limited ability to address, and (2) those that DOD has greater ability to address. During our review, we identified several aspects of commissions’ proceedings that constrain the extent of public access that DOD is able to provide. Specifically, DOD has limited ability to address these challenges because they result, in part, from factors that are not under the department’s control. As confirmed by DOD officials, these challenges are (1) the location of proceedings, (2) the prevalence of classified information associated with them, and (3) the length of time awaiting trial—each of which is discussed in more detail below. Public Access Challenges Created by Factors Within DOD’s Control We also identified other public access challenges that DOD has a greater ability to address because the challenges result largely from factors under DOD’s control.
As confirmed by DOD officials, these challenges to public access of military commissions’ proceedings involve limitations related to in-person viewing of proceedings at NSGB, remote viewing of proceedings, and the timeliness with which key information is posted on the commissions’ website. In-Person Viewing of Commissions’ Proceedings at NSGB DOD policy and processes, the size of the gallery DOD built, and the limited logistical support DOD provides to non-government stakeholders substantially constrain the public’s ability to view commissions’ proceedings at NSGB. As discussed previously, DOD policy and the size of the courtroom gallery on NSGB currently limit in-person attendance to a total of 52 seats for each week of hearings—12 of which are reserved for victims or their family members, as well as the support people and DOD escorts accompanying them. The relatively limited number of seats means that—in the 10 years since victims and their family members were permitted to travel to NSGB—according to a DOD official, fewer than half have been selected to do so. According to our review of DOD data on total attendance at NSGB since 2011, victims and family members comprise 21 percent of attendees. The limited weekly attendance for all visitors to commissions’ proceedings is in contrast to a United States district court, which conducts federal criminal trials and can generally accommodate a new set of attendees each day, if those attendees are in the local area or can travel to the courthouse. At NSGB, by contrast, as discussed previously, DOD provides air transportation to and from the base, the department must approve all individuals who fly to NSGB to view the proceedings in-person, and the seats available to the general public in the gallery are filled on a “first come, first served” basis by members of the public who live on NSGB or who have been cleared by the Navy to visit the base. These constraints do not exist at federal courthouses.
Thus, the portion of the general public that can attend commissions’ proceedings is substantially smaller than the portion of the public that can attend federal criminal trials. In addition, according to non-government stakeholders, DOD provides limited logistical support for their work at NSGB, which constrains their ability to provide the public with access to information about the commissions’ proceedings. Based on discussions with non-government stakeholders, the logistics of traveling to NSGB and the inherent limitations of working in a challenging environment made it difficult for some of them to view proceedings in-person as frequently as they believe is needed. For example, one national security policy expert told us that they “cannot afford the time required to attend another hearing.” This is because “…hearings are frequently cancelled or closed to the public,” and as a result, attendees “…typically spend at least a week there to see maybe two days of hearings.” We also spoke with a legal expert who explained that the lack of reliable internet and phone service while on NSGB presented challenges in maintaining contact with the individual’s law practice, thus limiting their ability to travel to NSGB and view proceedings in-person. Similarly, a member of the media told us that the conditions of reporting on the commissions’ proceedings are “an extreme hindrance.” This member of the media noted that while at NSGB, visitors have access to limited and unreliable internet and telephone service.
This has made covering the trials “extremely difficult,” according to this freelance journalist, because the cost, lack of resources, and unreliable schedule make it increasingly difficult to take a week away from reporting on other events “in order to cover only a couple of days of open hearings.” For many of the non-government stakeholders included in our review, their role as observers, scholars, or reporters on the commissions’ proceedings is not their full-time job. Instead, they do so as one part of their professional responsibilities or as volunteers. In this context, they generally told us that the time required to travel to NSGB due to infrequent flights, the difficulty of working there, and the frequent closings or cancellations of hearings discourage non-government oversight and reporting on the proceedings. This, in turn, reduces the amount and quality of the information that they can provide to the public. Also, while selected victims and family members and non-government stakeholders are able to view proceedings in-person on NSGB, the vast majority of the general public cannot, due to DOD policy. The exceptions are—according to a DOD official—civilians traveling to NSGB on official business and those who have a sponsor living at NSGB. Remote Viewing of Commissions’ Proceedings DOD’s decision to locate all CCTV sites on military bases on the East Coast of the United States has resulted in several challenges that limit the current usefulness of CCTV sites in facilitating public access to commissions’ proceedings. First, all five CCTV sites are concentrated within a 600-mile span on the East Coast of the United States. However, victims and their family members—the primary intended users of these sites—are located throughout the world or are concentrated in areas of the United States that are a significant distance from one of these five locations.
According to our survey of victims and their family members, a majority of those who responded (71 percent) said that it was important to have the location of the hearings close to where they live. For example, the victim and family member population served by DOD’s Victim and Witness Assistance Program has a significant presence in California and Florida, as well as smaller populations in eight other countries. Further, survey respondents from Texas, Florida, and the United Kingdom noted that it was impractical for them to travel to the current CCTV sites, especially considering the unpredictable hearing schedule. Figure 7 shows the location of the CCTV sites along with the dispersion of victims and their family members served by DOD’s Victim and Witness Assistance Program. The logistics of traveling to the CCTV site at Fort Meade, Maryland—the only location open to non-government stakeholders and the general public—is also a factor that limits the public’s access to information about commissions’ proceedings. For example, non-government stakeholders who observe the commissions’ proceedings and were included in our review explained that the majority of their organizations are located in cities that either do not have a CCTV site or are not near a site to which they have access. Examples include Los Angeles, California, and New York City, New York. Non-government stakeholders also cited challenges associated with the amount of time and travel it takes to get to Fort Meade, particularly when hearings are cancelled or closed with little or no notification. Further, although the CCTV site at Fort Meade is open to the general public, DOD officials acknowledged that there is no practical way for the department to advertise the opportunity to view proceedings at the CCTV site on Fort Meade.
In addition to the challenges of traveling to CCTV sites, some victims and family members and non-governmental stakeholders noted challenges regarding their ability to access the military bases that host these sites. For example, some victims and family members told us that they or their relatives had been denied access to certain CCTV sites because, according to DOD, they did not meet the department’s definition of a victim or family member. Further, non-government stakeholders who are foreign nationals are required to be escorted while on Fort Meade, per DOD policy. However, DOD officials told us that Fort Meade does not always have the personnel necessary to escort these individuals, which could preclude certain non-government stakeholders from being able to access the site. Further, a senior DOD official acknowledged that by locating CCTV sites on military bases, DOD is running the risk that—in certain scenarios—no member of the public would be able to access the sites. This is because, in the event of a threat to base security, a base may be closed to civilians who do not live or work on the installation. Timeliness of Information Posted to the Commissions’ Website As discussed previously, OMC’s website is a key enabler of public access to information about commissions’ proceedings because it provides the public with a way to retrieve unclassified court documents related to the commissions’ proceedings, such as legal motions and transcripts, and a schedule of the proceedings’ hearings. According to DOD’s Regulation for Trial by Military Commission (Regulation), court documents are provided by OMC to an inter-agency review team, which examines them and removes any classified or protected information that is identified. Once this examination is completed, the inter-agency review team returns the document to OMC to be posted to its website.
DOD’s Regulation sets a timeliness standard for reviewing and posting court documents—noting that the entire process generally should take no longer than 15 business days. However, based on our analysis of available data, we determined that DOD has generally not met this standard for the timely posting of documents, which substantially limits public access to information about proceedings. Specifically, we obtained and analyzed data on when court documents were filed with OMC and the date on which the inter-agency review team returned them to OMC for posting and found that from October 2011 to October 2018, DOD frequently missed the timeliness standard laid out in its Regulation. For example, since 2011, we found that 48 percent of court documents reviewed by the inter-agency review team were returned to OMC after the 15-business-day standard. Further, we found that—since 2015—DOD missed its timeliness standard with greater frequency. For example, approximately 7 percent of documents reviewed in 2015 were returned to OMC after the 15-business-day standard, whereas in 2018 more than 50 percent of the documents submitted for review missed the timeliness standard. Our analysis of data from the inter-agency review team is summarized in table 2. In addition to the data provided by the inter-agency review team, we independently collected and analyzed data from the commissions’ website on the filing and posting dates for more than 11,000 court documents filed between June 19 and November 19, 2018. Our analyses of these data further demonstrate DOD’s challenges with the timely posting of court documents. For only one category of court documents—unofficial, unauthenticated transcripts from open hearings—our analysis of data collected from the website from June to November 2018 shows that these transcripts were posted in a timely manner.
For the remainder, over a five-month period, nearly 1,300 court documents either remained unposted or were posted to OMC’s website after the 15-business-day standard. Furthermore, the median number of business days by which these documents missed the 15-business-day standard ranged from 90 to 103.5 days—that is, from almost four months to more than five months past DOD’s timeliness standard. Table 3 summarizes our analysis for the five cases in the scope of our review. We reviewed relevant case studies in federal criminal proceedings involving both terrorism charges and certain matters related to commissions’ cases, and identified instances in which federal judges adopted processes for review and release of classified documents that are similar to processes specified in DOD’s Regulation. However, we also identified differences, such as shorter timeframes in the federal court system for the government’s review and public release of documents with the potential for classified information. For example, in one case, court security experts had 48 hours—and in another, 72 hours—to complete this process. According to various non-government stakeholders, DOD’s inability to post court documents in a timely manner has negatively impacted their ability to perform their role in facilitating public access to information about commissions’ proceedings. For example, according to our analysis, DOD posted legal motions filed by the prosecution and defense teams a median of 97 business days past DOD’s timeliness standard; military judges’ rulings were posted a median of 69 days past DOD’s timeliness standard. One member of the media explained that DOD’s delayed posting of court documents limits their access to information needed to justify travel to NSGB. They further explained that not being able to travel to NSGB impedes their ability to conduct interviews and research about the proceedings, which are needed to inform the general public.
Similarly, other stakeholders told us that they believe the delays in posting docket-related documents have made it difficult for them to assess the proceedings and communicate their assessments to the public. According to our analysis, DOD posted these documents a median of 99 business days past DOD’s timeliness standard. Further, for hearings held between June 19, 2018 and November 19, 2018, we found that of the 74 docket-related documents filed with the court, three were posted in advance of the hearing. We also found that the hearing schedule posted on the commissions’ website—the only official, publicly-accessible schedule of proceedings, according to DOD officials—frequently is not updated in a timely manner to reflect schedule changes. According to DOD officials, this is because information on schedule changes, like court documents, must be examined by the inter-agency review team and therefore is often not provided to the webmaster in time for updates. As a result, several non-governmental stakeholders told us that it is difficult to justify the time and costs of traveling to Fort Meade, Maryland—the only CCTV site open to them—given the risk of arriving only to learn that the scheduled hearing has been canceled or closed to the public. We observed the effect of these cancellations on public access firsthand during our review. For example, we attempted to attend hearings at Fort Meade on various occasions. On several of those occasions, the hearing was canceled. While we learned this information directly from our DOD contact, none of these changes were reflected on the website’s schedule. Also, when we asked for updates on scheduled hearings, multiple DOD officials told us that we should not bother checking the website’s hearing schedule. Instead, they recommended that we check the Twitter feed of a certain reporter who spends a lot of time at NSGB and routinely provides updates on hearings.
In addition, according to DOD officials, victims and family members who attempt to access the website from certain locations outside of the United States are sometimes unable to do so. OMC officials are aware of this issue, and an OMC information technology expert told us that while OMC has tried to fix it several times, restricting access from certain locations outside of the United States is based on security for the website. DOD officials acknowledged that they are regularly not meeting their timeliness standard for posting court documents to OMC’s website—something that they largely attribute to the volume of documents submitted and the government-wide security classification review process to which the documents are subjected. Specifically, in this process for the military commissions’ proceedings, there are two DOD and two non-DOD intelligence agencies with the chief responsibility for conducting the security classification review of court documents filed for commissions’ proceedings. The Defense Intelligence Agency (DIA) is responsible for coordinating the process, and all four agencies may be required to review a document depending on the type of information it contains. In accordance with DOD’s Regulation and the interests of national security, a review of certain documents submitted must be conducted to confirm that such filings are in publicly releasable form.
Due to the multiple levels of review and depending on the amount and complexity of classified information involved, intelligence agency officials told us that—in the course of the inter-agency review team’s efforts—it can take anywhere from one day to several weeks to review a single document. These officials also told us that it is very difficult to hire personnel with the requisite expertise and experience to serve as reviewers, given that classified information that may be in these documents can be complex, esoteric, and decades old. Thus, it is unlikely that a significant number of new reviewers could be hired to help expedite the review team’s processes. According to our review of available information from intelligence agency officials, the agencies have a relatively small number of personnel reviewing large numbers of documents. Further, those personnel responsible for reviewing OMC-related documents spend only a portion of their time reviewing court documents for the purpose of posting them on the commissions’ website. This is because inter-agency review team personnel are also responsible for reviewing documents not released on the commissions’ website. According to a senior official from the review team, it has been tasked with competing requests for document reviews that have impacted the team’s ability to review court documents for posting on OMC’s website. For example, the official explained that—from May 2017 to February 2018—the review team completed seven of these large-scale, time-sensitive tasks, involving about 31,400 pages of document review, according to the official’s estimate. Table 4 summarizes available information about the agencies’ review of court documents to be posted on the website.
Based on our discussions with officials from DOD and the inter-agency review team, factors such as the complexity of documents, the relative scarcity of qualified reviewers, and other document review tasks unrelated to web posting are somewhat out of DOD’s control. For example, as a senior official from the inter-agency review team explained, the complexity of court documents is the responsibility of the prosecution and defense teams that write them, and the other document review tasks are often driven by the schedule of individual cases or military judges’ rulings. However, there is a key factor driving the timeliness challenge that may be in the department’s control. In our discussions, DOD officials attributed document posting delays to a policy decision by the department to subject the extremely large volume of court documents filed—including schedule changes—to the same type of security review. A Number of Options Exist to Potentially Address Public Access Challenges, However Each Option Has Tradeoffs That Have Not Been Assessed by DOD Options Exist to Address Challenges That Are Well Supported by Victims, Their Family Members, and Non-Government Stakeholders Through our review of agency documentation and discussions with DOD officials, victims and family members, and non-government stakeholders, we identified a variety of potential options for expanding access to commissions’ proceedings. We have organized these options into three categories, as shown in table 5. The majority of both victims and family members who responded to our survey and non-government stakeholders who responded to our questionnaire support most potential options for expansion of public access. Specifically, the majority of victims and family members who responded to our survey supported six of the seven potential options about which we asked. The majority of non-government stakeholders supported seven of the ten potential options.
There was general agreement between these two groups on the potential options they supported. This information is summarized in figures 8 and 9. Options exist that may potentially help DOD address the challenges the public faces attending hearings at NSGB. Specifically, a physical expansion of the courtroom viewing gallery that increases the number of seats open to the public, along with a change in DOD policy to allow more visitors, would enable NSGB to accommodate more people wishing to view proceedings in-person. An OMC official responsible for management of the office’s infrastructure at NSGB acknowledged that an expansion of the NSGB gallery and the number of people it can accommodate is theoretically possible, potentially in the context of an ongoing project to renovate the complex of buildings that contains the courtroom, gallery, and other facilities that support the commissions’ proceedings. DOD officials expressed a number of concerns with this option. First, an OMC official cautioned that expanding the gallery’s capacity would likely increase the cost of the current $14 million expansion project, though the official was unable to estimate by how much. Second, an increase in the number of visitors would require a commensurate increase in logistical support—for example, more lodging and utilities—which an OMC official said may not be supported by the current level of resources. Third, according to an OMC official, an expansion of the gallery would require it to be temporarily closed, thus delaying commissions’ proceedings. This is because, the official explained, the current courtroom is the only venue at NSGB that can accommodate a multi-defendant trial and any highly classified evidence required for the proceedings. Further, according to a senior DOD official, renovation of the gallery would require it to be re-accredited before DOD could resume discussing highly classified evidence in the adjoining courtroom.
This could result in a substantial increase in both the period of time in which the gallery and courtroom are unavailable and the cost of the renovation. In our review of DOD documents and discussions with department officials, we learned that there may be ways to address some of these concerns. For example, DOD is planning to accommodate at least some additional visitors to NSGB. According to OMC documentation, it is planning to support about 350 total attendees per week of hearings during the trial phase of Khalid Shaikh Mohammad et al. (2). This is an increase of about 260 percent, compared to the average total number of visitors for a week of pre-trial hearings in this case. Remote Viewing of Proceedings Based on our review of relevant court documents and discussions with DOD officials and stakeholders, we identified two broad categories of potential options that may help DOD address the public access challenges associated with CCTV sites: (1) adding or changing the locations of CCTV sites and (2) broadcasting video from NSGB using other technologies, such as the internet. CCTV sites: Additional CCTV sites—more evenly distributed across the country—could potentially be established for the general public or for use solely by victims and their family members. DOD officials acknowledged that most military bases have the requisite technology and physical infrastructure to host a CCTV site and that expanding the number of CCTV sites would require a relatively small outlay of resources. Further, they also acknowledged that there may be opportunities to establish CCTV sites at locations other than military bases, such as federal courthouses, which may help address the public access challenges posed by bases’ security procedures, such as the difficulties foreign nationals face when serving as observers or reporters.
DOD officials noted, however, that expanding CCTV sites would require approval by the Secretary of Defense or a military judge, because—according to DOD's Manual for Military Commissions—the broadcasting of proceedings in the courtroom, to include video and audio recording or television broadcasting, shall not be permitted. The military judge, however, may permit contemporaneous closed-circuit video or audio transmission. For example, the prosecution requested this permission in 2012, and the military judge authorized the transmission of all open proceedings, by CCTV, to several sites. Similarly, based on our review of selected case studies of terrorism trials in U.S. federal court, there are previous examples of federal terrorism trials using CCTV sites for the benefit of the public, victims, and family members. Further, DOD officials were hesitant to support such an expansion based on their perception that relatively few people have utilized the current CCTV sites, but they were unable to provide complete or fully accurate and reliable data on attendance by certain groups, such as the media and general public. In addition, according to DOD officials, the demand for public access during the current cases' decade-long pre-trial phase likely does not represent the magnitude of future public interest, which DOD officials believe will increase significantly once the trial phase begins. Television and internet broadcast: Broadcasting video of hearings via other technologies, such as television or the internet, would increase opportunities for the general public to view commissions' proceedings remotely. An OMC information technology expert told us that it would be relatively simple and inexpensive to transmit the existing video feed from the proceedings at NSGB either to television stations, such as C-SPAN, or through the internet, using the same cyber security protocols used for CCTV sites.
Further, internet broadcasts could either be password-protected so that they could be viewed only by a specific group, such as victims and family members, or they could be made available to the general public. This option elicited mixed views from the experts and officials we interviewed. According to Rule 806(c) of the Manual for Military Commissions, television or internet broadcasting would require express authorization by the Secretary of Defense—and, as previously noted—this rule is consistent with federal criminal practice, which prohibits the broadcasting of judicial proceedings from the courtroom. Legal experts whom we contacted had varying perspectives on this issue. For example, officials from the Office of the Chief Prosecutor had concerns that parallel those of the Judicial Conference of the United States—the national policy-making body for the federal courts—about the negative impact of cameras in the courtroom on jurors and witnesses, among other reasons. Specifically, the Judicial Conference cited concerns such as publicity that could threaten jurors' privacy and witnesses who could be subjected to distractions, intrusions, or influences. In contrast, a senior official in the Military Commissions Defense Organization generally supported television and internet broadcasting of proceedings. This perspective was shared by the American Bar Association, which stated that it would support adoption of such an initiative in the future to protect the integrity of the military commissions' process and better educate the public about these proceedings. DOD officials with whom we spoke also expressed mixed perspectives regarding internet or television broadcast of proceedings from NSGB. On one hand, according to an OMC information technology expert, broadcasting is technologically possible and could use certain existing security procedures.
Specifically, because safeguarding classified information is critical, any television or internet broadcast of proceedings would use the same video feed currently transmitted to the NSGB gallery and CCTV sites, and thus would use the same safeguards provided by the 40-second delay previously discussed. Further, DOD information technology experts suggested that, by using a limited internet broadcast, DOD could create temporary viewing sites almost anywhere they are needed, for instance, in a hotel conference room. On the other hand, senior DOD officials expressed several concerns regarding the security implications of broadcasting video outside of the current CCTV framework. For example, they highlighted the potential for adversaries of the United States to copy and alter the video feed from an unsecured broadcast—thus creating a new and inaccurate record of proceedings that could be used as propaganda. Further, while internet broadcasts could be password-protected for victims and their family members only, DOD officials were concerned that the size of the group may make it more likely that the password would be shared with people outside of the group. DOD's technology experts suggested that these concerns could potentially be addressed, at least in part, by using security procedures already in place at the NSGB gallery and CCTV sites. Specifically, at the temporary viewing sites they proposed, DOD officials would not allow recording of the video feed, following the rules currently in place at NSGB and CCTV sites. However, regarding this proposal, senior DOD officials conveyed force protection concerns for government personnel and any attendees. For example, an official noted that there have been investigations into allegations that OMC personnel have been surveilled by unknown persons, both in the United States and overseas, when on official travel.
Also, in a relatively unsecure civilian location like a hotel, DOD would not be able to enforce the rules of the commissions. For instance, according to this official, if someone wanted to attend a temporary viewing site but refused to relinquish their electronic recording devices, per rules currently in place at NSGB and CCTV sites, DOD's only recourse would be to call local law enforcement authorities.

Timeliness of Posting Information to the Commissions' Website

DOD's Regulation suggests two possible approaches the department can take when reviewing court documents prior to posting them on the website, and one of these could help the department post court documents in a timelier manner. The first approach would allow an OMC security classification expert to independently determine whether a court document may contain classified information. If it is determined that the document does not contain classified information, the document is to be posted within 1 business day of its filing. In contrast, according to OMC officials, the second approach provided by the Regulation has, since at least 2014, been interpreted as directing that every document filed must undergo a security review before it is posted to the OMC website. As discussed previously, DOD officials told us that they attributed the department's document posting delays to DOD's policy decision to subject the extremely large volume of court documents filed, including schedule changes, to the same type of security review. DOD's practice has resulted in nearly every document filed with the commission undergoing a security review before it could be posted to the OMC website. However, at the end of our review, a military judge's ruling on a pre-trial motion in the case of U.S. v. Khalid Shaikh Mohammad et al. (2) is expected to substantially change DOD's previous practice of submitting every document for security review prior to posting to the OMC website.
Specifically, in December 2018, a military judge found that DOD's practice, based on the interpretation of the relevant provisions of the Regulation by the previously assigned military judge and the office of the convening authority, resulted in all pleadings—classified or not—undergoing a more laborious classification review intended for classified (or arguably classified) filings. As a result, the military judge found that compliance with DOD's timeliness standard has, since at least 2017, been the exception rather than the rule. In this ruling, the military judge ordered that, commencing on January 16, 2019, the OMC Trial Judiciary's Chief Clerk will instead send all filings that do not require a classification security review directly to the OMC Webmaster for posting within 1 business day of filing. Further, per the Regulation, filings requiring a classification security review will be sent to the inter-agency review team to coordinate the classification review. Implementation of the military judge's ruling is expected to reduce the volume of documents submitted for security classification review and thus may improve the timeliness of posting information to OMC's website.

Each Potential Option for Expanding Public Access Has Tradeoffs That DOD Has Not Yet Assessed

Current law and DOD guidance establish a framework in which DOD and military judges are to weigh the interests of public access to commissions' proceedings against other considerations, including national security. For example, paralleling the statutory requirement for public access found in the Military Commissions Act of 2009, DOD's Regulation for Trial by Military Commission states that its goal is to make commissions' proceedings accessible to the public to the maximum extent possible, consistent with the interests of national security, the rights of the accused, and other interests protected by law.
Standards for Internal Control in the Federal Government state that agencies should identify and analyze risks related to achieving their defined goals. These standards also maintain that, based on its assessment of risks, an agency should design specific actions in response to those risks. However, DOD has not yet assessed the tradeoffs between maintaining its current approach in pursuit of its goal of maximizing public access to the extent possible and expanding public access by implementing other options. This is because the department has not yet identified such options for expanding public access or analyzed the risks associated with them. For example, we spoke to senior DOD officials who expressed strong support for public access to commissions' proceedings. While they were not necessarily opposed to the concept of expanding public access, they did express concerns about the potential risks and challenges associated with how it may be achieved. Specifically, according to the former Acting Convening Authority, open and transparent commissions' proceedings are "very important," adding that public access must be weighed against the need to protect the proceedings' large amounts of classified information. Similarly, the current Chief Prosecutor for Military Commissions stated that public access to commissions' proceedings is "hugely important" and that they are "owned by the American People," but also noted the importance of protecting classified information, especially the sources and methods of the intelligence community.
Further, the current head of the Military Commissions Defense Organization, while acknowledging the necessity of processes to protect classified information, stated that "nothing is more important" than public access to the proceedings, calling them "the most important cases of our lifetime." While these officials generally acknowledge that there are tradeoffs to be made, for example, in facilitating public access while protecting classified information, they have not identified how this could be accomplished or assessed the extent of the tradeoffs associated with any potential options for expanding public access to proceedings. As discussed previously, there are a number of potential options for expanding public access that are well supported by the victims and family members we surveyed and by non-government stakeholders. However, DOD officials have cited various tradeoffs, in the form of concerns over resources and national security, among others. While DOD officials' concerns may be warranted, until the department fully assesses these tradeoffs by identifying and analyzing the potential risks and challenges, it may be missing an opportunity to expand public access. For example, DOD officials have expressed concern with the potential cost and logistical challenge of expanding the viewing gallery at NSGB. However, DOD officials have not assessed such options for increasing public access to proceedings at NSGB while weighing the risks of doing so—such as cost or potentially delaying hearings—and of not doing so—such as the current situation, in which hundreds of victims and family members have not been able to attend hearings. Our prior work on leading practices for effective strategic planning has also shown that agencies should define strategies that address management challenges and identify resources needed to achieve their goals.
However, according to DOD officials, the department has not developed a strategy that explains how DOD will achieve its goal of maximizing public access to the military commissions' proceedings in the context of public access challenges and the expected increase in demand for public access once the cases' trial phases begin. For example, DOD officials acknowledged that there are large populations of victims and family members who are "underserved" by the current number and locations of CCTV sites and that the sites need to be expanded. Further, the former Acting Convening Authority noted that a substantial amount of time would be required to plan for additional sites. Some DOD officials estimate that there will likely be 12 to 24 months' advance notice before trials are held and therefore believe that this will provide sufficient time to develop a strategy that addresses challenges with opening additional sites. However, based on our discussions with DOD officials, this may not be enough time, given the substantial planning and coordination that will need to take place within and outside the department on such efforts and the lengthy lead time typically needed to secure additional resources through DOD's budget process. For example, DOD officials told us that they no longer have many facilities in urban communities, which necessitates that they have partners in these areas to facilitate additional CCTV sites. DOD officials said that they have tried working with government officials in New York City—a city with a high concentration of victims and family members—to identify ways to expand options for remote viewing of proceedings. However, DOD officials said that the coordination has been challenging, given management challenges—such as finding adequate space that is accessible for victims, family members, and the media—and required resources—such as reimbursing the City of New York for security costs.
In addition, while other agencies' facilities could potentially be used, DOD officials noted that they have not begun coordinating with other agencies because the trial dates are currently unknown. But, given the logistical constraints and budget challenges, if DOD waits until the announcement of a trial date, the department runs the risk of not having adequate time to plan and budget for a new CCTV site in New York City or any other appropriate location. This example illustrates the complexities of addressing public access, the usefulness of assessing the tradeoffs between DOD's current approach to public access and options for expanding access, and the value of a strategy that addresses management challenges and identifies needed resources. Until DOD comprehensively identifies and analyzes the risks of maintaining its current approach compared with those posed by potential options for expanding public access, it cannot be assured that it has met its objective of maximizing public access to the extent possible. Furthermore, until DOD develops a strategy, as appropriate, that addresses potential options and describes how the department plans to achieve its public access goals, it cannot ensure that it is well-positioned for the substantial increase in demand for public access that is anticipated when the commissions' proceedings move into the trial phase.

Conclusions

With the responsibility to carry out military commissions' proceedings for cases that many believe to be the most consequential in United States history, DOD also has—according to its guidance—the responsibility to provide the public with as much access as possible, consistent with national security interests. Although this is a complex set of responsibilities, DOD has facilitated public access to commissions' proceedings in a variety of ways.
These complexities and constraints notwithstanding, there are a number of challenges posed to the public's ability to access commissions' proceedings and obtain information about the proceedings. While there are potential options to address these challenges, there are also potential risks that need to be assessed. Whether or not DOD should expand public access—as outlined by these potential options—is a determination the department must make. Given that the public's demand for access will most likely increase substantially when the commissions enter their trial phases, the longer DOD waits to determine its strategy, the greater the risk of not fully meeting the demand from victims and family members, non-government stakeholders, and the general public.

Recommendation for Executive Action

The Secretary of Defense should ensure that the Deputy Secretary of Defense assesses the tradeoffs of potential options for expanding public access to military commissions' proceedings by identifying and analyzing associated risks, and, as appropriate, developing a strategy to implement any viable options.

Agency Comments

We provided a draft of this report to DOD, the Department of Justice, and relevant intelligence agencies for review and comment. In written comments provided by DOD (reproduced in appendix IV), DOD concurred with our recommendation, noting planned actions to address it. DOD and certain intelligence agencies also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, the Office of Military Commissions, the Department of Justice, and four relevant intelligence agencies. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or FarrellB@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

This report describes (1) how the Department of Defense (DOD) currently facilitates public access to military commissions' proceedings; (2) the challenges, if any, that the public faces in gaining access to or obtaining information on these proceedings; and (3) what is known about potential options to address public access challenges, including any related tradeoffs. Specifically, the military commissions' cases included in our review are 9/11: Khalid Shaikh Mohammad et al (2), USS Cole: Abd al-Rahim Hussein Muhammed Abdu Al-Nashiri, Majid Shoukat Khan, Abd al Hadi al-Iraqi, and Ahmed Mohammed Ahmed Haza al Darbi. To address our first objective, we reviewed relevant guidance, policies, and regulations related to public access to military commissions' proceedings. We attended military commissions' proceedings at U.S. Naval Station Guantanamo Bay (NSGB) from April 30 to May 4, 2018, to observe how the public accessed and viewed the proceedings in person. During this visit we also toured the facilities relevant to public access. For example, in the Expeditionary Legal Complex, where proceedings are held, we inspected the courtroom where hearings occur, discussing the equipment that facilitates the 40-second delay used to ensure that classified information is not transmitted to Closed Circuit Television (CCTV) sites during open hearings. We also inspected the gallery, from which the public watches hearings. In addition, we visited facilities where certain of the accused are detained, discussing with DOD officials the access granted by the department to visiting victims and family members and non-government stakeholders.
We also discussed key issues with DOD officials, such as the Chief Prosecutor, and with officials of the Military Commissions Defense Organization. To observe how the public utilized remote viewing sites, we viewed military commissions' proceedings remotely at one CCTV site and visited another. These include Fort Meade, Maryland, which is a site for victims and their family members, as well as the site for use by the media, non-governmental organizations, and members of the general public. In addition, we visited the Norfolk Naval Station, Virginia, CCTV site, which is open to victims and family members only. In addition to watching the hearings, we spoke with Office of Military Commissions (OMC) representatives at the sites regarding their responsibilities, and they provided us with an overview of how the sites operate. In addition, to determine what information was available on OMC's public website and how it is organized, we reviewed its content, including the portion of the site reserved for victims and their family members. Further, to obtain information on how public access is provided in federal criminal courts, we conducted interviews with officials from the Department of Justice and the Administrative Office of the U.S. Courts, also discussing with these organizations whether they provided support to DOD's public access procedures for the commissions' proceedings. To address our second objective, we reviewed applicable sections of the U.S. Constitution, relevant case law, executive orders, DOD guidance, and reports from experts on public access to military commissions' proceedings to understand the role that current laws, policies, and judicial precedent play in decisions about public access to military commissions' proceedings. We then took selected examples of public access issues at military commission proceedings and compared them to the access afforded to the public at terrorism trials held in U.S. federal courts.
To identify and understand any challenges facing public access, we obtained the perspectives of both victims and their family members and other non-government stakeholders on any challenges associated with public access to commissions' proceedings. We developed a non-generalizable survey to obtain perspectives on public access from a sample population of victims and their family members associated with terrorist attacks being adjudicated by military commissions' proceedings, such as the attack on the USS Cole and the attacks of September 11, 2001. See appendix II for further details regarding our survey of victims and family members. We also developed a standardized set of 10 questions that was used to obtain the perspectives of 55 selected non-government stakeholders on challenges to public access to military commissions' proceedings. The questions were delivered to these stakeholders in the form of a self-administered questionnaire. To identify the non-government stakeholders included in our review, we first obtained a list of the non-governmental organizations that DOD has approved to observe military commissions' proceedings in person at NSGB. These organizations include victim advocacy groups, universities, civic organizations, and independent professional associations. During the course of our review, we identified additional individuals with relevant expertise, such as legal and national security policy experts and members of the media, whom we also asked to complete our self-administered questionnaire. We pre-tested the self-administered questionnaire with four non-government stakeholders to ensure functionality and ease of understanding, after which we distributed the questionnaires via email to the remaining non-government stakeholders included in our review. Of the 55 non-government stakeholders who received our questionnaire, 25 completed it. The analysis was conducted by two analysts who reviewed and coded responses according to a pre-determined coding scheme.
A third analyst reconciled any conflicting conclusions from the first two analysts. The results of our analysis were used to describe non-government stakeholders' perspectives in the report, as appropriate. We supplemented data obtained through our survey and self-administered questionnaire with interviews of victims and their family members, DOD officials, and observers from non-governmental organizations to better understand their perspectives. To assess the timeliness of information posted on OMC's website, we gathered and analyzed data from an inter-agency review team that reviews documents to be posted on OMC's website, as well as from the website itself. In regard to data from the inter-agency review team, we obtained and analyzed data on when court documents were filed with OMC and the date on which the inter-agency review team returned them to OMC for posting, and compared that amount of time to a timeliness standard laid out in DOD's Regulation for Trial by Military Commission (Regulation). According to the Regulation, DOD is supposed to post documents to the OMC website generally no later than 15 business days after documents have been filed with OMC's Trial Judiciary, known as the "file date." In regard to our analysis of data from OMC's website, we collected this information using a "web-scraping tool" that we developed to regularly visit OMC's website and capture data about a court document's file date and the date on which it was posted on OMC's website. We selected these two dates because they allowed us to compare the time DOD took to post court documents to the department's timeliness standard. Using our analysis of data from the review team and OMC's website, we determined the extent to which DOD posted court documents in a timely manner. Please refer to appendix III for additional details on the scope and methodology for our collection of data using the web-scraping tool and our analysis of these data.
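The timeliness comparison described above can be sketched in a few lines of code. This is a minimal illustration, not GAO's actual analysis tool: the function names and example dates are ours, and, for simplicity, the sketch skips only weekends (not federal holidays) when counting business days against the Regulation's general 15-business-day standard.

```python
from datetime import date, timedelta

# General timeliness standard from DOD's Regulation for Trial by
# Military Commission: post no later than 15 business days after filing.
TIMELINESS_STANDARD = 15

def business_days_between(filed: date, posted: date) -> int:
    """Count business days (Mon-Fri) from the file date to the posting
    date. Federal holidays are not excluded in this simplified sketch."""
    days = 0
    current = filed
    while current < posted:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

def is_timely(filed: date, posted: date) -> bool:
    """Check one document's posting against the 15-business-day standard."""
    return business_days_between(filed, posted) <= TIMELINESS_STANDARD

# Hypothetical example dates, not actual OMC filings:
print(business_days_between(date(2018, 7, 2), date(2018, 7, 9)))  # 5
print(is_timely(date(2018, 7, 2), date(2018, 7, 9)))              # True
```

Applied to each pair of file and posting dates captured by a scraping tool, a check of this kind yields the share of documents that met the standard.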
For data provided by DOD, we performed a number of assessments. For example, in discussions with the Defense Intelligence Agency about the timeframes and completeness of available data, the agency clarified the timeframes covered and explained why the data are not fully complete. As a result of these assessments, we determined that data from DOD on the timeliness of information posted to the commissions' website are sufficiently reliable to serve as one of several sources of information used to determine that DOD faces challenges in the timeliness with which it posts court documents to the commissions' website. In addition, through discussions with OMC officials about the way information is added to the commissions' website, we determined that the data we independently collected and analyzed from the website are sufficiently reliable to serve as another source of information used in our determination of challenges that DOD faces. To address our third objective, we reviewed relevant reports to identify potential options for expanding public access to commissions' proceedings and any concerns associated with doing so. To determine potential options for expanding public access to the commissions' proceedings, we obtained the perspectives of victims and their family members, other non-government stakeholders, and DOD officials on (1) what potential options for expansion or improvement exist, and (2) any associated concerns with potential options for expansion or improvement. We conducted a survey of victims and their family members to determine the extent to which respondents support various options for expanding public access and their views on the timeliness of court document postings to OMC's website.
Similarly, we provided standardized question sets to non-government stakeholders and analyzed responses from the completed questionnaires to determine the extent to which respondents support various options for expanding public access, as well as their views on other issues, such as the timeliness with which court documents are posted to OMC's website. Further, to examine the potential risks associated with these options for expansion—and ways to mitigate those risks—we discussed these potential options with DOD officials. Finally, we asked OMC officials to identify any DOD-led efforts to assess the current level of public access to commissions' proceedings. We then compared any related efforts with Standards for Internal Control in the Federal Government, which state that agencies should identify and analyze risks related to achieving their defined objectives, and with leading practices for sound strategic management planning. Further, we compared any related DOD efforts to leading practices of effective federal strategic planning, which we derived in part from the Government Performance and Results Act (GPRA), as updated by the GPRA Modernization Act of 2010, associated guidance, and our prior work. To assess the extent to which DOD has applied selected principles of effective federal strategic planning in its facilitation of public access to military commissions' proceedings, we compared actions DOD has taken to address challenges that it faced with meeting its goal of maximizing public access, consistent with the interests of national security, to these leading practices of effective federal strategic planning. We conducted this performance audit from January 2018 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Scope and Methodology for Survey of Victims and Their Family Members

Overview

To obtain information about the perspectives of victims and their family members on public access to military commissions' proceedings, we administered a survey to the memberships of three victims' organizations. In the survey questionnaire, we asked victims and their family members to provide their perspectives on the different ways they access information about, or participate in viewing, military commissions' proceedings. We administered the survey from July to September 2018. A reproduction of the questions and answers in the questionnaire and aggregate responses from the survey are included in this appendix. We informed our methodological approach and survey development through interviews and other communications with representatives from eight victims' organizations. From these interviews we gathered information from the organizations about their membership, such as the number of members, criteria for becoming a member, and how information about the members was recorded and stored. We also ascertained their willingness to share contact information for their membership with us for the sole purpose of administering the survey.

Survey Population and Selection of Victims and Their Family Members

We defined and identified the survey's target population of victims and family members through interviews with victims' organizations whose memberships were impacted by the attack on the USS Cole, the events of 9/11, or other terrorist attacks for which there are military commissions cases being tried or that have been completed.
Our survey population was composed of the memberships of the Department of Defense's (DOD) Victim and Witness Assistance Program (VWAP) (1,928 eligible members), which includes victims who were impacted by the attack on the USS Cole, the events of 9/11, or other terrorist attacks for which Hadi Al-Iraqi is accused, as well as the Massachusetts 9/11 Fund, Inc. (470 eligible members) and 9/11 Families for Peaceful Tomorrows (200 eligible members). Membership in these organizations, and inclusion in our survey population, was limited to those family members or surviving victims who chose to join one or more of these organizations. In addition, we added 42 other qualifying victims and family members (who may not have been members of the three organizations) whom we identified through answers to a survey question. Our survey population totaled 2,640 victims and family members, and we attempted to contact each one. Our survey population was limited to the memberships of these organizations because of concerns from some other victims' organizations about the applicability of their data. However, many more people were significantly impacted by the events of 9/11 than are represented in our survey population. For example, according to the World Trade Center Health Program, there are 88,484 individuals who have received medical treatment for 9/11-related injuries or illnesses. Thus, the survey results presented in the body of this report represent the views of only those responding and are not generalizable to any broader population, because it is difficult to determine with certainty the total population that was impacted by the events of 9/11 and would therefore have an interest in access to military commissions' proceedings.
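The population total reported above follows directly from the membership counts stated in this appendix; a quick arithmetic check (the variable names below are ours):

```python
# Eligible members, by source (counts as reported in this appendix)
vwap = 1928                # DOD Victim and Witness Assistance Program
mass_911_fund = 470        # Massachusetts 9/11 Fund, Inc.
peaceful_tomorrows = 200   # 9/11 Families for Peaceful Tomorrows
other_identified = 42      # additional qualifying individuals identified

total = vwap + mass_911_fund + peaceful_tomorrows + other_identified
print(total)  # 2640, matching the reported survey population of 2,640
```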
Survey Development and Administration We informed the development of our methodological approach and the actual questionnaire through four meetings with eight victims and their family members during our visit to Naval Station Guantanamo Bay (NSGB). In these meetings, we piloted an interviewer-administered questionnaire that included items that (1) gathered their views on various topics related to the military commissions’ proceedings, and (2) solicited input on the best approaches for gathering the views of victims and family members. These meetings confirmed that a survey would be a valuable method for gathering the views of a broad range of victims and family members and informed the development of a draft instrument for further pretesting. In developing, administering, and analyzing this survey, we took steps to minimize the five types of potential errors that the practical difficulties of conducting any survey may introduce. Because we surveyed all members of the population we identified, there was no statistical uncertainty in our estimates due to sampling error. A different issue, measurement error, can result from differences in how a particular question is interpreted and the sources of information available to respondents. We conducted four pretests of the draft questionnaire with four victim family members and made revisions to (1) ensure that survey questions were clear, (2) obtain any suggestions for clarification, (3) determine whether victims and their family members would be able to provide responses to questions with minimal burden, and (4) ensure that the survey was comprehensive and unbiased. We also provided GAO contact information in our communications for respondents who had questions about the survey or experienced technical problems.
To minimize the effects of coverage error—the exclusion of some eligible members of the population, duplicate responses, or inclusion of ineligible members—we consulted the three victims’ organizations to determine the coverage of their membership lists and what survey methodology options existed for contacting their members, based on the organizations’ willingness to provide us with contact information. All three of the organizations preferred to retain their member contact information, citing privacy concerns, but agreed to send their memberships unique usernames and passwords provided by GAO via email that their members could use to access the survey. Additionally, DOD VWAP agreed to send postal mail questionnaires provided by GAO to approximately 500 of its members who did not have email addresses on record. GAO provided the introductory emails and letters, as well as the postal questionnaires. Survey respondents received the email and used their associated username and password to access the survey website and, before opening their questionnaire, were required to change their password to further prevent unauthorized access to their responses. Those respondents who received postal mail questionnaires were given the option to complete the paper questionnaire or to log into and complete the web-based version. Because we did not obtain contact information from the organizations we worked with, we were unable to determine whether more than one survey was sent to any respondent. For example, if a respondent was a member of both 9/11 Families for Peaceful Tomorrows and DOD VWAP, that respondent might have received two sets of unique usernames and passwords. However, we included statements in the introductory email that directed respondents to disregard the email if they had already received a copy of the survey. Non-response error can result when a survey fails to capture information from all population members selected into the survey.
To encourage survey response, when emails were undeliverable, the respective organizations contacted those members via telephone and attempted to obtain new email addresses. We were also able to send reminder emails to respondents who were members of the two private victims’ organizations. However, DOD VWAP preferred not to send reminder emails to its members because of concerns about being overly intrusive. In an effort to increase the number of respondents to the survey, we included a question asking respondents whether they wanted to provide contact information for any other victims and family members who might be eligible to respond to the survey, and we administered the survey to them as well. We received 248 responses to the 2,640 questionnaires that were sent out, which, after removing two ineligible population members confirmed to have died, resulted in a response rate of 9.4 percent. We anticipated a fairly low response rate because, in our discussions with the leadership of each of the victims’ organizations, they had pointed out that this population was quite private. In addition, the issues were sensitive, and not all organization members may wish to engage in discussions or surveys regarding activities related to the terrorist events. There were 70 responses by mail, and the remaining 178 responses were to the web-based survey. There were also 11 partial but usable responses and 22 partial responses that were not usable. Finally, to limit the possibility of processing error, survey responses were checked for invalid or illogical answer patterns, and edits were made as necessary. All analysis programming was verified by a separate data analyst. Survey Questions and Results Reproduced below are the questionnaire text and question and answer wording presented to victims and family members in our survey. The percentage of responses for each answer to a question is displayed.
Not all 248 respondents to the survey answered each question—some questions were only asked of a subset of respondents giving a qualifying answer to an earlier question, and not all qualifying respondents may have answered a particular question. Percentages may not sum to 100 percent due to rounding. Narrative answers to open-ended text questions are not displayed for brevity and to limit the possibility of identification of individual respondents. This survey is being done by the Government Accountability Office, or GAO. GAO is sometimes called the Congressional Watchdog because it reviews federal programs for the United States Congress. Congress directed us to consider if it’s possible, and a good idea, to expand the public’s access to Military Commission proceedings (usually referred to as hearings) that are open to the public. As part of this effort, Congress also asked us to speak with those affected by terrorism and their families. We are very appreciative of your willingness to respond to this survey. We will combine your answers with those of many others, and we will not publish any information that could identify you. We will not share any identifiable information from this survey unless required by law or a member of Congress. If you have any questions about this survey, or the GAO study, please contact ________, an analyst on this study, at proceedings@gao.gov ________. 1. To better understand your perspective on the events of 9/11 or the attack on the USS Cole, which one of the following best describes you? Family member of a victim (parent, sibling, daughter, son) Family member of a victim (aunt, uncle, niece, nephew, grandparent) Data Analysis We used these data collected from the commissions’ website in three analyses, as discussed below. Analysis, as of June 19, 2018: According to our research, the first document recorded as being filed with the Trial Judiciary, and included in our scope, on the current OMC website has a file date in April of 2011. 
On June 19, 2018, we began data collection using the web-scraping tool, as described above. While the website provides a file date for all documents, it does not provide the date when documents are uploaded. Thus, for documents uploaded before June 19, 2018, we were not able to assess the Department of Defense’s (DOD) timeliness performance with data from the web-scraping tool. However, our analysis as of June 19, 2018, allowed us to assess other aspects of performance. Specifically, as of June 19, 2018, we determined the following: the number of documents that had been filed with the Trial Judiciary, the number that had been uploaded, and the number that had yet to be uploaded; the number of documents that had not been uploaded within 15 business days of the file date (we refer to these documents as having missed DOD’s 15-business-day timeliness standard); for documents that missed the 15-business-day standard, the median number of days by which they missed it; and DOD’s performance on these measures for five different types of court documents: motions, rulings, transcripts from open hearings, transcripts from closed hearings, and docket-related documents. Recent performance analysis, June 19 to November 19, 2018: While the website does not provide a date when documents are uploaded, our web-scraping tool provided this information for each document uploaded on or after June 19, 2018. Thus, for the five months we used the tool, we were able to assess DOD’s timeliness performance for each document filed with the Trial Judiciary or uploaded. For these documents, we determined the number and percentage of documents that were uploaded after DOD’s 15-business-day timeliness standard and, for documents uploaded after the standard, the median number of days by which the standard was missed.
We also determined DOD’s performance on these measures for five different types of court documents: motions, rulings, transcripts from open hearings, transcripts from closed hearings, and docket-related documents. Docket availability analysis, June 19 to November 19, 2018: According to DOD guidance and an OMC official, there is a set of documents that list the legal motions on which the military judge plans to hear arguments from the prosecution and defense during a specific hearing. We refer to these documents as docket-related documents. This set of documents includes dockets and amended dockets, among others, that are a sub-category of all the court documents we discuss in this report. We reviewed the commissions’ public website to identify hearings that occurred during the five months in which we used the web-scraping tool, cross-referencing those hearings with the posted court documents to identify the related docket-related documents. Because docket-related documents for a specific hearing share an alphanumeric designation, we were able to use this information to determine DOD’s timeliness performance in posting docket-related documents for these five hearings. For these documents, we determined, for each hearing that occurred from June 19, 2018, whether the relevant docket-related documents were posted at least one day before the hearing whose motions they list.
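The timeliness checks described in this appendix can be sketched in a few lines of Python. The sketch below is illustrative only: the document records and field names are hypothetical, not data from the commissions’ website, and it assumes each document’s file date and upload date are known (as they were for documents uploaded on or after June 19, 2018).

```python
import numpy as np

# Illustrative check of DOD's 15-business-day posting standard.
# The records below are hypothetical, not actual commission documents.
STANDARD_BUSINESS_DAYS = 15

documents = [
    {"type": "motion",     "filed": "2018-06-20", "uploaded": "2018-07-02"},
    {"type": "ruling",     "filed": "2018-06-25", "uploaded": "2018-08-10"},
    {"type": "transcript", "filed": "2018-07-02", "uploaded": "2018-09-01"},
]

late_by = []
for doc in documents:
    # Business days elapsed from the file date to the upload date.
    elapsed = np.busday_count(doc["filed"], doc["uploaded"])
    if elapsed > STANDARD_BUSINESS_DAYS:
        late_by.append(elapsed - STANDARD_BUSINESS_DAYS)

pct_late = len(late_by) / len(documents)          # share of documents missing the standard
median_days_late = float(np.median(late_by)) if late_by else 0.0
print(round(pct_late, 2), median_days_late)       # 0.67 24.5
```

NumPy’s busday_count excludes weekends by default; a fuller implementation would also supply a holiday calendar, since DOD’s standard is expressed in business days.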
Appendix IV: Comments from the Department of Defense Office of Military Commissions Appendix V: GAO Contact and Staff Acknowledgments Contact Acknowledgments In addition to the contact named above, Kimberly Mayo, Assistant Director; Tracy Barnes; Kathryn Bassion; Steven Campbell; Signe Janoska-Bedi; Jill Lacey; Ronald La Due Lake; Amie Lesser; Ying Long; Ned Malone; Samuel Moore; Christina Murphy; Samuel Portnow; Carl Ramirez; Clarice Ransom; Paul Seely; Chris Turner; and John Van Schaik made key contributions to this report.
Why GAO Did This Study DOD is in the pre-trial phase of the military commissions' proceedings it is conducting to try the alleged perpetrators of the terrorist attack on the USS Cole and the attacks of September 11, 2001. The Military Commissions Act of 2009 specifies that proceedings shall be publicly held unless the judge makes findings that justify a closed session, such as national security concerns. The National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to study the feasibility and advisability of expanding access to commissions' proceedings that are open to the public. This report describes (1) how DOD currently facilitates public access to proceedings; (2) challenges the public faces in gaining access to or obtaining information on proceedings; and (3) what is known about potential options to address public access challenges, including any related tradeoffs. GAO analyzed relevant laws and guidance; conducted a non-generalizable survey that received responses from 248 victims of terrorist attacks and their family members; collected data from DOD's website to analyze the timeliness of court document postings; and interviewed relevant DOD officials and other government and non-government stakeholders. What GAO Found The Department of Defense (DOD) currently facilitates public access to and information about military commissions' proceedings at Naval Station Guantanamo Bay (NSGB) in Cuba by: communicating directly with victims and their family members about hearings; enabling selected members of the public to view proceedings in person; providing five sites in the United States to view proceedings remotely via closed circuit television (CCTV); and making information such as court documents available on the Office of Military Commissions' website. The public faces various challenges in gaining access to military commissions' proceedings or obtaining information about them.
First, some aspects of the proceedings limit public access, but addressing them is largely outside of DOD's control. For example, proceedings, by law, are held at NSGB—a location that is largely inaccessible to the general public. Further, cases currently before the military commissions have spent 4 to 10 years in pre-trial hearings with trials yet to be scheduled, which some suggest has lessened media coverage and public visibility. Second, there are other challenges that DOD officials have acknowledged they have a greater ability to address. For example, the courtroom gallery is limited to 52 seats for those permitted to travel to NSGB. Additionally, all five CCTV sites are located within a span of 600 miles on the East Coast of the United States. However, victims and their family members—the primary intended users of these sites—often live a significant distance from these locations. A number of options may potentially address some of the public access challenges identified. DOD could potentially expand the viewing gallery to accommodate more people as part of an ongoing project to renovate the NSGB courtroom. However, DOD officials cautioned that doing so would require a commensurate increase in the lodging needed to house more visitors, which may not be supported by current levels of resources. Further, DOD has two potential options for addressing challenges with the remote viewing of proceedings. First, DOD could potentially increase the number and geographic dispersion of CCTV sites. Second, DOD could potentially maximize public access by broadcasting proceedings via television or the internet. DOD officials acknowledged that both options are possible and likely would require a relatively small outlay of resources. However, broadcasting proceedings via television or the internet is currently prohibited by DOD's regulation, and DOD officials were especially concerned with the security implications of this option.
DOD has not assessed the tradeoffs of the options for expanding public access to military commissions' proceedings, nor has it identified or analyzed their risks. Consequently, DOD has not developed a strategy to address challenges or identified the resources needed to achieve its public access goals. Until DOD does so, it cannot be sure that it is meeting its goal of maximizing public access and may not be prepared for the potential increase in demand for public access that is anticipated when proceedings move into the trial phase. What GAO Recommends GAO recommends that DOD identify and analyze the risks associated with potential options for expanding public access to proceedings, and develop a strategy, as appropriate, for how it will meet its public access goals given the expected increase in public interest. DOD concurred with the recommendation.
Background Maintenance for the nuclear elements of the fleet (i.e., aircraft carriers and submarines) is generally performed at the four public Naval shipyards, while maintenance for the conventional elements of the fleet (e.g., cruisers, destroyers, amphibious assault ships, and Military Sealift Command ships) is generally performed at private shipyards and ship repair companies throughout the United States, as shown in figure 1. A number of organizations and commands within the Navy share responsibilities for setting maintenance policies and planning, scheduling, and executing ship maintenance, from the offices of the Secretary of the Navy and Chief of Naval Operations, to fleet commanders and ships’ crews. Naval Sea Systems Command is the primary Navy ship maintenance organization. It is charged with, among other things, maintaining ships to meet fleet requirements within defined cost and schedule parameters; managing critical modernization, maintenance, and inactivation programs; life-cycle management of maintenance requirements; and management and oversight of the public naval shipyards. Its offices also perform contract administration, program management, and planning for future maintenance periods informed by the historical maintenance needs of Navy ships. Persistent and Substantial Maintenance Delays for Ships and Submarines Reduce Time for Training and Operations and Result in Additional Costs Our work has found that the Navy has been generally unable to complete ship and submarine maintenance on time, resulting in reduced time for training and operations and additional costs in a resource-constrained environment. The Navy’s readiness recovery is premised on the adherence to set deployment, training, and maintenance schedules. However, we reported in May 2016 on the difficulty that both the public and private shipyards were having in completing maintenance on time. 
We reported that, from 2011 through 2014, about 72 percent of scheduled maintenance for surface combatants, and 89 percent of scheduled maintenance for aircraft carriers, was completed late. We updated these data as of November 2019 to include ongoing and completed maintenance periods through the end of fiscal year 2019, and found that the Navy continues to struggle to complete maintenance on time, as we discuss below. The Navy was unable to complete scheduled ship maintenance on time about 75 percent of the time during fiscal years 2014 through 2019, which equates to about 33,700 days of maintenance delays (see figure 2). Furthermore, these delays have been growing longer and more frequent. In fiscal year 2014, about 20 percent of the Navy’s maintenance periods were more than 90 days late. However, in fiscal year 2019, more than 57 percent of its maintenance periods were similarly late (see figure 3). When maintenance is not completed on time, there are two primary effects. First, fewer ships are available to conduct training or operations, which can hinder readiness. For example, in fiscal year 2019, maintenance delays resulted in the Navy losing the equivalent of 19 surface ships. Second, maintenance delays are costly. In November 2018, we examined attack submarine maintenance delays and reported that the Navy incurred significant operating and support costs to crew and maintain attack submarines that are delayed during maintenance periods. We estimated that from 2008 to 2018, the Navy spent $1.5 billion to support attack submarines that provided no operational capability—attack submarines sitting idle no longer certified to conduct normal operations—while waiting to enter the shipyards and those delayed in completing their maintenance at the shipyards. We recommended that the Navy analyze how it allocates its maintenance workload across public and private shipyards. 
DOD concurred with our recommendation, and in December 2018, the Navy analyzed its workload allocation and moved two additional attack submarine maintenance availabilities to the private shipyards, with the possibility of moving additional availabilities to the private sector over the next 5 years. Navy Maintenance Challenges Stem from Multiple Interrelated Factors The Navy’s ability to successfully maintain its ships—completing all required maintenance on-time and within estimated cost—is affected by numerous factors that occur throughout a ship’s lifecycle (see figure 4). Some of these factors involve decisions made during the acquisition phase, years before a ship arrives at a shipyard for maintenance, while others manifest during operational use of the ship or during the maintenance process, as illustrated in figure 4. These decisions can be interrelated; for example, decisions to increase deployment lengths to meet the Navy’s operational demands can result in declining ship conditions and material readiness. The declining condition of the ships can increase the time that ships spend undergoing maintenance at the shipyards. Increased maintenance time at shipyards can lead to decisions to make further operational schedule changes to extend deployment lengths for other ships to compensate for ships experiencing maintenance delays. Acquisition Decisions Affect Maintenance Timeliness While our statement today focuses on factors occurring during operations and the maintenance process, we have previously reported that long-term sustainment costs can be affected by decisions made early in the acquisition process. The decisions made during the acquisition phase of a weapon system can affect maintenance strategies used throughout the lifecycle, as 80 percent of a program’s operating and support costs are fixed at the time a program’s requirements are set and the ship is designed. 
For example, the littoral combat ship (LCS) program initially planned to operate the ship with 40 sailors, using contractors to complete all of the onboard maintenance tasks. After challenges with the first LCS deployments, the Navy began revising the ship's maintenance strategy, including adding more sailors onboard the ship. In addition, decisions to acquire or not acquire rights to technical data can have far-reaching implications for DOD's ability to sustain the systems and competitively procure parts and services. Furthermore, the Navy has shown a willingness to provide ships to the fleet that still have a number of unresolved construction and quality deficiencies, which add to its maintenance burden. For example, the Navy delivered the USS Somerset amphibious transport dock to the fleet with 52 significant defects, including an electronic system crucial to the ship's mission effectiveness that the fleet had to replace shortly after it received the ship. We have ongoing work on the effect that acquisition decisions can have on maintenance that we expect to issue in early 2020. Operational Decisions Affect Maintenance Timeliness Some causes of delays are created or exacerbated during an operational deployment. Our work has shown that, to meet heavy operational demands over the past decade with a smaller fleet, the Navy has increased ship deployment lengths and has reduced or deferred ship maintenance. Decisions to reduce crew sizes between 2003 and 2012 also left crews overburdened and contributed to deferred maintenance. These decisions have resulted in declining ship conditions across the fleet and have increased the amount of time that ships require to complete maintenance in the shipyards. Increased maintenance periods, in turn, have compressed the time during which ships are available for training and operations. Specifically, the Navy: Decreased crew levels.
We reported in 2017 that the Navy's effort to reduce crew sizes from 2003 through 2012 corresponded with increases in maintenance costs that outweighed the savings achieved through reduced personnel costs. Navy officials told us that shifts in maintenance workload from the organizational and intermediate levels to depot-level maintenance increased overall maintenance costs. This change occurred in part because reduced crew sizes resulted in minor maintenance being deferred, which developed into more costly issues that had to be addressed later at the depot level. Extended deployments. We have previously reported that Navy decisions to extend deployments can lead to maintenance challenges, as these decisions have resulted in declining ship conditions across the fleet and have increased the amount of time that ships require to complete maintenance in the shipyards. Deferred maintenance. We reported in 2015, 2016, and 2017 that maintenance deferred while a ship is deployed can develop into more costly issues that must be addressed later, often during depot-level maintenance. Deferred maintenance can lead to new work at the shipyards, as the degraded ship conditions result in the need for additional maintenance. For example, maintenance officials told us that the focus for ships homeported overseas is on mission readiness, so overseas-homeported ships place priority on the maintenance of combat systems. This means that systems with the potential to reduce ship service life—such as fuel and ballast tanks that require extended in-port periods to properly maintain—can be subject to maintenance deferrals in order to allow the ship to sustain a high operational tempo. Challenges during the Maintenance Process Affect Timeliness In our prior work, we identified numerous challenges that occur during the Navy's planning and execution of a ship's maintenance period that contribute to delays. For example: Difficulties in adhering to the maintenance planning process.
We reported in 2016 that the Navy must accurately define the work for each ship's maintenance period. To do this, the Navy's maintenance planning process specifies planning milestones intended to ascertain the ship's condition, identify the work needed, and plan for its execution. Missing planning milestones, or meeting them late, can contribute to maintenance delays. However, the Navy does not always adhere to its own maintenance planning process due to high operational tempo, scheduling difficulties, or personnel shortages, among other factors, resulting in shipyards discovering the need for additional repairs after maintenance has begun and adding time to the schedule for planning, contracting, or waiting for parts. Navy shipyards have shortages of skilled personnel. The Navy has reported a variety of workforce challenges at the four public shipyards, such as hiring personnel in a timely manner and providing personnel with the training necessary to gain proficiency in critical skills. The Navy has noted that some occupations require years of training before workers become proficient. According to Navy officials, a large portion of its workforce is inexperienced. For example, we reported in December 2018 that 45 percent of Puget Sound Naval Shipyard's and 30 percent of Portsmouth Naval Shipyard's skilled workforce had fewer than 5 years of experience. According to DOD officials, workforce shortages and inexperience contribute to maintenance delays. For example, at Pearl Harbor Naval Shipyard in 2014 and 2015, two submarines were delayed approximately 20 months each, in part because of shortages of ship fitters and welders, among other skilled personnel. Most of DOD's depots, which include the naval shipyards, have taken actions to maintain critical skills through retention incentives, bonuses, and awards.
However, we found that neither the depots, their higher-level service component commands, nor the services have conducted an assessment to determine the effectiveness of these actions. The condition of facilities and equipment at Navy shipyards is generally poor. We reported in September 2017 that the poor condition of facilities and equipment at the shipyards contributed to maintenance delays for aircraft carriers and submarines, hindering the shipyards' ability to support the Navy. Specifically, we found that the average condition of shipyard facilities was poor and that shipyard equipment was generally past its expected service life. For example, four of the five dry docks at Norfolk Naval Shipyard face flooding threats from extreme high tides and storm swells and average one major flooding event per year. In 2009, a dry dock at Norfolk Naval Shipyard required emergency repairs to prevent flooding while the USS Tennessee (SSBN-734) was undergoing maintenance. According to the Navy's report on the incident, several days of high tides and winds, coupled with multiple leaks in the dry dock's granite block joints, resulted in the dry dock flooding at an estimated rate of 3,000 gallons per minute before workers could repair it. In addition, at Puget Sound Naval Shipyard—located in an area identified by the U.S. Geological Survey as a "High Seismic Hazard Zone"—a 7.0 magnitude or greater earthquake could damage or ruin the only dry dock on the west coast that is capable of performing maintenance on aircraft carriers. We have also previously reported that the Navy shipyards do not track when facility problems lead to maintenance delays. Furthermore, the average age of equipment at the shipyards is beyond its average expected service life (see table 1). Equipment that is past its expected service life can pose an increased risk of maintenance delays or higher maintenance costs, affecting the depots' ability to conduct work.
As we have previously reported, aging equipment can present a number of challenges, such as more frequent breakdowns, less effective or efficient operation, and safety hazards. The Navy shipyards lack the capacity to conduct required maintenance in the future. We also reported in 2019 that the naval shipyards cannot support 68 of the 218 maintenance periods—almost a third—that aircraft carriers and submarines will require through 2040, due to a lack of dry dock capacity. Specifically, several of the Navy's 17 dry docks will become obsolete after the Los Angeles-class submarines are retired because they will be too small or lack the appropriate shore-side support for newer classes of submarines. For example, only 14 dry docks can support the early-flight Virginia-class submarines, and only 11 dry docks can support the Virginia-class submarines outfitted with the longer Virginia Payload Module. In addition, no dry docks can currently support repairs to the Ford-class aircraft carrier, even though the Navy accepted delivery of the first ship of that class in 2017. Private shipyards have told the Navy that they could have some additional capacity to conduct maintenance, but are hesitant to invest in creating this capacity without more certainty from the Navy. The Navy Has Taken Some Steps to Address Maintenance Delays, but Corrective Actions Will Take Years to Implement The Navy Developed a Shipyard Infrastructure Optimization Plan, but It Will Require Significant Time and Resources to Implement The Navy has begun to implement a major effort—the Shipyard Infrastructure Optimization Plan—that is intended to significantly improve the condition of shipyard facilities and equipment, but it will require significant time and resources to implement. This plan is designed to address the bulk of the Navy's dry-dock capacity issues as well as identify the optimal placement of facilities and major equipment at each public shipyard.
The Navy estimates these changes can ultimately increase its maintenance efficiency by reducing the distance that workers and material will have to travel around the shipyards during the maintenance period. According to the Navy, this equates to recovering about 328,000 labor days per year—an amount roughly equal to that of an additional submarine maintenance period annually. In addition, the Navy has created a program office to oversee its shipyard improvement effort, which we believe demonstrates leadership attention and commitment to the effort. However, the Navy estimated that the replacement of the facilities will take 20 years (see figure 5). Further, the Navy estimates that it will take 30 years to bring the average age of its equipment to within industry standards. The Navy estimated in 2018 that this effort will require $21 billion over 20 years to implement. However, this $21 billion estimate does not include inflation and other significant costs, such as those for utilities, roads, or environmental remediation. Our analysis of the Navy’s preliminary estimate is that it is understated due to a lack of inflation adjustments, which could add billions to the final cost. Navy officials stated that the $21 billion estimate is an initial indicator of the scope of the effort and is not intended as a cost estimate in its budget. However, even that $21 billion estimate would require funding levels beyond what the Navy has requested for shipyard infrastructure in recent years. We recommended in November 2019 that the Navy should prepare more accurate cost estimates using best practices so that the Navy can request accurate funding from Congress and avoid common pitfalls associated with inaccurate estimates such as cost overruns, missed deadlines, and performance shortfalls. We recommended that the Navy take steps to improve its cost estimate prior to the start of its primary facility improvement effort; the Navy has concurred with this recommendation. 
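To illustrate why omitting inflation understates a multiyear estimate, consider a simple sketch. The even spending profile ($21 billion spread uniformly over 20 years) and the flat 2 percent annual inflation rate are our illustrative assumptions, not figures from the Navy's plan:

```python
# Illustrative only: the even annual spending profile and the 2 percent
# inflation rate are our assumptions, not figures from the Navy's estimate.
constant_dollar_total = 21.0   # $ billions, per the Navy's 2018 estimate
years = 20
inflation_rate = 0.02

annual_spend = constant_dollar_total / years  # $1.05B per year in constant dollars
# Inflate each year's spending to then-year (nominal) dollars and sum.
nominal_total = sum(annual_spend * (1 + inflation_rate) ** t for t in range(years))

print(round(nominal_total, 1))                          # 25.5 ($ billions, nominal)
print(round(nominal_total - constant_dollar_total, 1))  # 4.5 added by inflation alone
```

Under these assumptions alone, inflation adds roughly $4.5 billion to the nominal total, consistent with the observation that the omission could add billions to the final cost.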
Other Navy Efforts Are in Early Stages and Will Need Additional Time to Produce Results

The Navy has additional efforts underway that should help reduce maintenance delays, though the results of these efforts likely will not be seen for several years. For example:

Revising the size of ship crews. The Navy has taken steps to address some of our recommendations regarding the size of ship crews. Specifically, the Navy has begun reviewing and revising its ship crew levels—most notably adding 32 crewmembers to its DDG-51 destroyers and 23 crewmembers to its LPD-17 fleet. However, officials noted that the process to update crew levels throughout the fleet would take about 4 years to complete. The Navy will also need to demonstrate that it actually can assign crew members to these ships to meet the higher crew levels. We have ongoing work examining this issue and plan to report on our findings in winter of 2020.

Hiring additional workers at shipyards. Shipyards have increased hiring, going from about 30,600 workers in fiscal year 2014 to about 37,400 workers in fiscal year 2019. However, Navy officials have stated that it takes several years for workers to reach full productivity. In the past, officials expected that new hires would take about 5 years to become fully productive, although the Navy has testified that it hopes to reduce that time through new training techniques.

Performance to Plan. The Navy has begun an analytical effort, called “Performance to Plan,” to better understand its maintenance challenges and its capacity needs for the future. According to Navy officials and plans, this effort is intended to help the Navy improve full and timely completion of maintenance, including for aviation, surface ships, and submarines.
For example, the effort for surface ship maintenance currently involves a pilot program looking at how to better plan and execute maintenance periods for DDG 51-class destroyers, including examining how to improve the accuracy of forecasted maintenance requirements and durations and better adhere to planning milestones, among other outcomes. We are encouraged by this effort, but note that it remains in the early stages, and it is not clear whether or when the pilot effort will be extended to the entire surface fleet.

In sum, the Navy faces significant challenges in maintaining its current fleet and reaping the full benefit of the ships in its inventory today due to persistent and substantial maintenance delays. The Navy has made progress identifying the causes of its maintenance challenges and has begun efforts to address them. However, delays persist, and these challenges will require years of continued management attention and substantial investment to resolve. As part of this sustained management attention, the Navy would benefit from a continued focus on implementing our prior recommendations. Since 2015, we have made 17 recommendations to the Navy to address various concerns we identified with its maintenance process. The Navy agreed with 14 of those recommendations, partially concurred with 1, and disagreed with 2. However, as of November 2019, the Navy had fully implemented 6 of these recommendations. While the Navy has taken some action on the 11 remaining unimplemented recommendations, fully addressing them could help the Navy address its maintenance challenges and better position it to sustain the current and future fleet. Looking to the future, the Navy is seeking to grow the fleet over the next 15 years.
However, if it increases the size of the fleet before addressing its maintenance challenges, the Navy will likely face a growing number of both maintenance delays and ships that are unavailable for use. Even assuming the Navy’s efforts to improve shipyard operations succeed, it will be years before the Navy can maintain a significantly larger fleet. Chairmen Perdue and Sullivan, Ranking Members Hirono and Kaine, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have questions about this testimony, please contact Diana Maurer, Director, Defense Capabilities and Management, at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Suzanne Wren (Assistant Director), James Lackey (Analyst-in-Charge), A. S. Bagley, Chris Cronin, Amie Lesser, Felicia Lopez, Tobin McMurdie, Carol Petersen, Clarice Ransom, Matt Thompson, and Sally Williamson.

Appendix I: Implementation Status of Prior GAO Recommendations Related to Ship and Submarine Maintenance

In recent years, we have issued a number of reports related to ship and submarine maintenance. Table 1 summarizes the recommendations in these reports. The Department of Defense (DOD) concurred with most of the 17 recommendations; however, to date DOD has fully implemented 6 of them. For each of the reports, the specific recommendations and any progress made in implementing them are summarized in tables 2 through 9.

Related GAO Products

Report numbers with a C or RC suffix are classified. Report numbers with an SU suffix are sensitive but unclassified.
Classified and sensitive but unclassified reports are available to personnel with the proper clearances and need to know, upon request. Report numbers with a T suffix are testimonies.

Naval Shipyards: Key Actions Remain to Improve Infrastructure to Better Support Navy Operations. GAO-20-64. Washington, D.C.: November 25, 2019.

Military Depots: Actions Needed to Improve Poor Conditions of Facilities and Equipment that Affect Maintenance Timeliness and Efficiency. GAO-19-242. Washington, D.C.: April 29, 2019.

DOD Depot Workforce: Services Need to Assess the Effectiveness of Their Initiatives to Maintain Critical Skills. GAO-19-51. Washington, D.C.: December 14, 2018.

Navy and Marine Corps: Rebuilding Ship, Submarine, and Aviation Readiness Will Require Time and Sustained Management Attention. GAO-19-225T. Washington, D.C.: December 12, 2018.

Navy Readiness: Actions Needed to Address Costly Maintenance Delays Facing the Attack Submarine Fleet. GAO-19-229. Washington, D.C.: November 19, 2018.

Navy Readiness: Actions Needed to Address Costly Maintenance Delays Affecting the Attack Submarine Fleet. GAO-19-192C. Washington, D.C.: October 31, 2018.

Military Readiness: Update on DOD’s Progress in Developing a Readiness Rebuilding Plan. GAO-18-441RC. Washington, D.C.: August 10, 2018. (SECRET)

Navy Shipbuilding: Past Performance Provides Valuable Lessons for Future Investments. GAO-18-238SP. Washington, D.C.: June 6, 2018.

Weapon Systems Annual Assessment: Knowledge Gaps Pose Risks to Sustaining Recent Positive Trends. GAO-18-360SP. Washington, D.C.: April 25, 2018.

Columbia Class Submarine: Immature Technologies Present Risks to Achieving Cost, Schedule, and Performance Goals. GAO-18-158. Washington, D.C.: December 21, 2017.

Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet. GAO-17-809T. Washington, D.C.: September 19, 2017.

Naval Shipyards: Actions Needed to Improve Poor Conditions that Affect Operations. GAO-17-548. Washington, D.C.: September 12, 2017.

Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-798T. Washington, D.C.: September 7, 2017.

Navy Shipbuilding: Policy Changes Needed to Improve the Post-Delivery Process and Ship Quality. GAO-17-418. Washington, D.C.: July 13, 2017.

Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017.

Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017.

Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 7, 2016.

Navy and Marine Corps: Services Face Challenges to Rebuilding Readiness. GAO-16-481RC. Washington, D.C.: May 25, 2016. (SECRET//NOFORN)

Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016.

Navy Force Structure: Sustainable Plan and Comprehensive Assessment Needed to Mitigate Long-Term Risks to Ships Assigned to Overseas Homeports. GAO-15-329. Washington, D.C.: May 29, 2015.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The 2018 National Defense Strategy emphasizes that restoring and retaining readiness is critical to success in the emerging security environment. The Navy is working to rebuild its readiness while also growing and modernizing its aging fleet of ships. A critical component of rebuilding Navy readiness is implementing sustainable operational schedules, which hinge on completing maintenance on time. We have reported that the Navy faces persistent challenges with completing required maintenance on time. This statement provides information on (1) the magnitude of maintenance delays for Navy ships and submarines, (2) factors contributing to maintenance delays, and (3) the Navy's efforts to address these factors. GAO also discusses its prior recommendations on the factors contributing to Navy maintenance delays and the Navy's progress in addressing the recommendations. This statement is based on previously published work from 2015 through 2019 on Navy maintenance, ship acquisition, crew size, ship maintenance and deployment schedules, the condition of naval shipyards, and recruiting skilled maintenance personnel.

What GAO Found

The Navy continues to face persistent and substantial maintenance delays that affect the majority of its maintenance efforts and hinder its attempts to restore readiness. From fiscal year 2014 to the end of fiscal year 2019, Navy ships have spent over 33,700 more days in maintenance than expected. The Navy was unable to complete scheduled ship maintenance on time for about 75 percent of the maintenance periods conducted during fiscal years 2014 through 2019, with more than half of the delays in fiscal year 2019 exceeding 90 days. When maintenance is not completed on time, fewer ships are available for training or operations, which can hinder readiness.
GAO identified multiple factors that contribute to maintenance delays, including insufficient shipyard capacity, shortages of skilled personnel, and deferred maintenance during operational deployments, among others. Ships awaiting or delayed in maintenance also incur operating and support costs. For example, GAO estimated that the Navy spent more than $1.5 billion in support costs from fiscal years 2008 through 2018 due to delayed maintenance for attack submarines. The Navy has several efforts underway to improve its maintenance operations, but they will take years to implement and will require sustained management attention and funding above current levels. For example, the Navy estimates it will take 20 years to improve the infrastructure at its shipyards, 4 years to restore ship crew levels, and several years to improve maintenance planning. Until the Navy addresses these challenges, it will be hindered in its ability to rebuild readiness and prepare for the future, particularly as it grows the size of the fleet.

What GAO Recommends

GAO made 17 recommendations in prior work cited in this statement. The Department of Defense concurred with most of GAO's recommendations and has fully implemented 6. Continued attention is needed to ensure that the remainder of these recommendations are addressed.
Background

The size and complexity of states’ Medicaid programs have implications for program administration and oversight, including provider screening and enrollment. States have flexibility, within broad federal guidelines, in how they design, administer, and oversee their Medicaid programs. For example, states have the option to pay for care through fee-for-service (FFS) payments to participating providers, to contract with managed care organizations (MCO) to deliver services based on a fixed amount per beneficiary, or a combination of both. In fiscal year 2018, total Medicaid spending was $629 billion, about half of which was estimated to be spent for services delivered under managed care. CMS and states each have a role to play in protecting the integrity of the Medicaid program and preventing fraud, waste, and abuse. States administer their Medicaid programs, including implementing federal requirements for screening and enrolling Medicaid providers. CMS has a role overseeing states’ compliance with federal requirements. CMS’s oversight activities include measuring improper payments in the Medicaid program and conducting focused program integrity and desk reviews. Other federal and state entities also have a role in oversight of the Medicaid program. For example, state auditors—state agencies that typically conduct the annual single state audit of federal programs—may also conduct program integrity reviews and identify Medicaid improper payments. We have previously testified that state auditors are uniquely qualified to partner with CMS in its oversight of Medicaid. In our testimony, we noted that CMS could help improve program integrity by providing state auditors with a substantive and ongoing role in auditing state Medicaid programs.
Provider Screening and Enrollment Requirements

To limit payments to ineligible providers—such as those convicted of program-related fraud and abuse, or those with a medical license suspended or revoked for reasons bearing on professional competence or performance—federal regulations require states to screen and enroll all providers, whether the provider furnished, ordered, or referred services to an eligible beneficiary and whether the service was paid for under FFS or Medicaid managed care contracts. Providers subject to these requirements include individual practitioners—such as physicians, nurse practitioners, and physical therapists—as well as any physicians and other professionals who may only order or refer beneficiaries to services, but do not render services; for example, providers who only prescribe medications or order imaging services, such as an x-ray. Providers also include provider organizations—such as hospitals, group practices, and skilled nursing facilities—and providers and suppliers of medical equipment or goods. All providers must be screened (1) when they initially apply and submit an application, and (2) upon reenrollment in a state’s Medicaid program. Further, states must screen all providers at least once every 5 years to revalidate their enrollment. States may rely on the results of providers’ screenings performed by the Medicare program or another state’s Medicaid program. States may also choose to delegate screening activities to vendors that screen providers on the states’ behalf, or to MCOs. If a state chooses to delegate screening activities, it must ensure that the screenings are conducted in accordance with Medicaid program requirements. States must also collect certain information from providers to enroll them into their Medicaid programs, such as their Social Security numbers, dates of birth, and National Provider Identifiers, if applicable.
States must also collect disclosure information for owners, managing employees, and others with controlling interests in provider organizations meeting certain criteria. For example, states must collect disclosure information for those with direct or indirect ownership totaling 5 percent or more, or who are agents or managing employees of a provider organization. These owners and others with controlling interests who are subject to disclosure requirements must undergo certain required screening activities, such as federal database checks, and states must perform these screening activities to enroll the provider organization. Federal regulations require states to perform several screening activities prior to enrolling providers. The provider’s categorical risk level for fraud, waste, and abuse determines the required screening activities, which may include conducting checks in federal databases; verifying licensure; and performing site visits and fingerprint-based background checks. In addition to required activities, states may also choose to conduct other screening activities in order to identify providers ineligible to participate in Medicaid. See figure 1 for an overview of Medicaid provider screening activities and appendix II for a full list of provider screening requirements.

Risk-based screening. States must screen providers according to the provider’s categorical risk level for fraud, waste, and abuse. The regulations establish screening requirements for three risk levels—limited, moderate, and high risk—and each risk level includes a range of provider types. (See table 1.) In addition, providers’ risk levels can change. For example, limited- or moderate-risk providers may be categorized as high risk if the state Medicaid agency imposes a payment suspension based on a credible allegation of fraud, waste, or abuse.

Federal database checks.
States must confirm the identity of prospective providers, providers seeking revalidation, and individuals subject to disclosure requirements, and determine whether they have been excluded from participating in Medicaid, by checking four federal databases:

1. the Social Security Administration’s Death Master File (DMF);
2. the National Plan and Provider Enumeration System (NPPES);
3. the List of Excluded Individuals/Entities (LEIE); and
4. the General Services Administration’s System for Award Management (SAM).

In addition, states must conduct at least monthly checks in the LEIE and SAM. States may also check other federal and state databases. For example, states may check CMS’s database containing Medicare provider enrollment data—the Provider Enrollment, Chain and Ownership System (PECOS)—prior to conducting their own database checks to determine if a provider is enrolled in Medicare and was previously screened. For providers enrolled in Medicare, states may choose to rely on the results of the screening conducted for the Medicare program and enroll the provider without conducting any further screening activities. For providers not enrolled in Medicare, states must screen the provider prior to enrollment. (See table 2.)

Licensure verification. States must verify that providers have a current, valid medical license in the states in which they are licensed. Further, states must confirm that the provider’s license does not have any limitations, such as a suspension or probation.

Site visits and fingerprint-based criminal background checks. States must conduct on-site visits for moderate- and high-risk providers to verify that the information submitted is accurate and to determine providers’ compliance with federal and state enrollment requirements. Further, states must collect fingerprints from high-risk providers, and these providers must consent to a criminal background check.
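The risk-tiered screening rules described above amount to a simple lookup from a provider's risk level to the set of required activities. The sketch below is a hypothetical illustration only: the tier names and activity labels paraphrase the requirements summarized in the text, and the function and dictionary names are invented for this example, not part of any actual state or CMS system.

```python
# Hypothetical sketch of the risk-based screening rules described above.
# Activity labels paraphrase the requirements in the text; this is
# illustrative only, not an actual state or CMS screening system.

REQUIRED_ACTIVITIES = {
    # All providers: federal database checks and licensure verification.
    "limited": ["database_checks", "licensure_verification"],
    # Moderate risk adds an on-site visit.
    "moderate": ["database_checks", "licensure_verification", "site_visit"],
    # High risk adds a fingerprint-based criminal background check.
    "high": ["database_checks", "licensure_verification", "site_visit",
             "fingerprint_background_check"],
}

def activities_for(risk_level, payment_suspended_for_fraud=False):
    """Return the screening activities a state must perform.

    Per the text, a limited- or moderate-risk provider is treated as
    high risk if the state Medicaid agency has imposed a payment
    suspension based on a credible allegation of fraud, waste, or abuse.
    """
    if payment_suspended_for_fraud:
        risk_level = "high"
    return REQUIRED_ACTIVITIES[risk_level]

print(activities_for("moderate"))
print(activities_for("limited", payment_suspended_for_fraud=True))
```

Note that the lookup deliberately resolves the suspension flag before consulting the table, mirroring the report's point that a provider's risk level can change after enrollment.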
CMS Oversight of States’ Provider Enrollment

CMS developed the Payment Error Rate Measurement (PERM) to estimate the national Medicaid improper payment rate, including improper payments due to states’ noncompliance with provider screening and enrollment requirements. CMS computes the national improper payment rate as the weighted average of states’ improper payment rate estimates from the PERM, using three key components of the Medicaid program: FFS, managed care, and beneficiary eligibility determinations. Each component of the PERM is estimated differently, and only the FFS component is used to oversee states’ compliance with provider screening and enrollment requirements. When calculating the FFS component, CMS measures improper payments in a sample of FFS claims, which record services provided. Specifically, CMS reviews the sample of FFS claims and examines related state documents to identify any errors resulting from a failure to meet federal and state policies, including provider screening and enrollment requirements. For example, CMS verifies that the provider was eligible to render and bill for the services by reviewing provider information, including the provider’s name and license, and whether the provider was screened in accordance with risk-based screening requirements. Any FFS claim paid for services furnished, ordered, referred, or prescribed by a provider who was not screened in compliance with requirements, or not enrolled with the state, is considered an improper payment. The managed care component of the PERM measures any improper payments in the capitated payments that state Medicaid agencies make to MCOs on behalf of enrollees. It does not examine whether providers in managed care were appropriately screened and enrolled within a state. The eligibility component focuses solely on measuring improper payments related to state determinations of whether Medicaid enrollees meet categorical and financial criteria for Medicaid benefits.
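The national rate described above is a weighted average of state-level estimates. The arithmetic can be sketched as follows; the state figures are entirely made up, and weighting each state's rate by its Medicaid expenditures is an assumption for illustration, since the text does not specify the weights CMS uses.

```python
# Illustrative sketch of a weighted-average improper payment rate, as the
# text describes for the PERM. The rates and expenditure weights below are
# invented, and expenditure weighting is an assumption for illustration.

def national_rate(state_estimates):
    """Weighted average of state improper payment rates.

    state_estimates: list of (improper_rate, expenditure_weight) pairs.
    """
    total_weight = sum(weight for _, weight in state_estimates)
    return sum(rate * weight for rate, weight in state_estimates) / total_weight

# Three hypothetical states: 8% of $10B, 5% of $30B, 10% of $20B in spending.
estimates = [(0.08, 10e9), (0.05, 30e9), (0.10, 20e9)]
print(f"{national_rate(estimates):.4f}")  # about 7.2%, not the simple 7.7% mean
```

The point of the weighting is that a high error rate in a small program moves the national figure less than the same rate in a large one, which is why the unweighted mean of the three rates differs from the weighted result.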
CMS conducts the PERM across all states on a 17-state, 3-year rotation cycle and computes an annual rolling average of improper payment rates from the 3 years of data. At the conclusion of each PERM cycle, CMS develops reports for each state, which include any findings related to provider screening and enrollment. Following each PERM cycle, states must prepare a corrective action plan to address errors found. CMS also conducts other oversight activities to protect the integrity of the Medicaid program and assess states’ compliance with Medicaid provider screening and enrollment requirements. These activities include the following:

Focused program integrity reviews. CMS conducts these reviews to examine specific areas of Medicaid, including provider screening and enrollment and managed care. These reviews may include a full or partial review of states’ compliance with provider screening and enrollment requirements.

Desk reviews. CMS conducts these off-site reviews on specific aspects of states’ program integrity activities, such as a state’s progress toward implementing corrective action plans in response to PERM findings and payments made to providers terminated from Medicaid.

Selected States Faced Challenges Implementing Provider Screening and Enrollment Requirements; Some States Have Not Implemented Certain Requirements

Officials from all seven selected states told us they faced challenges building the capacity and establishing the administrative processes needed to implement the new and expanded provider screening and enrollment requirements under PPACA and the 21st Century Cures Act. These challenges included establishing procedures for risk-based screenings, using federal databases and collecting information from providers, and screening and enrolling an increased volume of providers. Due, in part, to these challenges, officials from five selected states told us they have not yet implemented some of the requirements.
Challenges Establishing Procedures for Risk-Based Screenings

Officials from all seven selected states described challenges building their capacity to conduct risk-based provider screenings prior to enrollment into their Medicaid programs. To incorporate database checks, site visits, fingerprint-based background checks, and other risk-based screening activities into state screening procedures, state Medicaid officials said they needed financial resources, leadership support, and time. Specifically, officials from the selected states told us they used one of the following three approaches to build capacity to implement the screening and enrollment requirements.

1. Developing new information technology systems. Officials from two of the seven selected states told us they developed new state information technology systems that automated screening and enrollment activities. For example, officials from one state told us they spent $5.9 million from 2015 through 2018 to develop a new provider screening and enrollment system that included an online provider application portal and automated screening activities, such as conducting database checks and flagging high-risk providers for site visits and fingerprint-based background checks. According to state officials, this new system helped the state implement provider screening and enrollment requirements, yielding efficiencies by allowing staff to focus on analyzing provider screening results rather than clarifying data entry errors and manually checking each database. (See fig. 2.)

2. Contracting with vendors. Officials from four other selected states told us they initiated or modified existing contracts with vendors to screen new provider applications and conduct revalidations on their behalf. For example, officials from one state told us their contract with a vendor resulted in screening and enrolling about 10,000 providers in 2018, more than five times the number the state had processed in the previous year.
Another state used a vendor to revalidate more than 9,000 providers in 2016, about 12 percent of the state’s enrolled provider population.

3. Modifying existing procedures. Officials from our seventh selected state told us that they modified their existing state information technology system and procedures to manually screen and enroll providers. However, according to state officials, this approach has put pressure on their resources. Officials said that they were working to automate some database checks as much as possible without requiring services from a contractor.

Challenges Using Federal Databases and Collecting Required Information for Screening and Enrollment

Officials from six of the seven selected states told us they experienced challenges using federal databases, and all seven of the states described challenges collecting required information for screening and enrollment. To mitigate challenges using federal databases, the state Medicaid agencies took actions including accessing data from alternate sources, manually verifying information, and collecting information from providers. (See table 3.) Recent CMS actions could also improve states’ ability to search databases. In April 2019, CMS officials told us they have partnered with the Treasury Department, which is conducting a pilot that will offer states access to its Do Not Pay Business Center services. Do Not Pay is a resource developed by the Treasury Department to detect and prevent improper payments. This resource allows federal agencies to automate screenings by searching for excluded parties using common identification numbers, such as Social Security numbers. Do Not Pay also allows users to search the DMF, LEIE, and SAM from a single portal. CMS referred seven states, including two of our selected states, to take part in Treasury’s pilot.
Officials from all seven selected states told us that they faced challenges collecting required information from providers for screening and enrollment, such as Social Security numbers or fingerprints. These states took steps—such as educating providers and developing or updating forms, procedures, and statutory provisions—to address some of the challenges associated with collecting the information necessary to screen and enroll providers. For example, officials from one state told us that some providers have been hesitant to disclose their Social Security numbers and dates of birth on applications, as well as other information that states are required to collect for enrollment. In response, the state has offered provider education on the requirements to facilitate collecting this information. State Medicaid officials also noted that their agencies worked with CMS, state legislatures, and state law enforcement agencies to implement fingerprint-based background check requirements to, for example, collect fingerprints and check them against Federal Bureau of Investigation records. Officials from two selected states told us their agencies did not have the authority under state law to collect fingerprints from providers or submit them to the Federal Bureau of Investigation prior to PPACA, and officials from one of these states told us changes to state statute were needed before they could implement this requirement.

Challenges Screening and Enrolling an Increased Volume of Providers

Officials from five of the seven selected states described challenges having sufficient capacity to screen an increased volume of providers and enroll certain provider types. Officials from one state told us that the new requirement to screen managed care providers more than doubled the number of providers the state needed to screen and enroll.
Further, officials from three states told us about challenges obtaining information needed to conduct screenings from prescribers and other professionals who only order and refer services. Previously, such providers were not required to enroll in Medicaid, and some were not responsive to the state Medicaid agency’s requests for information. Officials from the five selected states that faced these challenges told us they had taken steps to address them. Yet four of these five states continued to make payments to these types of providers even though they were not enrolled in their Medicaid programs, because the states wanted to maintain beneficiary access to the services.

Managed care providers. Officials from three of the selected states told us they faced challenges enrolling managed care providers. For example, officials from one state told us that they could not process the large number of applications they needed to screen before enrolling these providers, and attempted to delegate some required database checks for managed care providers to its MCOs. However, officials told us these MCOs do not have state-level access to all required databases; therefore, the managed care providers have not been screened as required and are not all enrolled with the state. The officials told us the state has about 80,000 managed care providers to screen and enroll as part of implementing the 21st Century Cures Act requirements. However, officials said they have chosen to wait until the state launches a new information technology system that automates screenings before screening and enrolling these providers.

Prescribers and other professionals who may only order and refer services to beneficiaries. Officials from three selected states told us they had not enrolled all prescribers and other professionals who may only order and refer services, but do not render them. These states have taken steps to address this challenge.
Officials from one state told us they took steps to screen and enroll medical residents—hospital providers who are not providing services to Medicaid beneficiaries, but may prescribe medication during a beneficiary’s hospital stay. These officials told us they did not screen and enroll medical residents prior to PPACA because of differences in licensure. Officials from all three states said they continue to pay for prescriptions written by these providers who are not enrolled in their Medicaid programs.

CMS’s Optional Consultations Are Tailored to Support States; Oversight Does Not Provide Comprehensive, Timely Information on States’ Compliance

CMS offers optional consultations that are tailored to support states’ implementation of the Medicaid provider screening and enrollment requirements. However, these consultations are optional, regardless of whether states have implemented the federal requirements. CMS also conducts several oversight activities—the PERM, focused program integrity reviews, and other activities—to oversee states’ compliance with provider screening and enrollment requirements. Collectively, these activities do not ensure CMS has comprehensive and timely information on the extent of states’ compliance with the requirements.

CMS’s Optional Consultations Are Tailored to Support States’ Implementation of Medicaid Screening and Enrollment Requirements

In 2016, CMS began offering optional consultations tailored to support states’ implementation of Medicaid screening and enrollment requirements. Optional consultations include CMS contractor site visits to states that examine the extent to which states have implemented the requirements, and a data compare service to assist states with screening providers. While most states (38) have used one or more of these consultations, 13 states have not used any.
Because some states do not avail themselves of the optional consultations, these consultations do not provide CMS with information on all states' progress in implementing the requirements. Officials from some of the seven selected states reported limitations affecting their use of the consultations.

CMS contractor site visits. One-third of states (17), including three of the seven selected states, participated in at least one multi-day CMS contractor site visit, as of June 2019. Officials from CMS and its contractor told us that during the site visit, the state completes a self-assessment, followed by the contractor's assessment of the implementation status of all provider screening and enrollment requirements, to identify requirements that have not been implemented and opportunities for improving the state's screening and enrollment procedures. (See fig. 3 for a map of states that participated in the CMS contractor site visit.) After the visit, the contractor provides a report that summarizes the state's status toward implementing each requirement, such as full, partial, nearly complete, and not started. (See fig. 4.) CMS officials consider any requirements that are not fully implemented as "opportunities for improvement." CMS contractor site visits are not required and are not considered audits; the agency does not track states' progress on implementing requirements and opportunities for improvement unless the state engages CMS in follow-up. At the time of their contractor site visit, 16 of the 17 states that opted for this service had not fully implemented all provider screening and enrollment requirements. Officials from the three selected states that received a CMS contractor site visit told us that the visit helped (1) accurately identify requirements that their state had not fully implemented, or (2) establish priorities for making changes.
Two of these states found it helpful to learn from the contractor about other states' best practices and requested a return visit. Officials from another state told us they were able to make positive changes to their screening and enrollment procedures immediately after the visit, such as improving documentation of database checks through the use of screenshots to record search results.

Data compare service. About half of all states (25), including four of the seven selected states, used the data compare service as of June 2019—a process by which states submit a list of all their active providers, and CMS provides a full report of the state's providers who were previously screened by Medicare, as well as identifies providers the state may need to take action on because, among other reasons, they were terminated from Medicaid. For the states that have used the data compare service, CMS officials reported that the service was able to screen between 40 and 80 percent of their providers. Officials from the four selected states said it was also useful for testing their provider screening and enrollment procedures to see whether the service would identify any providers they should have excluded in their screening, and three of these states said it was useful for streamlining their provider revalidations. For example, officials from one state reported that CMS's data compare service screened half of the approximately 80,000 providers they needed to revalidate. Additionally, officials from one state that had not yet used the service told us they would consider using it in the future for both of these purposes. (See fig. 5 for a map of states that opted for the data compare service.) However, officials from all seven selected states identified limitations of the data compare service that led some states to use the service less frequently and three states to not use the service at all. CMS officials acknowledged the three limitations reported by state officials:

1. Time for receiving results. The results from the data compare service were not timely enough to help states with screening newly enrolling providers. Officials from one state explained that some provider information may become outdated by the time the results are received 6 to 8 weeks later, which makes the service less useful than it could be.

2. Different Medicare and Medicaid address entries. The data compare service's addresses reflected Medicare practice or billing locations that may be different from providers' Medicaid addresses. Because these addresses do not match, they could not be relied upon for updating the state's provider records or to help states conduct site visits required for screening and enrolling moderate- and high-risk providers.

3. Additional burden for manual enrollment systems. Officials from two selected states told us that manually extracting provider data from their system—including names, addresses, Social Security numbers, and National Provider Identifiers—and manually re-entering the results from CMS for each provider into their system was burdensome and resource-intensive, leading one of these states to stop using the service.

CMS offers guidance and other supports to states on a regular and periodic basis, including monthly calls with states and assigning states to a CMS contact (see sidebar). These services also assist states with implementing the Medicaid provider screening and enrollment requirements. Officials from all of our selected states told us the guidance and other supports were helpful. According to CMS officials, the extent to which states participate in these other supports varies, because the level of participation is optional. CMS officials also told us that they use these other supports, including monthly calls and ad hoc emails, to discuss progress and keep a record of information provided; however, the agency does not revisit or require corrective actions unless the state initiates it.
The PERM and other methods CMS uses to oversee states' efforts to screen and enroll Medicaid providers do not provide CMS with comprehensive and timely information on states' compliance with the requirements. Some methods do not fully track whether states have enrolled all types of providers and are in compliance with all the requirements; other program integrity oversight methods have not been conducted on all states. Further, these methods do not ensure timely follow-up to address identified concerns. The PERM's components—FFS, managed care, and beneficiary eligibility determinations—measure improper payments across all states; as previously noted, the FFS component is the only component CMS uses to assess states' compliance with provider screening and enrollment requirements. However, using the PERM to oversee states' compliance with the requirements has limitations, including the following:

The PERM does not examine whether providers under contract with MCOs are appropriately screened and enrolled. The PERM assesses states' compliance with the provider screening and enrollment requirements by reviewing provider information for claims paid under FFS; it does not review such information for services financed under managed care.

Currently, the PERM does not examine ownership disclosure and certain other provider screening and enrollment requirements. CMS officials told us the agency plans to assess the feasibility of including ownership disclosure requirements in the PERM over the next 3 years.

The PERM does not ensure that CMS identifies areas of non-compliance in a timely manner. CMS conducts the PERM in each state every 3 years, and states develop corrective action plans in response to findings from the PERM; thus, it may be years before CMS identifies—and states resolve—areas of non-compliance with the provider screening and enrollment requirements. (See fig. 6.)
Although CMS follows up annually with states regarding their corrective action plans, it does not fully assess states' progress toward implementing their plans until the next PERM cycle, which is 3 years later. Further, while four of our selected states had implemented all their corrective action plans regarding provider screening and enrollment requirements within 1 year of PERM findings, the other three states had not fully implemented their plans about 2 years after PERM findings. CMS officials emphasized that developing and tracking corrective action plans was a collaborative process and that states may change corrective action plans in response to competing priorities. CMS uses other methods to oversee states' compliance with the provider screening and enrollment requirements—focused program integrity reviews and desk reviews—that are not optional and have resulted in findings. However, these methods do not provide the agency with comprehensive and timely information on states' compliance with the requirements. Specifically, these methods have not been conducted in all states, performed in a timely manner, or included a systematic review of states' compliance with all the provider screening and enrollment requirements. For example:

Focused program integrity reviews. CMS has not conducted focused program integrity reviews examining specific areas in Medicaid for all states. Most of the reviews performed did not include a comprehensive or timely examination of states' compliance with provider screening and enrollment requirements. Overall, CMS conducted 42 focused program integrity reviews in 39 states in fiscal years 2014 through 2018. Of these 42 reviews, nine examined states' compliance with provider screening and enrollment requirements, the last of which was completed in fiscal year 2015.
CMS also conducted focused program integrity reviews on managed care for 34 of the 41 states with managed care expenditures in fiscal year 2017. However, nearly all of these reviews (33) were conducted prior to January 2018, when states were required to screen and enroll all managed care providers, as required by the 21st Century Cures Act. CMS also conducted seven focused reviews examining personal care services in seven states—which included examining screening and enrollment requirements for providers of these services.

Desk reviews. Off-site desk reviews that examine specific aspects of states' program integrity activities do not include a comprehensive or timely examination of states' compliance with the provider screening and enrollment requirements. CMS has conducted desk reviews examining activities related to provider screening and enrollment, such as corrective actions states have taken in response to PERM findings. However, desk reviews on corrective action plans are limited to examining findings on provider screening and enrollment identified during the PERM and are not conducted until 3 years after the PERM has occurred. For example, CMS told us that in 2018 the agency conducted desk reviews on the 17 states that underwent the PERM in fiscal year 2015. CMS also conducted 35 desk reviews on potential payments to terminated providers since fiscal year 2014.

The PERM and CMS's other oversight methods do not provide CMS with sufficient or timely information about states' screening and enrollment procedures for all Medicaid provider types, and most states with managed care expenditures have not undergone a managed care-focused program integrity review since the 21st Century Cures Act screening and enrollment provisions went into effect.
The lack of complete information on whether states are screening and enrolling all providers according to requirements is inconsistent with federal internal controls on assessing risk, which note that management should consider the potential for fraud when identifying, analyzing, and responding to risks. Without complete information, CMS cannot ensure that only eligible providers are participating in the Medicaid program, leaving the program vulnerable to improper payments. Further, CMS does not obtain timely information on all states' actions to address areas of non-compliance or track progress toward addressing these areas. The length of the PERM cycle—3 years—and the time for performing corrective actions to address PERM findings limit CMS's awareness of states' progress, or lack thereof, toward implementing requirements. As a result, CMS lacks assurance that states are addressing areas of non-compliance or that such actions are being taken in a timely manner. This is inconsistent with federal internal controls on monitoring, which note that management should remediate deficiencies in the internal control system on a timely basis.

Conclusions

CMS has a range of activities that provide the agency with some knowledge of states' implementation of required provider screening and enrollment under PPACA and the 21st Century Cures Act; however, the agency's oversight activities are not designed to systematically examine compliance with all the requirements for all providers in a timely manner. Notably, the PERM does not examine managed care providers, and CMS's assessment of compliance and monitoring of corrective actions are not timely, because they are based on the 3-year PERM cycle. Also, focused program integrity reviews—which may examine states' oversight of MCOs and their compliance with provider screening and enrollment requirements for providers participating in managed care—have not been conducted in all states.
The one activity that can provide CMS and states with a complete and timely assessment of states' implementation of the provider screening and enrollment requirements is optional. While CMS does some tracking of state-reported information on the status of states' implementation of the requirements, this oversight does not include states that have not availed themselves of the support CMS provides. Since states are not required to participate in these optional consultations, the states that may face the greatest challenges with implementing the provider screening and enrollment requirements might not volunteer to participate in the consultations. Without a thorough review of states' implementation of the provider screening and enrollment requirements, as well as processes to monitor states to ensure timely remediation of deficiencies, the agency lacks assurance that only eligible providers are participating in the Medicaid program, leaving the program at risk for improper payments.

Recommendations

We are making the following two recommendations to CMS:

The Administrator of CMS should expand its review of states' implementation of the provider screening and enrollment requirements to include states that have not made use of CMS's optional consultations. Similar to CMS's contractor site visits, such reviews should include any necessary steps to address areas of noncompliance for all types of enrolled providers, including those under contract with MCOs. (Recommendation 1)

The Administrator of CMS should annually monitor progress toward addressing any areas of noncompliance related to the provider screening and enrollment requirements for any state with one or more corrective action plans. (Recommendation 2)

Agency Comments

We provided a draft of this report to HHS for review and comment. In its written comments, HHS concurred with our recommendations; the full text of HHS's comments is reproduced in appendix I.
Regarding our first recommendation, HHS stated that it will reach out to states that have not yet participated in optional consultations to discuss their progress and outline steps that the states should take to come into full compliance with the provider screening and enrollment requirements. Regarding our second recommendation, HHS stated that it is in the process of instituting more frequent reviews of corrective action plans resulting from one of CMS’s oversight activities—the PERM—stating that such reviews will now be performed quarterly. However, HHS’s comments did not discuss monitoring areas of noncompliance that are identified through other oversight activities, such as focused program integrity reviews, which include reviews of states’ screening and enrollment of providers who are under contract with MCOs. We recommend that CMS annually monitor progress toward addressing any areas of noncompliance related to the provider screening and enrollment requirements, which would include areas of noncompliance identified through the PERM, optional consultations, and other oversight activities. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. 
Appendix I: Comments from the Department of Health and Human Services

Appendix II: Summary of Medicaid Provider Screening and Enrollment Requirements

The state Medicaid agency must require all enrolled providers to be screened in accordance with applicable requirements.

The state Medicaid agency must have a method for verifying that a provider is licensed without restrictions in accordance with the laws of that state.

The state Medicaid agency must complete revalidation of enrollment for all providers, regardless of provider type, at least every 5 years.

The state Medicaid agency must deny enrollment to any provider and disclosing entity that does not successfully pass or comply with the screening process, and the agency must terminate providers who no longer meet the requirements for enrollment.

The state Medicaid agency must rescreen a provider who has been deactivated for any reason prior to the provider's reactivation.

The state Medicaid agency must share with providers who are terminated or denied enrollment the process for appealing the decision.

The state Medicaid agency must conduct site visits for providers who are designated at the moderate- or high-risk levels.

Fingerprint criminal background checks. The state Medicaid agency must complete fingerprint-based criminal background checks for providers and disclosing entities in the high-risk category.

The state Medicaid agency must confirm the identity and determine the exclusion status of providers, any person with an ownership or control interest, and any agent or managing employee of the provider.

The state Medicaid agency must require all claims for payment for items and services that were ordered or referred to contain the National Provider Identifier of the physician or other professional who ordered or referred such items or services.
Screening levels for Medicaid providers. The state Medicaid agency must screen all initial applications based on a categorical risk level of "limited," "moderate," or "high."

The state Medicaid agency must collect an application fee from institutional providers during a new enrollment or revalidation, unless Medicare or another Medicaid agency has already collected an application fee.

Allows CMS and states to implement temporary moratoria pausing the enrollment of new provider types in a given location.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Leslie V. Gordon (Assistant Director), Kristin Ekelund (Analyst-in-Charge), Manuel Buentello, Drew Long, Giao N. Nguyen, and Chris Zakroff made key contributions to this report. Also contributing were Marissa Coloske, Vikki Porter, and Jennifer Whitworth.
Why GAO Did This Study

A crucial component of protecting the integrity of the Medicaid program is ensuring that only eligible providers participate in Medicaid. States' non-compliance with provider screening and enrollment requirements contributed to over a third of the $36.3 billion in estimated improper payments in Medicaid in 2018. To improve the integrity of the Medicaid program, PPACA and the 21st Century Cures Act established new requirements for screening and enrolling providers and expanded enrollment to include additional provider types. In this report, GAO (1) describes challenges states faced implementing provider screening and enrollment requirements; and (2) examines CMS support for and oversight of states' implementation of these requirements. GAO reviewed federal laws and CMS guidance. GAO also reviewed CMS documents, including reports resulting from CMS oversight activities published from 2014 through 2018 for seven states. These states were selected based on their use of CMS's contractor site visits, among other things. GAO also interviewed officials from CMS and the seven selected states.

What GAO Found

Officials from seven selected states that GAO interviewed described challenges they faced implementing new Medicaid provider screening and enrollment requirements, established by the Patient Protection and Affordable Care Act (PPACA) in 2010 and the 21st Century Cures Act in 2016. These challenges included establishing procedures for risk-based screenings, using federal databases and collecting required information, and screening an increased volume of providers. Due in part to these challenges, officials from five of the seven selected states told GAO they had not implemented certain requirements. For example, one state plans to launch its new information technology system, which automates screenings, before it will enroll providers under contract with managed care organizations, as required under these laws.
The Centers for Medicare & Medicaid Services (CMS)—the federal agency that oversees Medicaid—supports states' implementation of new requirements with tailored optional consultations, such as CMS contractor site visits that examine the extent of states' implementation. Yet, because these are optional, states that need support might not participate, and CMS would not have information on those states. CMS uses other methods to oversee states' compliance, such as the Payment Error Rate Measurement (PERM) process for estimating improper payments, and focused program integrity reviews.

PERM. This process assesses states' compliance with provider screening and enrollment requirements, but does not assess compliance for all providers and all requirements, and occurs once every 3 years.

Focused program integrity reviews. These reviews examine specific areas in Medicaid, like state compliance with provider screening and enrollment requirements, but have not been done in all states. CMS conducted reviews in 39 states in fiscal years 2014 through 2018.

Collectively, CMS's oversight methods do not provide it with comprehensive and timely reviews of states' implementation of the provider screening and enrollment requirements or the remediation of deficiencies. As a result, CMS lacks assurance that only eligible providers are participating in the Medicaid program.

What GAO Recommends

GAO recommends that CMS (1) expand its review of states' implementation of provider screening and enrollment requirements to include states that have not participated in optional consultations; and (2) for states not fully compliant with the requirements, annually monitor the progress of those states' implementation. The Department of Health and Human Services, the department that houses CMS, concurred with both recommendations.
Background

As shown in table 1, the cost of counting the nation's population has been escalating with each decade. The 2010 Census was the most expensive in U.S. history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion 2000 Census (in 2020 dollars). According to the Bureau, the total cost of the 2020 Census was estimated at $12.3 billion in October 2015; by October 2017, that estimate had grown to approximately $15.6 billion, an increase of about $3 billion. Additionally, Bureau officials told us that while the estimated cost of the census had increased to $15.6 billion, the Bureau was nevertheless managing the 2020 Census to a lower cost of $14.1 billion. Bureau officials explained that the $14.1 billion includes all program costs and contingency funds to cover risks and general estimating uncertainty. The remaining $1.5 billion of the estimated cost is additional contingency for "unknown unknowns"—that is, low-probability events that could cause massive disruptions—and several what-if scenarios, such as an increase in the wage rate or additional supervisors needed to manage field operations. Moreover, as shown in figure 1, the average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 constant dollars). At the same time, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates have led to higher costs because the Bureau sends temporary workers to each non-responding household to obtain census data. Achieving a complete and accurate census has become an increasingly daunting task, in part, because the population is growing larger, more diverse, and more reluctant to participate in the enumeration. In many ways, the Bureau has had to invest substantially more resources each decade to conduct the enumeration.
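The cost comparisons above can be verified directly from the figures cited in the text. The short sketch below is illustrative only, using the constant-dollar amounts reported here (it is not a Bureau or GAO calculation):

```python
# Illustrative arithmetic using the 2020-constant-dollar figures cited above.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Total census costs, in billions of 2020 dollars.
cost_2000, cost_2010 = 9.4, 12.3
print(f"2000 -> 2010: about {pct_increase(cost_2000, cost_2010):.0f}% increase")  # ~31%

# Average cost per housing unit, in 2020 dollars.
per_unit_1970, per_unit_2010 = 16, 92
print(f"Per-unit cost grew about {per_unit_2010 / per_unit_1970:.1f}x from 1970 to 2010")

# Growth in the 2020 estimate between October 2015 and October 2017, in billions.
print(f"2020 estimate grew by about ${15.6 - 12.3:.1f} billion")
```

These checks reproduce the roughly 31 percent cost growth between 2000 and 2010 and the roughly $3 billion increase in the 2020 estimate stated above.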
In addition to these external societal challenges that make achieving a complete count a daunting task, the Bureau also faces a number of internal management challenges that affect its capacity and readiness to conduct a cost-effective enumeration. Some of these issues—such as acquiring and developing IT systems and preparing reliable cost estimates—are long-standing in nature. At the same time, as the Bureau looks toward 2020, it has faced emerging and evolving uncertainties. For example, on March 26, 2018, the Secretary of Commerce announced his decision to add a question to the decennial census on citizenship status, which resulted in various legislative actions and legal challenges. Ultimately, the case was heard by the U.S. Supreme Court, which, in a June 26, 2019, ruling, prevented the addition of the question because the Court found that the evidence Commerce provided in the case did not match the Secretary's explanation. In addition, the Fourth Circuit Court of Appeals remanded other legal challenges to the district court on June 24, 2019, for further legal action, which is yet to be resolved. According to Bureau officials, on June 28, 2019, Commerce asked the Bureau to put its scheduled July 1 start date for printing questionnaires on hold while it considered legal implications of the Supreme Court ruling. On July 2, 2019, Commerce told the Bureau to proceed with printing questionnaires and other materials without the citizenship question on them. As of July 5, 2019, the Department of Justice (DOJ) indicated that, although printing was continuing without the citizenship question, DOJ was evaluating legal options to include the question. On July 11, 2019, the President announced that instead of collecting this information from the census questionnaire, he ordered all federal agencies to provide data on citizenship status to Commerce using legally available federal records.
We have not analyzed this decision or its implications, if any, for how the Bureau will tabulate its official counts. We will continue to monitor developments for Congress. The Bureau also faced budgetary uncertainties that, according to the Bureau, led to the curtailment of testing in 2017 and 2018. However, the Consolidated Appropriations Act, 2018 appropriated for the Periodic Censuses and Programs account $2.544 billion, which more than doubled the Bureau's request in the President's Fiscal Year 2018 Budget of $1.251 billion. According to the explanatory statement accompanying the act, the appropriation, which is available through fiscal year 2020, was provided to ensure the Bureau has the necessary resources to immediately address any issues discovered during operational testing, and to provide a smoother transition between fiscal year 2018 and fiscal year 2019. The availability of those resources enabled the Bureau to continue preparations for the 2020 Census during the 35 days in December 2018 to January 2019 when appropriations lapsed for the Bureau and a number of other federal agencies. Moreover, the Consolidated Appropriations Act, 2019 appropriated for the Periodic Censuses and Programs account $3.551 billion. According to Bureau officials, this level of funding for fiscal year 2019 is sufficient to carry out 2020 Census activities as planned. Importantly, the census is conducted against a backdrop of immutable deadlines. In order to meet the statutory deadline for completing the enumeration, census activities need to take place at specific times and in the proper sequence. Thus, it is absolutely critical for the Bureau to stay on schedule. Figure 2 shows some dates for selected decennial events.

The Bureau Has Begun Opening Offices and Hiring Temporary Staff

The Bureau has begun to open its area census offices (ACO) for the 2020 Census.
It has signed leases for all 248 ACOs, of which 39 will be open for the address canvassing operation set to begin in August 2019, in which staff verify the location of selected housing units. The remaining 209 offices will begin opening this fall. In 2010, the Bureau opened 494 census offices. The Bureau has been able to reduce its infrastructure because it is relying on automation to assign work and to record payroll. Therefore, there is less paper—field assignments, maps, and daily payroll forms—to manually process. For the 2020 Census, the Bureau is refining its recruiting and hiring goals, but tentatively plans to recruit approximately 2.24 million applicants and to hire over 400,000 temporary field staff from that applicant pool for two key operations: address canvassing and nonresponse follow-up, in which staff visit households that do not return census forms to collect data in person. In 2010, the Bureau recruited 3.8 million applicants and hired 628,000 temporary workers to conduct the address canvassing and nonresponse follow-up field operations. According to Bureau officials, the Bureau has reduced the number of temporary staff it needs to hire because automation has made field operations more efficient and there is less paper. As of June 2019, the Bureau reported that for all 2020 Census operations it had processed about 430,000 applicants. In addition, the Bureau was seeking to hire approximately 1,500 partnership specialists by the end of June 2019 to help increase census awareness and participation in minority communities and hard-to-reach populations. As of July 9, 2019, the Bureau's latest biweekly reporting indicated that it had hired 813 partnership specialists as of June 22, 2019. Moreover, as of July 10, 2019, Bureau officials told us that another 830 applicants were waiting to have their background checks completed.
According to Bureau officials, hiring data are based on payroll dates generated biweekly, while background check data are tracked internally. Therefore, more current hiring data were not available as of July 10, 2019, to indicate whether the Bureau had met its June 30 hiring goal. Among other things, partnership specialists are expected to either provide or identify partners to help provide supplemental language support to respondents locally in over 100 different languages. We will continue to monitor the Bureau's progress in meeting its partnership specialist staffing goals and addressing any turnover that takes place. Hiring partnership specialists in a timely manner and maintaining adequate partnership specialist staffing levels are key to the Bureau's ability to carry out its planned outreach efforts, especially to hard-to-count communities. Moreover, Bureau officials also stated that the current economic environment (i.e., the low unemployment rate compared to the economic environment of the 2010 Census) has not yet impacted their ability to recruit staff. The Bureau will continue to monitor the impact of low unemployment on its ability to recruit and hire at the local and regional levels.

The Bureau Plans to Rely Heavily on IT for the 2020 Census

For the 2020 Census, the Bureau is substantially changing how it intends to conduct the census, in part by re-engineering key census-taking methods and infrastructure, and making use of new IT applications and systems. For example, the Bureau plans to offer an option for households to respond to the survey via the internet and enable field-based enumerators to use applications on mobile devices to collect survey data from households. To do this, the Bureau plans to utilize 52 new and legacy IT systems, and the infrastructure supporting them, to conduct the 2020 Census. A majority of these 52 systems have been tested during operational tests in 2017 and 2018.
For example, the Bureau conducted its 2018 End-to-End test, which included 44 of the 52 systems and was intended to test all key systems and operations in a census-like environment to ensure readiness for the 2020 Census. Nevertheless, additional IT development and testing work needs to take place before the 2020 Census. Specifically, officials from the Bureau’s Decennial Directorate said they expect that the systems will need to undergo further development and testing due to, among other things, the need to add functionality that was not part of the End-to-End test, scale system performance to support the number of respondents expected during the 2020 Census, and address system defects identified during the 2018 End-to-End test. To prepare the systems and technology for the 2020 Census, the Bureau is also relying on substantial contractor support. For example, it is relying on contractors to develop a number of systems and components of the IT infrastructure, including the IT platform that is intended to be used to collect data from households responding via the internet and telephone, and for nonresponse follow-up activities. Contractors are also deploying the IT and telecommunications hardware in the field offices and providing device-as-a-service capabilities by procuring the mobile devices and cellular service to be used for nonresponse follow-up. In addition to the development of technology, the Bureau is relying on a technical integration contractor to integrate all of the key systems and infrastructure. The contractor’s work is expected to include, among other things, evaluating the systems and infrastructure and acquiring the infrastructure (e.g., cloud or data center) to meet the Bureau’s scalability and performance needs; integrating all of the systems; and assisting with technical, performance and scalability, and operational testing activities.
2020 Census Identified by GAO as a High-Risk Area

In February 2017, we added the 2020 Decennial Census as a high-risk area needing attention from Congress and the executive branch. This was due to significant risks related to, among other things, innovations never before used in prior enumerations, the acquisition and development of IT systems, and expected escalating costs. Among other things, we reported that the commitment of top leadership was needed to ensure the Bureau’s management, culture, and business practices align with a cost-effective enumeration. We also stressed that the Bureau needed to rigorously test census-taking activities; ensure that scheduling adheres to best practices; improve its ability to manage, develop, and secure its IT systems; and have better oversight and control over its cost estimation process. Our experience has shown that agencies are most successful at removal from our High-Risk List when leaders give top-level attention to the five criteria for removal and Congress takes any needed action. The five criteria for removal that we identified in November 2000 are as follows:

Leadership Commitment. The agency has demonstrated strong commitment and top leadership support.

Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).

Action Plan. A corrective action plan exists that defines the root causes and solutions, and that provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.

Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.

Demonstrated Progress. The agency has demonstrated progress in implementing corrective measures and in resolving the high-risk area.

These five criteria form a road map for efforts to improve, and ultimately address, high-risk issues.
Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. As we reported in the March 2019 high-risk report, the Bureau’s efforts to address the risks and challenges for the 2020 Census had fully met one of the five criteria for removal from the High-Risk List—leadership commitment—and partially met the other four, as shown in figure 3. Additional details about the status of the Bureau’s efforts to address this high-risk area are discussed later in this statement.

The 2020 Census Remains High Risk Due to Challenges Facing the Enumeration

The 2020 Census is on our list of high-risk programs because, among other things, (1) innovations never before used in prior enumerations are not expected to be fully tested, (2) the Bureau continues to face challenges in implementing IT systems, (3) the Bureau faces significant cybersecurity risks to its systems and data, and (4) the Bureau’s cost estimate for the 2020 Census was unreliable. If not sufficiently addressed, these risks could adversely impact the cost and quality of the enumeration. Moreover, the risks are compounded by other factors that contribute to the challenge of conducting a successful census, such as the nation’s increasingly diverse population and concerns over personal privacy.

Key Risk #1: The Bureau Redesigned the Census to Control Costs, and Will Need to Take Several Actions to Better Manage Risks

The basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for non-respondents—has been in use since 1970. However, a lesson learned from the 2010 Census and earlier enumerations is that this traditional design is no longer capable of cost-effectively counting the population. In response to its own assessments, our recommendations, and studies by other organizations, the Bureau has fundamentally re-examined its approach for conducting the 2020 Census.
Specifically, its plan for 2020 includes four broad innovation areas: re-engineering field operations, using administrative records, verifying addresses in-office, and developing an internet self-response option (see table 2). The Bureau initially estimated that, if these innovations function as planned, they could result in savings of over $5 billion (in 2020 constant dollars) when compared to its estimates of the cost of conducting the census with traditional methods. However, in June 2016, we reported that the Bureau’s initial life-cycle cost estimate developed in October 2015 was not reliable and did not adequately account for risk. As discussed earlier in this statement, the Bureau has updated its estimate from $12.3 billion and now estimates a life-cycle cost of $15.6 billion, which would result in smaller potential savings from the innovative design than the Bureau originally estimated. According to the Bureau, the goal of the cost estimate increase was to ensure that quality was fully addressed. While the planned innovations could help control costs, they also introduce new risks, in part because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. Our prior work has shown the importance of the Bureau conducting a robust testing program, including the 2018 End-to-End test. Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. To address some of these challenges, we have made numerous recommendations aimed at improving re-engineered field operations, using administrative records, verifying the accuracy of the address list, and securing census responses via the internet.
The Bureau has held a series of operational tests since 2012, but according to the Bureau, it scaled back its most recent field tests because of funding uncertainties. For example, the Bureau canceled the field components of the 2017 Census Test, including nonresponse follow-up, a key census operation. In November 2016, we reported that the cancelation of the 2017 Census Test was a lost opportunity to test, refine, and integrate operations and systems, and that it put more pressure on the 2018 End-to-End test to demonstrate that enumeration activities will function under census-like conditions as needed for 2020. However, in May 2017, the Bureau scaled back the operational scope of the 2018 End-to-End test and, of the three planned test sites, only the Rhode Island site would fully implement the 2018 End-to-End test. The Washington and West Virginia sites would test just one field operation. In addition, due to budgetary concerns, the Bureau removed its coverage measurement operation (and the technology that supports it) from the scope of the test, delaying ramp-up and preparations for that operation. However, removal of the coverage measurement operation did not affect testing of the delivery of apportionment or redistricting data. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, in part because the 2018 End-to-End test was the last opportunity to demonstrate census technology and procedures across a range of geographic locations, housing types, and demographic groups under decennial-like conditions prior to the 2020 Census. We reported on the 2018 End-to-End test in December 2018 and noted that the Bureau had made progress addressing prior test implementation issues but still faced challenges. As the Bureau studies the results of its testing to inform the 2020 Census, it will be important that it addresses key program management issues that arose during implementation of the test.
Namely, by not aligning the skills, responsibilities, and information flows for first-line supervisors during field data collection, the Bureau limited the supervisors’ role in supporting enumerators within the re-engineered field operation. The Bureau also lacked mid-operation training or guidance, which, if implemented in a targeted, localized manner, could have further helped enumerators navigate procedural modifications and any commonly encountered problems when enumerating. It will be important for the Bureau to prioritize its mitigation strategies for these implementation issues so that it can maximize readiness for the 2020 Census.

The Bureau Has Developed Hundreds of Risk Mitigation and Contingency Plans, but Those We Reviewed Were Missing Key Information

To manage risk to the 2020 Census, the Bureau has developed hundreds of risk mitigation and contingency plans. Mitigation plans detail how an agency will reduce the likelihood of a risk event and its impacts, if it occurs. Contingency plans identify how an agency will reduce or recover from the impact of a risk after it has been realized. In May 2019, we reported that the Bureau had identified 360 active risks to the 2020 Census as of December 2018—meaning the risk event could still occur and adversely impact the census. Of these, 242 met the Bureau’s criteria for requiring a mitigation plan and, according to the Bureau’s risk registers, 232 had one (see table 3). In addition, 146 risks met the Bureau’s criteria for requiring a contingency plan and, according to the Bureau’s risk registers, 102 had one. Bureau guidance states that these plans should be developed as soon as possible after a risk is added to the risk register, but it does not establish a clear time frame for doing so. Consequently, some risks may go without required plans for extended periods.
We found that, as of December 2018, some of the risks without required plans had been added to the Bureau’s risk registers in recent months, but others had been added more than 3 years earlier. We reviewed the mitigation and contingency plans in detail for six risks that the Bureau identified as among the major concerns that could affect the 2020 Census. These included cybersecurity incidents, late operational design changes, and integration of the 52 systems and 35 operations supporting the 2020 Census. We found that the plans did not consistently include key information needed to manage the risk. For example, the Bureau’s contingency plan for late operational design changes did not include activities specific to the three most likely late operational design changes—including removal of the citizenship question as a result of litigation or congressional action—that the Bureau could carry out to lessen their adverse impact on the enumeration, should they occur. We found that the gaps stemmed either from requirements missing from the Bureau’s decennial risk management plan, or from risk owners—the individuals assigned to manage each risk—not fulfilling all of their risk management responsibilities. Bureau officials said that risk owners are aware of these responsibilities but do not always fulfill them given competing demands. Bureau officials also said that they are managing risks to the census, even if not always reflected in their mitigation and contingency plans. However, if such actions are reflected in disparate documents or are not documented at all, then decision makers are left without an integrated and comprehensive picture of how the Bureau is managing risks to the census.
We made seven recommendations to improve the Bureau’s management of risks to the 2020 Census, including that the Bureau develop mitigation and contingency plans for all risks that require them, establish a clear time frame for plan development, and ensure that the plans have the information needed to manage the risk. Commerce agreed with our recommendations and said it would develop an action plan to address them.

Key Risk #2: The Bureau Faces Challenges in Implementing IT Systems

We have previously reported that the Bureau faces challenges in managing and overseeing IT programs, systems, and contractors supporting the 2020 Census. Specifically, we have noted challenges in the Bureau’s efforts to manage, among other things, the schedules and contracts for its systems. As a result of these challenges, the Bureau is at risk of being unable to fully implement the systems necessary to support the 2020 Census and conduct a cost-effective enumeration.

The Bureau Has Made Initial Progress against Its Revised Development and Testing Schedule, but Risks Missing Near-term Milestones

To help improve its implementation of IT for the 2020 Census, the Bureau revised its systems development and testing schedule. Specifically, in October 2018, the Bureau organized the development and testing schedule for its 52 systems into 16 operational deliveries. Each of the 16 operational deliveries has milestone dates for, among other things, development, performance and scalability testing, and system deployment. According to Bureau officials in the Decennial Directorate, the schedule was revised, in part, due to schedule management challenges experienced, and lessons learned, while completing development and testing during the 2018 End-to-End test. The Bureau has made initial progress in executing work against its revised schedule.
For example, the Bureau completed development of the systems in the first operational delivery—for 2020 Census early operations preparations—in July 2018, and deployed these systems into production in October 2018. However, our current work has determined that the Bureau is at risk of not meeting several near-term systems testing milestones. As of June 2019, 11 systems that are expected to be used in a total of five operational deliveries were at risk of not meeting key milestones for completing system development, performance and scalability testing, and/or integration testing. These 11 systems are needed for, among other things, data collection for operations, business and support automation, and customer support during self-response. Figure 4 presents an overview of the status for all 16 operational deliveries, as of June 2019. The at-risk systems previously discussed add uncertainty to a highly compressed time frame over the next 6 months. Importantly, between July and December 2019, the Bureau is expected to be in the process of integration testing the systems in 12 operational deliveries. Officials from the Bureau’s integration contractor noted concern that the current schedule leaves little room for any delays in completing the remaining development and testing activities. In addition to managing the compressed testing time frames, the Bureau also has to quickly finalize plans related to its IT infrastructure. For example, as of June 2019, the Bureau stated that it was still awaiting final approval for its Trusted Internet Connection. Given that these plans may impact systems being tested this summer or deployed into production for the address canvassing operation in August 2019, it is important that the Bureau quickly addresses this matter. Our past reporting noted that the Bureau faced significant challenges in managing its schedule for system development and testing that occurred in 2017 and 2018. 
We reported that, while the Bureau had continued to make progress in developing and testing IT systems for the 2020 Census, it had experienced delays in developing systems to support the 2018 End-to-End test. These delays compressed the time available for system and integration testing and for security assessments. In addition, several systems experienced problems during the test. We noted then, and reaffirm now, that continued schedule management challenges may compress the time available for the remaining system and integration testing and increase the risk that systems may not function or be as secure as intended. The Bureau has acknowledged that it faces risks to the implementation of its systems and technology. As of May 2019, the Bureau had identified 17 high risks related to IT implementation that may have substantial technical and schedule impacts if realized. Taken together, these risks represent a cross-section of issues, such as schedule delays for a fraud-detection system, the effects of late changes to technical requirements, the need to ensure adequate time for system development and performance and scalability testing, contracting issues, privacy risks, and skilled staffing shortages. Going forward, it will be important that the Bureau effectively manages these risks to better ensure that it meets near-term milestones for system development and testing, and is ready for the major operations of the 2020 Census.

Key Risk #3: The Bureau Faces Significant Cybersecurity Risks to Its Systems and Data

The risks to IT systems supporting the federal government and its functions, including conducting the 2020 Census, are increasing as security threats continue to evolve and become more sophisticated. These risks include insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, and the emergence of new and more destructive attacks.
Underscoring the importance of this issue, we have designated information security as a government-wide high-risk area since 1997 and, in our most recent biennial report to Congress, ensuring the cybersecurity of the nation was one of nine high-risk areas that we reported as needing especially focused executive and congressional attention. Our prior and ongoing work has identified significant challenges that the Bureau faces in securing systems and data for the 2020 Census. Specifically, the Bureau has faced challenges related to completing security assessments, addressing security weaknesses, resolving cybersecurity recommendations from DHS, and addressing numerous other cybersecurity concerns (such as phishing).

The Bureau Has Made Progress in Completing Security Assessments, but Critical Work Remains

Federal law specifies requirements for protecting federal information and information systems, such as those systems to be used in the 2020 Census. Specifically, the Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA) require executive branch agencies to develop, document, and implement an agency-wide program to provide security for the information and information systems that support operations and assets of the agency. In accordance with FISMA, National Institute of Standards and Technology (NIST) guidance, and Office of Management and Budget (OMB) guidance, the Bureau’s Office of the Chief Information Officer (CIO) established a risk management framework. This framework requires system developers to ensure that each of the Bureau’s systems undergoes a full security assessment, and that system developers remediate critical deficiencies. According to the Bureau’s risk management framework, the systems expected to be used to conduct the 2020 Census will need to have complete security documentation (such as system security plans) and an approved authorization to operate prior to their use.
As of June 2019, according to the Bureau’s Office of the CIO:

Thirty-seven of the 52 systems have authorization to operate and will not need to be reauthorized before they are used in the 2020 Census;

Nine of the 52 systems have authorization to operate but will need to be reauthorized before they are used in the 2020 Census;

Five of the 52 systems do not have authorization to operate and will need to be authorized before they are used in the 2020 Census; and

One of the 52 systems does not need an authorization to operate.

Figure 5 summarizes the authorization to operate status for the systems being used in the 2020 Census, as reported by the Bureau in June 2019. As we have previously reported, while large-scale technological changes (such as internet self-response) increase the likelihood of efficiency and effectiveness gains, they also introduce many cybersecurity challenges. The 2020 Census also involves collecting personally identifiable information (PII) on over a hundred million households across the country, which further increases the need to properly secure these systems. Thus, it will be important that the Bureau provides adequate time to perform these security assessments, completes them in a timely manner, and ensures that risks are at an acceptable level before the systems are deployed. We have ongoing work examining how the Bureau plans to address both internal and external cyber threats, including its efforts to complete system security assessments and resolve identified weaknesses.

The Bureau Has Identified a Significant Number of Corrective Actions to Address Security Weaknesses, but Has Not Always Been Timely in Completing Them

FISMA requires that agency-wide information security programs include a process for planning, implementing, evaluating, and documenting remedial actions (i.e., corrective actions) to address any deficiencies in the information security policies, procedures, and practices of the agency.
Additionally, the Bureau’s framework requires it to track security assessment findings that need to be remediated as plans of action and milestones (POA&Ms). These POA&Ms are expected to provide a description of the vulnerabilities identified during the security assessment that resulted from a control weakness. As of the end of May 2019, the Bureau had over 330 open POA&Ms to remediate for issues identified during security assessment activities, including ongoing continuous monitoring. Of these open POA&Ms, 217 (or about 65 percent) were considered “high-risk” or “very high-risk.” While the Bureau established POA&Ms for addressing these identified security control weaknesses, it did not always complete remedial actions in accordance with its established deadlines. For example, of the 217 open “high-risk” or “very high-risk” POA&Ms we reviewed, the Bureau identified 104 as being delayed. Further, 74 of the 104 had missed their scheduled completion dates by 60 or more days. According to the Bureau’s Office of Information Security, these POA&Ms were delayed due to technical challenges or resource constraints in remediating and closing them. We previously recommended that the Bureau take steps to ensure that identified corrective actions for cybersecurity weaknesses are implemented within prescribed time frames. As of late May 2019, the Bureau was working to address our recommendation. Until the Bureau resolves identified vulnerabilities in a timely manner, it faces increased risk, as continuing opportunities exist for unauthorized individuals to exploit these weaknesses and gain access to sensitive information and systems.

The Bureau Is Working with DHS to Improve Its 2020 Census Cybersecurity Efforts, but Lacks a Formal Process to Address DHS’s Recommendations

The Bureau is working with federal and industry partners, including DHS, to support the 2020 Census cybersecurity efforts.
Specifically, the Bureau is working with DHS to ensure a scalable and secure network connection for the 2020 Census respondents (e.g., virtual Trusted Internet Connection with the cloud), improve its cybersecurity posture (e.g., risk management processes and procedures), and strengthen its response to potential cyber threats (e.g., federal cyber incident coordination). Federal law describes practices for strengthening cybersecurity by documenting or tracking corrective actions. As previously mentioned, FISMA requires executive branch agencies to establish a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in their information security policies, procedures, and practices. Standards for Internal Control in the Federal Government calls for agencies to establish effective internal control monitoring that includes a process to promptly resolve the findings of audits and other reviews. Specifically, agencies should document and complete corrective actions to remediate identified deficiencies on a timely basis. This would include correcting identified deficiencies or demonstrating that the findings and recommendations do not warrant agency action. Since January 2017, DHS has been providing cybersecurity assistance (including issuing recommendations) to the Bureau in preparation for the 2020 Census. 
Specifically, DHS has been providing cybersecurity assistance to the Bureau in five areas: (1) management coordination and executive support; (2) cybersecurity threat intelligence and information sharing enhancement through, among other things, a DHS cyber threat briefing to the Bureau’s leadership; (3) network and infrastructure security and resilience, including National Cybersecurity Protection System (also called EINSTEIN) support; (4) incident response and management readiness through a Federal Incident Response Evaluation assessment; and (5) risk management and vulnerability assessments for specific high value assets provided by the Bureau. In the last 2 years, DHS has provided 42 recommendations to assist the Bureau in strengthening its cybersecurity efforts. Among other things, the recommendations pertained to strengthening cyber incident management capabilities, penetration testing and web application assessments of select systems, and phishing assessments to gain access to sensitive PII. Of the 42 recommendations, 10 resulted from DHS’s mandatory services for the Bureau (e.g., risk management and vulnerability assessments for specific high value assets). The remaining 32 recommendations resulted from DHS’s voluntary services for the Bureau (e.g., the Federal Incident Response Evaluation assessment). Due to the sensitive nature of the recommendations, we are not identifying the specific recommendations or specific findings associated with them in this statement. In April 2019, we reported that the Bureau had not established a formal process for documenting, tracking, and completing corrective actions for all of the recommendations provided by DHS. Accordingly, we recommended that the Bureau implement a formal process for tracking and executing appropriate corrective actions to remediate cybersecurity findings identified by DHS. As of late May 2019, the Bureau was working to address our recommendation.
Until the Bureau implements our recommendation, it faces an increased likelihood that findings identified by DHS will go uncorrected and may be exploited to cause harm to the agency’s 2020 Census IT systems and gain access to sensitive respondent data. Implementing a formal process would also help to ensure that DHS’s efforts result in improvements to the Bureau’s cybersecurity posture.

The Bureau Faces Several Other Cybersecurity Challenges in Implementing the 2020 Census

The Bureau faces other substantial cybersecurity challenges in addition to those previously discussed. More specifically, we previously reported that the extensive use of IT systems to support the 2020 Census redesign may help increase efficiency, but that this redesign introduces critical cybersecurity challenges. These challenges include those related to the following:

Phishing. We have previously reported that advanced persistent threats may be targeted against social media web sites used by the federal government. In addition, attackers may use social media to collect information and launch attacks against federal information systems through social engineering, such as phishing. Phishing attacks could target respondents, as well as Bureau employees and contractors. The 2020 Census will be the first in which respondents will be heavily encouraged to respond via the internet. This will likely increase the risk that cyber criminals will use phishing in an attempt to steal personal information. According to the Bureau, it plans to inform the public of the risks associated with phishing through its education and communication campaigns.

Disinformation from social media. We previously reported that one of the Bureau’s key innovations for the 2020 Census is the large-scale implementation of an internet self-response option. The Bureau is encouraging the public to use the internet self-response option through expanded use of social media.
However, the public perception of the Bureau’s ability to adequately safeguard the privacy and confidentiality of the 2020 Census internet self-responses could be influenced by disinformation spread through social media. According to the Bureau, if a substantial segment of the public is not convinced that the Bureau can safeguard public response data against data breaches and unauthorized use, then response rates may be lower than projected, leading to an increase in cases for follow-up and subsequent cost increases. To help address this challenge, the Bureau stated that it plans to inform the public of the risks associated with disinformation from social media through its education and communication campaigns.

Ensuring that individuals gain only limited and appropriate access to 2020 Census data. The Bureau plans to use a public-facing website and Bureau-issued mobile devices to collect PII (e.g., name, address, and date of birth) from the nation’s entire population—estimated to be over 300 million. In addition, the Bureau is planning to obtain and store administrative records containing PII from other government agencies to help augment information that enumerators did not collect. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. Because of these challenges, we have recommended, among other things, that federal agencies improve their response to information security incidents and data breaches involving PII, and consistently develop and implement privacy policies and procedures. Accordingly, it will be important for the Bureau to ensure that only respondents and Bureau officials are able to gain access to this information, and that enumerators and other employees have access only to the information needed to perform their jobs.

Ensuring adequate control in a cloud environment. The Bureau has decided to use cloud solutions as a key component of the 2020 Census IT infrastructure.
We have previously reported that cloud computing has both positive and negative information security implications and, thus, federal agencies should develop service-level agreements with cloud providers. These agreements should specify, among other things, the security performance requirements—including data reliability, preservation, privacy, and access rights—that the service provider is to meet. Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructures they support, may be lost; information—including sensitive personal information—may be compromised; and the agency’s operations could be disrupted. Commerce’s Office of the Inspector General recently identified several challenges the Bureau may face using cloud-based systems to support the 2020 Census. Specifically, in June 2019, the Office of the Inspector General identified, among other things, unimplemented security system features that left critical 2020 Census systems vulnerable during the 2018 End-to-End Test and a lack of fully implemented security practices to protect certain data hosted in the 2020 Census cloud environment. Officials from the Bureau agreed with all eight of the Office of Inspector General’s recommendations regarding 2020 Census cloud-based systems and identified actions taken to address them. Ensuring contingency and incident response plans are in place to encompass all of the IT systems to be used to support the 2020 Census. Because of the brief time frame for collecting data during the 2020 Census, it is especially important that systems are available for respondents to ensure a high response rate. Contingency planning and incident response help ensure that, if normal operations are interrupted, network managers will be able to detect, mitigate, and recover from a service disruption while preserving access to vital information. 
Implementing important security controls, including policies, procedures, and techniques for contingency planning and incident response, helps to ensure the confidentiality, integrity, and availability of information and systems, even during disruptions of service. Without contingency and incident response plans, system availability might be impacted and result in a lower response rate. The Bureau’s CIO has acknowledged these cybersecurity challenges and is working to address them, according to Bureau documentation. In addition, we have ongoing work looking at many of these challenges, including the Bureau’s plans to protect PII, use a cloud-based infrastructure, and recover from security incidents and other disasters.

Key Risk #4: The Bureau Will Need to Control Any Further Cost Growth and Develop Cost Estimates That Reflect Best Practices

Since 2015, the Bureau has made progress in improving its ability to develop a reliable cost estimate. We have reported on the reliability of the $12.3 billion life-cycle cost estimate released in October 2015 and the $15.6 billion revised cost estimate released in October 2017. In 2016 we reported that the October 2015 version of the Bureau’s life-cycle cost estimate for the 2020 Census was not reliable. Specifically, we found that the 2020 Census life-cycle cost estimate partially met two of the characteristics of a reliable cost estimate (comprehensive and accurate) and minimally met the other two (well-documented and credible). We recommended that the Bureau take specific steps to ensure its cost estimate meets the characteristics of a high-quality estimate. The Bureau agreed and has taken action to improve the reliability of the cost estimate. In August 2018 we reported that while improvements had been made, the Bureau’s October 2017 cost estimate for the 2020 Census did not fully reflect all the characteristics of a reliable estimate. (See figure 6.)
In order for a cost estimate to be deemed reliable as described in GAO’s Cost Estimating and Assessment Guide and thus, to effectively inform 2020 Census annual budgetary figures, the cost estimate must meet or substantially meet the following four characteristics: Well-Documented. Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices. Accurate. Accurate estimates are unbiased and contain few mathematical mistakes. Credible. Credible cost estimates must clearly identify limitations due to uncertainty or bias surrounding the data or assumptions, according to best practices. Comprehensive. To be comprehensive an estimate should have enough detail to ensure that cost elements are neither omitted nor double-counted, and all cost-influencing assumptions are detailed in the estimate’s documentation, among other things, according to best practices. The 2017 cost estimate only partially met the characteristic of being well-documented. In general, some documentation was missing, inconsistent, or difficult to understand. Specifically, we found that source data did not always support the information described in the basis of estimate document or could not be found in the files provided for two of the Bureau’s largest field operations: Address Canvassing and Non-Response Follow-Up. We also found that some of the cost elements did not trace clearly to supporting spreadsheets and assumption documents. Failure to document an estimate in enough detail makes it more difficult to replicate calculations, or to detect possible errors in the estimate; reduces transparency of the estimation process; and can undermine the ability to use the information to improve future cost estimates or even to reconcile the estimate with another independent cost estimate.
The Bureau told us it would continue to make improvements to ensure the estimate is well-documented.

Increased Costs Are Driven by an Assumed Decrease in Self-Response Rates and Increases in Contingency Funds and IT Cost Categories

The 2017 life-cycle cost estimate includes much higher costs than those included in the 2015 estimate. The largest increases occurred in the Response, Managerial Contingency, and Census/Survey Engineering categories. For example, increased costs of $1.3 billion in the response category (costs related to collecting, maintaining, and processing survey response data) were in part due to reduced assumptions for self-response rates, leading to increases in the amount of data collected in the field, which is more costly to the Bureau. Contingency allocations increased overall from $1.35 billion in 2015 to $2.6 billion in 2017, as the Bureau gained a greater understanding of risks facing the 2020 Census. Increases of $838 million in the Census/Survey Engineering category were due mainly to the cost of an IT contract for integrating decennial survey systems that was not included in the 2015 cost estimate. Bureau officials attribute a decrease of $551 million in estimated costs for Program Management to changes in the categorization of costs associated with risks. Specifically, in the 2017 version of the estimate, estimated costs related to program risks were allocated to their corresponding work breakdown structure (WBS) element. Figure 7 shows the change in cost by WBS category for 2015 and 2017. More generally, factors that contributed to cost fluctuations between the 2015 and 2017 cost estimates include: Changes in assumptions. Among other changes, a decrease in the assumed rate for self-response from 63.5 percent in 2015 to 60.5 percent in 2017 increased the cost of collecting responses from nonresponding housing units. Improved ability to anticipate and quantify risk.
In general, contingency allocations designed to address the effects of potential risks increased overall from $1.3 billion in 2015 to $2.6 billion in 2017. An overall increase in IT costs. IT cost increases, totaling $1.59 billion, represented almost 50 percent of the total cost increase from 2015 to 2017. More defined contract requirements. Bureau documents described an overall improvement in the Bureau’s ability to define and specify contract requirements. This resulted in updated estimates for several contracts, including for the Census Questionnaire Assistance contract. However, while the Bureau has been able to better quantify risk, in August 2018 we also reported that the Secretary of Commerce included a contingency amount of about $1.2 billion in the 2017 cost estimate to account for what the Bureau refers to as “unknown unknowns.” According to Bureau documentation, these include such risks as natural disasters or cyber attacks. The Bureau provides a description of how the risk contingency for “unknown unknowns” is calculated; however, this description does not clearly link calculated amounts to the risks themselves. Thus, only $14.4 billion of the Bureau’s $15.6 billion cost estimate has justification. According to Bureau officials, the cost estimate remains at $15.6 billion; however, they stated that they are managing the 2020 Census at a lower level of funding—$14.1 billion. In addition, they said that, at this time, they do not plan to request funding for the $1.2 billion contingency fund for unknown unknowns or $369 million in funding for selected discrete program risks for what-if scenarios, such as an increase in the wage rate or additional supervisors needed to manage field operations. Instead of requesting funding for these contingencies up front, the Bureau plans to work with OMB and Commerce to request additional funds, if the need arises.
According to Bureau officials, they anticipate that the remaining $1.1 billion in contingency funding included in the $14.1 billion will be sufficient to carry out the 2020 Census. In June 2016 we recommended the Bureau improve control over how risk and uncertainty are accounted for. This prior recommendation remains valid given that the life-cycle cost estimate still includes the $1.2 billion unjustified contingency fund for “unknown unknowns.” Moreover, given the cost growth between 2015 and 2017, it will be important for the Bureau to monitor cost in real time, as well as document, explain, and review variances between planned and actual costs. In August 2018 we reported that the Bureau had not been tracking variances between estimated life-cycle costs and actual expenses. Tools to track variance enable management to measure progress against planned outcomes and will help inform the 2030 Census cost estimate. Bureau officials stated that they already have systems in place that can be adapted for tracking estimated and actual costs. We will continue to monitor the status of the tracking system. According to Bureau officials, the Bureau planned to release an updated version of the 2020 Census life-cycle estimate in the spring of 2019; however, they had not done so as of June 28, 2019. To ensure that future updates to the life-cycle cost estimate reflect best practices, it will be important for the Bureau to implement our recommendation related to the cost estimate.
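The contingency figures discussed above can be sanity-checked with simple arithmetic. The sketch below uses only the rounded dollar amounts reported here, so small rounding gaps against the reported totals are expected:

```python
# All figures in billions of dollars, rounded as reported above.
total_estimate = 15.6       # October 2017 life-cycle cost estimate
unknown_unknowns = 1.2      # contingency not clearly linked to specific risks

# Only the estimate net of the "unknown unknowns" fund has justification.
justified = round(total_estimate - unknown_unknowns, 1)
print(justified)  # 14.4

# The Bureau says it is managing the census at $14.1 billion, deferring
# the unknown-unknowns fund plus $369 million for discrete what-if risks.
deferred = round(unknown_unknowns + 0.369, 2)
print(deferred)   # 1.57 -- roughly the $1.5 billion gap between $15.6B and $14.1B
```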
Continued Management Attention Needed to Keep Preparations on Track and Help Ensure a Cost-Effective Enumeration

2020 Challenges Are Symptomatic of Deeper Long-Term Organizational Issues

The difficulties facing the Bureau’s preparation for the decennial census in such areas as planning and testing; managing and overseeing IT programs, systems, and contractors supporting the enumeration; developing reliable cost estimates; prioritizing decisions; managing schedules; and other challenges, are symptomatic of deeper organizational issues. Following the 2010 Census, a key lesson learned for 2020 that we identified was ensuring that the Bureau’s organizational culture and structure, as well as its approach to strategic planning, human capital management, internal collaboration, knowledge sharing, capital decision-making, risk and change management, and other internal functions are aligned toward delivering more cost-effective outcomes. The Bureau has made improvements over the last decade, and continued progress will depend in part on sustaining efforts to strengthen risk management activities, enhancing systems testing, bringing in experienced personnel to key positions, implementing our recommendations, and meeting regularly with officials from its parent agency, Commerce. Going forward, we have reported that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency officials to (1) leadership commitment, (2) ensuring capacity, (3) developing a corrective action plan, (4) regular monitoring, and (5) demonstrated progress. Although important steps have been taken in at least some of these areas, overall, far more work is needed. We discuss three of five areas below. The Secretary of Commerce has successfully demonstrated leadership commitment.
For example, the Bureau and Commerce have strengthened this area with executive-level oversight of the 2020 Census by holding regular meetings on the status of IT systems and other risk areas. In addition, in 2017 Commerce designated a team to assist senior Bureau management with cost estimation challenges. Moreover, on January 2, 2019, a new Director of the Census Bureau took office, a position that had been vacant since June 2017. With regard to capacity, the Bureau improved the decennial cost estimation process by establishing guidance that includes: roles and responsibilities for oversight and approval of cost estimation processes, procedures requiring a detailed description of the steps taken to produce a high-quality cost estimate, and a process for updating the cost estimate and associated documents over the life of a project. However, the Bureau continues to experience skills gaps in the government program management office overseeing the $886 million contract for integrating the IT systems needed to conduct the 2020 Census. Specifically, as of June 2019, 14 of 44 positions in this office were vacant. For the monitoring element, we found that, to track the performance of decennial census operations, the Bureau relied on reports measuring progress against pre-set goals for a test conducted in 2018. According to the Bureau, these same reports will be used in 2020 to track progress. However, the Bureau’s schedule for developing IT systems during the 2018 End-to-End test experienced delays that compressed the time available for system testing, integration testing, and security assessments. These schedule delays contributed to systems experiencing problems after deployment, as well as cybersecurity challenges. In the months ahead, we will continue to monitor the Bureau’s progress in addressing each of the five elements essential for reducing the risk to a cost-effective enumeration.
Further Actions Needed on Our Recommendations

Over the past several years, we have issued numerous reports that underscored the fact that, if the Bureau were to successfully meet its cost savings goal for the 2020 Census, the agency needed to take significant actions to improve its research, testing, planning, scheduling, cost estimation, system development, and IT security practices. As of June 2019, we have made 106 recommendations related to the 2020 Census. The Bureau has implemented 74 of these recommendations, 31 remain open, and one recommendation was closed as not implemented. Of the 31 open recommendations, 9 were directed at improving the implementation of the innovations for the 2020 Census. Commerce generally agreed with our recommendations and is taking steps to implement them. Moreover, in April 2019 we wrote to the Secretary of Commerce, providing a list of the 12 open 2020-Census-related recommendations that we designated as “priority.” Priority recommendations are those recommendations that we believe warrant priority attention from heads of key departments and agencies. We believe that attention to these recommendations is essential for a cost-effective enumeration. The recommendations included implementing reliable cost estimation and scheduling practices in order to establish better control over program costs, as well as taking steps to better position the Bureau to develop an internet response option for the 2020 Census. In addition to our recommendations, to better position the Bureau for a more cost-effective enumeration, on March 18, 2019, we met with OMB, Commerce, and Bureau officials to discuss the Bureau’s progress in reducing the risks facing the census. We also meet regularly with Bureau officials and managers to discuss the progress and status of open recommendations related to the 2020 Census, which has resulted in Bureau actions in recent months leading to closure of some recommendations.
We are encouraged by this commitment by Commerce and the Bureau to addressing our recommendations. Implementing our recommendations in a complete and timely manner is important because it could improve the management of the 2020 Census and help to mitigate continued risks. In conclusion, while the Bureau has made progress in revamping its approach to the census, it faces considerable challenges and uncertainties in implementing key cost-saving innovations and ensuring they function under operational conditions; managing the development and testing of its IT systems; ensuring the cybersecurity of its systems and data; and developing a quality cost estimate for the 2020 Census and preventing further cost increases. For these reasons, the 2020 Census is a GAO high-risk area. Going forward, continued management attention and oversight will be vital for ensuring that risks are managed, preparations stay on track, and the Bureau is held accountable for implementing the enumeration, as planned. Without timely and appropriate actions, the challenges previously discussed could adversely affect the cost, accuracy, schedule, and security of the enumeration. We will continue to assess the Bureau’s efforts and look forward to keeping Congress informed of the Bureau’s progress. Chairman Johnson, Ranking Member Peters, and Members of the Committee, this completes our prepared statement. We would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you have any questions about this statement, please contact Robert Goldenkoff at (202) 512-2757 or by email at goldenkoffr@gao.gov or Nick Marinos at (202) 512-9342 or by email at marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Other key contributors to this testimony include Ty Mitchell (Assistant Director); Lisa Pearson (Assistant Director); Jon Ticehurst (Assistant Director); Emmy Rhine Paule (Analyst in Charge); Christopher Businsky; Jackie Chapin; Jeff DeMarco; Rebecca Eyler; Adella Francis; Scott Pettis; Lindsey Pilver; Kayla Robinson; Robert Robinson; Cindy Saunders; Sejal Sheth; Kevin R. Smith; Andrea Starosciak; and Umesh Thakkar. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The Bureau is responsible for conducting a complete and accurate decennial census of the U.S. population. The decennial census is mandated by the Constitution and provides vital data for the nation. A complete count of the nation's population is an enormous undertaking as the Bureau seeks to control the cost of the census, implement operational innovations, and use new and modified IT systems. In recent years, GAO has identified challenges that raise serious concerns about the Bureau's ability to conduct a cost-effective count. For these reasons, GAO added the 2020 Census to its High-Risk list in February 2017. GAO was asked to testify about the reasons the 2020 Census remains on the High-Risk List and the steps the Bureau needs to take to mitigate risks to a successful census. To do so, GAO summarized its prior work regarding the Bureau's planning efforts for the 2020 Census. GAO also included preliminary observations from its ongoing work examining the IT systems readiness and cybersecurity for the 2020 Census. This information is related to, among other things, the Bureau's progress in developing and testing key systems and the status of cybersecurity risks.

What GAO Found

The 2020 Decennial Census is on GAO's list of high-risk programs primarily because the Department of Commerce's Census Bureau (Bureau) (1) is using innovations that are not expected to be fully tested, (2) continues to face challenges in implementing information technology (IT) systems, and (3) faces significant cybersecurity risks to its systems and data. Although the Bureau has taken initial steps to address risk, additional actions are needed as these risks could adversely impact the cost, quality, schedule, and security of the enumeration. Innovations. The Bureau is planning several innovations for the 2020 Census, including allowing the public to respond using the internet.
These innovations show promise for controlling costs, but they also introduce new risks, in part, because they have not been used extensively, if at all, in earlier enumerations. As a result, testing is essential to ensure that key IT systems and operations will function as planned. However, citing budgetary uncertainties, the Bureau scaled back operational tests in 2017 and 2018, missing an opportunity to fully demonstrate that the innovations and IT systems will function as intended during the 2020 Census. To manage risk to the census, the Bureau has developed hundreds of mitigation and contingency plans. To maximize readiness for the 2020 Census, it will also be important for the Bureau to prioritize among its mitigation and contingency strategies those that will deliver the most cost-effective outcomes for the census. Implementing IT systems. The Bureau plans to rely heavily on IT for the 2020 Census, including a total of 52 new and legacy IT systems and the infrastructure supporting them. To help improve its implementation of IT, in October 2018, the Bureau revised its systems development and testing schedule to reflect, among other things, lessons learned during its 2018 operational test. However, GAO's ongoing work has determined that the Bureau is at risk of not meeting near-term IT system development and testing schedule milestones for five upcoming 2020 Census operational deliveries, including self-response (e.g., the ability to respond to the 2020 Census through the internet). These schedule management challenges may compress the time available for the remaining system development and testing, and increase the risk that systems will not function as intended. It will be important that the Bureau effectively manages IT implementation risk to ensure that it meets near-term milestones for system development and testing, and that it is ready for the major operations of the 2020 Census. 
To its credit, the Bureau is also working with the Department of Homeland Security (DHS) to support its 2020 Census cybersecurity efforts. For example, DHS is helping the Bureau ensure a scalable and secure network connection for the 2020 Census respondents and to strengthen its response to potential cyber threats. During the last 2 years, as a result of these activities, the Bureau has received 42 recommendations from DHS to improve its cybersecurity posture. GAO recently recommended that the Bureau implement a formal process for tracking and executing appropriate corrective actions to remediate cybersecurity findings identified by DHS. Implementing the recommendation would help better ensure that DHS's efforts result in improvements to the Bureau's cybersecurity posture. In addition to addressing risks which could affect innovations and the security of the enumeration, the Bureau has the opportunity to improve its cost estimating process for the 2020 Census, and ultimately the reliability of the estimate itself, by reflecting best practices. In October 2017, the 2020 Census life-cycle cost estimate was updated and is now projected to be $15.6 billion, a more than $3 billion (27 percent) increase over its earlier estimate. GAO reported in August 2018 that although the Bureau had taken steps to improve its cost estimation process for 2020, it needed to implement a system to track and report variances between actual and estimated cost elements. According to Bureau officials, they planned to release an updated version of the 2020 Census life-cycle estimate in the spring of 2019; however, they had not done so as of June 28, 2019. To ensure that future updates to the life-cycle cost estimate reflect best practices, it will be important for the Bureau to implement GAO's recommendation related to the cost estimate. Over the past decade, GAO has made 106 recommendations specific to the 2020 Census to help address these risks and other concerns. 
The Department of Commerce has generally agreed with these recommendations and has taken action to address many of them. However, as of June 2019, 31 of the recommendations had not been fully implemented. While all 31 open recommendations are important for a high-quality and cost-effective enumeration, 9 are directed at managing the risks introduced by the Bureau's planned innovations for the 2020 Census. To ensure a high-quality and cost-effective enumeration, it will be important for the Bureau to address these recommendations.

What GAO Recommends

Over the past decade, GAO has made 106 recommendations specific to the 2020 Census to help address issues raised in this and other products. The Department of Commerce has generally agreed with the recommendations. As of June 2019, 31 of the recommendations had not been fully implemented.
Background

BSA/AML Requirements and Key Agencies Involved in Their Enforcement

The BSA established reporting, recordkeeping, and other AML requirements for financial institutions. As the delegated administrator of the BSA, FinCEN has issued implementing regulations. In complying with BSA/AML requirements, U.S. financial institutions assist government agencies in detecting and preventing money laundering and terrorist financing by, among other things, establishing and maintaining compliance programs, conducting ongoing monitoring of customers and transactions, and reporting suspicious activity. Oversight and enforcement of compliance with the BSA involve several federal agencies, including FinCEN and the Internal Revenue Service (IRS). FinCEN has overall authority for administering and enforcing compliance under the BSA and may seek civil penalties and injunctions to compel compliance. In addition, each of the federal banking regulators has independent authority to initiate enforcement actions against supervised institutions for violations of law and to seek civil money penalties for BSA violations, among other things. FinCEN has delegated authority to IRS to investigate most criminal violations of the BSA. The Department of Justice prosecutes violations of federal criminal money-laundering statutes, including violations of the BSA, and several law enforcement agencies conduct BSA-related criminal investigations. The federal banking regulators have also issued BSA/AML regulations that require banks to establish and maintain a BSA/AML compliance program that includes, among other things, policies, procedures, and processes to identify and report suspicious activity. The banking regulators are required to review banks’ compliance with BSA/AML requirements and regulations, which they generally do every 1 to 2 years as a part of their routine safety and soundness examinations.
FinCEN has also delegated examination authority for BSA/AML compliance for certain entities, including money transmitters, to IRS. In general, money transmitters must register with FinCEN and provide certain information on their structure and ownership. According to Treasury, in all but one state, money transmitters are required to obtain licenses from states in which they are incorporated or conduct business. State supervisory agencies also may conduct BSA/AML examinations of licensed money transmitters. To ensure consistency in the application of BSA/AML requirements, in 2005 the federal banking regulators collaborated with FinCEN on developing an examination manual that was issued by FFIEC for federal bank examiners conducting BSA/AML examinations of banks. The examination manual has been revised several times since its release, and the most recent comprehensive revision was released in 2014. According to the examination manual, a key function of the federal banking regulators’ BSA/AML examinations is to assess whether banks have established the appropriate policies, procedures, and processes based on their BSA/AML risk to identify and report suspicious activity. The supervisory process also assesses whether banks provide sufficient detail in reports to law enforcement agencies to make the reports useful for investigating suspicious transactions that are reported. Moreover, federal banking regulators conduct risk-focused BSA/AML examinations of banks—that is, they review key BSA/AML risks or specific risk areas identified by the bank and tailor examination procedures based on each bank’s risk profile. Among other things, examiners review whether banks have an adequate system of internal controls to ensure ongoing compliance with BSA/AML regulations. Similarly, in 2008 FinCEN issued a BSA examination manual to guide reviews of money transmitters and other types of MSBs, including reviews by IRS and state regulators. 
Both the FFIEC and FinCEN examination manuals are publicly available.

Components of BSA/AML Compliance Programs for Money Transmitters and Banks under the BSA

Money transmitters and banks are subject to requirements under the BSA. They are generally required to design and implement a written AML compliance program, report certain transactions to Treasury, and meet recordkeeping (including identity documentation) requirements for transfers of $3,000 or more. At a minimum, each AML compliance program must establish a system of AML compliance policies, procedures, and internal controls to ensure ongoing compliance; designate an individual to coordinate and monitor day-to-day compliance; provide training for appropriate personnel; and provide for an independent audit function to test for compliance. Additionally, banks must include appropriate risk-based procedures for conducting ongoing customer due diligence as part of their AML compliance program. BSA/AML regulations require that each bank or money transmitter tailor a compliance program that is specific to its own risks based on factors such as the products and services offered and the customers and locations served. BSA/AML compliance programs for banks—including those that service money transmitters—are expected to include the following: Customer identification program. Banks must have written procedures for opening accounts and must specify what identifying information they will obtain from each customer. At a minimum, the bank must obtain the following identifying information from each customer before opening the account: name, date of birth, address, and identification number, such as a Social Security number or a passport number. Banks’ customer identification programs must also include risk-based procedures for verifying the identity of each customer to the extent reasonable and practicable.
Additionally, a bank’s customer identification program should contain procedures for circumstances when a bank cannot verify the customer’s identity, including procedures for when the bank should not open an account and when the bank should close an account. Customer due diligence procedures. These procedures assist banks in determining when transactions are potentially suspicious. Procedures must be designed to achieve two minimum regulatory requirements: (1) understanding the nature and purpose of customer relationships so customer risk profiles can be developed and (2) conducting ongoing monitoring, based on the level of risk associated with the customer, to identify and report suspicious activity and to maintain and update customer information on a risk basis. Additional due diligence procedures. Due diligence procedures also should define when and what additional customer information will be collected for customers whom banks determine may pose a higher risk for money laundering or terrorist financing. Procedures should be based on each customer’s risk profile and specific risks posed. Banks review higher-risk customers and their transactions more closely at account opening and more frequently throughout the term of their relationship with the bank. In addition, banks and money transmitters must also have policies and procedures to monitor transactions and identify suspicious activity. Monitoring generally includes (1) manual review of transaction summary reports to identify suspicious transactions or (2) automated monitoring systems that use computer algorithms to identify patterns of unusual activity. As we previously reported, banks with large transaction volumes typically use automated monitoring systems. Banks and money transmitters also must comply with certain reporting requirements: Currency Transaction Reports.
Banks and money transmitters must electronically file this type of report for each transaction or a combination of transactions in a single day—such as a deposit, withdrawal, exchange, or other payment or transfer—in currency of more than $10,000.

Suspicious Activity Reports (SAR). Under FinCEN regulation, banks and money transmitters are required to file this type of report when (1) a transaction involves or aggregates at least $5,000 in funds or other assets for banks or at least $2,000 in funds or other assets for money transmitters and (2) the institution knows, suspects, or has reason to suspect that the transaction is suspicious. In addition, banks’ compliance programs generally include policies and procedures that describe criteria for deciding to close or not to open an account. For example, although there is no requirement for a bank to close an account that is the subject of a SAR filing, a bank should develop criteria in policies and procedures that indicate when it will escalate issues identified through repeat SAR filings on accounts, including criteria on when to close an account. The federal banking regulators generally do not direct banks to open, close, or maintain individual accounts.

Transfers through Money Transmitters

The money transfer industry is diverse, ranging from Fortune 500 companies with numerous outlets worldwide to small, independent money transmitters. Some money transmitters are in communities with population concentrations that do not necessarily have access to traditional banking services. Money transmitters may send and receive funds domestically—intrastate or interstate—or internationally. Money transmitters typically work through agents—separate business entities generally authorized to send and receive money transfers. Most money transfers are initiated in person at retail outlets.
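Because the currency transaction reporting requirement described earlier applies to combinations of transactions in a single day, monitoring systems must aggregate same-day currency activity before comparing it to the $10,000 threshold. The following is a minimal, illustrative sketch of that aggregation logic; the data layout and function name are our assumptions and do not reflect any particular institution's system.

```python
from collections import defaultdict

CTR_THRESHOLD = 10_000  # CTR required for same-day currency totals above this

def ctr_candidates(transactions):
    """Return (customer, date) pairs whose same-day currency transactions
    aggregate to more than $10,000. `transactions` is an iterable of
    (customer_id, date, amount) tuples (an illustrative layout)."""
    totals = defaultdict(float)
    for customer, date, amount in transactions:
        totals[(customer, date)] += amount
    # "more than $10,000" -- a total of exactly $10,000 does not trigger a CTR
    return {key for key, total in totals.items() if total > CTR_THRESHOLD}

txns = [
    ("A", "2016-03-01", 6_000),
    ("A", "2016-03-01", 5_000),   # same-day total $11,000 -> CTR candidate
    ("B", "2016-03-01", 9_500),   # below threshold, no CTR
]
print(ctr_candidates(txns))  # {('A', '2016-03-01')}
```

Real monitoring systems apply far richer logic (for example, distinguishing cash-in from cash-out and handling multiple branches), but the daily aggregation step above is the core of the requirement.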
Money transmitters generally operate through their own retail storefronts or through grocery stores, financial service outlets, convenience stores, and other retailers that serve as agents. In one common type of money transmitter transaction—known as a cash-to-cash transfer—a sender enters a money transmitter agent location and provides cash to cover the transfer amount and fees (see fig. 1). For transfers at or above $3,000, senders must generally provide basic information about themselves (including name and address) at the time of the transfer request. The agent processes the transaction, and the money transmitter’s headquarters screens it to validate BSA/AML compliance. The money is then transferred to a recipient via a distributing agent or bank. In an international money transfer, the money may be distributed through an agent in the destination country, wired through the money transmitter’s bank to the distributor agent’s bank, or transferred by other means to a specified agent in the recipient’s country. The distributor agent pays out cash to the recipient in either U.S. dollars or local currency.

Money-Laundering and Terrorist-Financing Risks Posed by Money Transmitters

Money transfers can pose money-laundering and terrorist-financing risks, as funds related to illicit activity may go undetected due to the large volume of transactions or to money transmitters’ inadequate oversight of the various entities involved. We and others have identified money-laundering and terrorist-financing risks associated with money transmitters, including risks related to agents, customers, geographic location, and products.

Agents. Money transmitters often work with multiple agents, and maintaining adequate oversight can be challenging, given the decentralized nature of the agent system.
According to data collected by the Conference of State Bank Supervisors, as of December 31, 2018, 204 money transmitters reported that they had more than 440,000 agents—with nine of these money transmitters reporting that they had at least 10,000 agents. These agents present money-laundering risks if they knowingly or unknowingly fail to follow BSA/AML requirements or the policies and programs established by the money transmitter. For example, an agent may not follow the recordkeeping requirements for transfers above the regulatory funds transfer threshold or above lower thresholds that a money transmitter has self-imposed. MSB principals are required to conduct risk-based monitoring of their agents.

Customers. Certain customers may pose heightened risk because of the nature of their business, occupation, or anticipated transaction activity. Additionally, in certain instances, they may be able to launder money while remaining anonymous. For example, customers may use false identities or straw men (individuals hired to conduct transfers on behalf of others) to keep from being identified as the original source of the funds. Examples of suspicious customer activity that may indicate money laundering include identification documents that cannot be easily verified; the use of different taxpayer identification numbers with variations of the same name; frequent or large transactions with no record of past or present employment; and reluctance to provide identification for transactions subject to identification requirements.

Geographic location. Certain geographic locations may be more vulnerable to money laundering or terrorist financing via money transfers. High-risk geographic locations can be either international or domestic. According to FinCEN’s MSB examination manual, examples of international high-risk geographic locations include countries subject to sanctions by the Office of Foreign Assets Control or countries and territories identified as being noncooperative.
Domestic high-risk geographic locations include High Intensity Drug Trafficking Areas (HIDTA) and High Intensity Financial Crime Areas (HIFCA).

Products. According to the FFIEC and FinCEN MSB examination manuals, certain products and services, such as money transfers, may pose a higher risk of money laundering because of the degree of anonymity they can offer. For example, the Financial Action Task Force identified money-laundering and terrorist-financing risks associated with mobile payments because these services can sometimes allow for anonymous transactions, depending on the level of AML measures the mobile payments provider has in place. The task force also reported that virtual currency—digital representations of value such as Bitcoin that are not government-issued legal tender—could facilitate international remittances as virtual-currency-based products and services are developed. Federal agencies and international organizations have identified instances where money transfers have been used to launder proceeds from illicit activities such as human smuggling and trafficking, drug trafficking, and consumer fraud, including the following examples: In 2017, a large money transmitter entered into a $586 million settlement with the Department of Justice, the Federal Trade Commission, and the U.S. Attorney’s offices for several states after it was accused of, among other things, processing money transfers that were suspected of being used to pay human smugglers in China.
In 2012, the Department of Justice found that a large money transmitter’s agents knowingly participated in a scheme in which victims wired funds to the transmitter’s agents and outlets in response to fraudulent claims such as promising victims they would receive large cash prizes or lottery winnings, falsely offering various high-ticket items for deeply discounted prices, falsely promising employment opportunities, or posing as a relative of the victim and claiming to be in trouble and in urgent need of money. In a 2011 case, seven people were sentenced for money laundering and drug trafficking involving the transfer of funds from the U.S. Virgin Islands to Alaska. Hundreds of thousands of dollars in payment for the drugs were sent using a large money transmitter in amounts averaging less than $2,000 per wire transfer, a money-laundering method known as structuring. See figure 2 for an illustrated example of structuring.

Requirements to Assess and Manage Money-Transmitter Risk Present Challenges for Some Banks

Banks Are Required to Assess Money-Transmitter Risks and Manage Risks through Due Diligence and Monitoring

In April 2005, FinCEN and the federal banking regulators issued interpretative guidance to further clarify BSA/AML requirements to banks that provide banking services to MSBs (including money transmitters) operating in the United States. According to the interagency guidance, a bank’s level and extent of due diligence beyond the minimum expectations should be based on an assessment of the individual customer’s BSA/AML risks. If a particular MSB relationship indicates a low risk of money laundering or other illicit activity, the bank may not be routinely expected to perform further due diligence beyond minimum expectations. Minimum expectations include applying the bank’s customer identification program and confirming FinCEN registration (if required), agent status (if applicable), and state and local licensing requirements (if applicable).
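The minimum expectations listed above amount to a short onboarding checklist a bank can apply to a prospective MSB account. The sketch below is illustrative only: the field names and the idea of encoding the checks in code are our assumptions, not something the interagency guidance prescribes.

```python
def meets_minimum_expectations(msb):
    """Apply the minimum due diligence expectations from the 2005
    interagency guidance to a prospective MSB customer, represented
    here as a dict (field names are illustrative)."""
    checks = [
        # bank's customer identification program has been applied
        msb["customer_identification_complete"],
        # FinCEN registration confirmed, if registration is required
        msb["fincen_registered"] or not msb["registration_required"],
        # agent status confirmed, if the MSB operates as an agent
        msb["confirmed_agent_status"] or not msb["is_agent"],
        # state and local licensing confirmed, if applicable
        msb["state_licensed"] or not msb["license_required"],
    ]
    return all(checks)

applicant = {
    "customer_identification_complete": True,
    "registration_required": True, "fincen_registered": True,
    "is_agent": False, "confirmed_agent_status": False,
    "license_required": True, "state_licensed": False,  # unlicensed
}
print(meets_minimum_expectations(applicant))  # False
```

Passing these checks does not end the bank's obligations; as described below, the bank must still assess the customer's risk and decide whether further due diligence is necessary.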
Banks are also to conduct a basic BSA/AML risk assessment to determine the level of risk associated with the account and whether further due diligence is necessary. In order to properly assess risks, the interpretive guidance clarifies that banks should consider the purpose of the account, the types of products and services offered by the MSB, the locations and markets it serves, and the anticipated account activity (see text box).

Examples of Basic Information Banks Should Consider When Assessing a Money Transmitter’s Money-Laundering Risk, According to the Interagency Guidance

Purpose of account: Whether the money transmitter needs the bank account to transfer funds to its principal U.S. account or to foreign-based agents in other countries.

Products and services offered: Whether the money transmitter is a principal with a fleet of agents or is itself an agent, and whether money transmission is the customer’s primary or ancillary business (such as a grocery store that derives a small fraction of its overall revenue from providing money transmission services).

Locations served: Whether the money transmitter’s market is domestic or international and whether it targets local residents or broad markets.

Anticipated account activity: Relevant considerations include the expected transaction amounts and whether the money transmitter is operating out of one location and using one bank branch, or whether it has several agents making deposits at multiple branches throughout the bank’s network.

If a bank concludes from its risk assessment that the MSB customer presents a higher level of money-laundering or terrorist-financing risk, it will be expected to conduct additional due diligence in a manner commensurate with the heightened risk. According to the interagency guidance, the appropriate amount of due diligence depends in part on the level of perceived risk and the size and sophistication of the particular MSB.
Appropriate due diligence can include reviewing the MSB’s BSA/AML compliance program, the results of the MSB’s independent testing of its program, and written agent management and termination practices for the MSB, as well as conducting on-site visits to the MSB. The interagency guidance also provides examples of “risk indicators” to assist banks with their risk assessments. Examples of potentially lower-risk indicators include a money transmitter that primarily markets to customers that conduct routine transactions with moderate frequency in low dollar amounts; is an established business with an operating history; or only remits funds to domestic entities. Examples of potentially higher-risk indicators include a money transmitter that allows customers to conduct transactions in higher dollar amounts with moderate to high frequency; is a new business without an established operating history; offers only, or specializes in, cross-border transactions, particularly to countries posing heightened risk for money laundering or terrorism financing; or is located in an area designated as a HIFCA or HIDTA. The guidance notes that in determining the level of risk, a bank should not focus on any single indicator. Rather, an effective risk assessment should be a composite of multiple factors, and depending on the circumstances, certain factors may be weighed more heavily than others. Banks’ customer risk assessments also determine the level of ongoing monitoring for suspicious activity they must perform on each customer. The interagency guidance states that, based on the bank’s assessment of the risks of its MSB customers (including money transmitters), monitoring should include periodic confirmation that initial projections of account activity have remained reasonably consistent over time.
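The periodic confirmation described above can be sketched as a comparison of observed account activity against the projections collected at account opening. In the sketch below, the metric names and the 50 percent tolerance are assumptions chosen for illustration, not supervisory standards; actual monitoring parameters are set by each bank based on risk.

```python
def activity_deviations(projected, observed, tolerance=0.5):
    """Flag metrics whose observed values deviate from initial
    projections by more than `tolerance` (a fraction of the projection).
    Both arguments map metric names to values."""
    flags = {}
    for metric, expected in projected.items():
        actual = observed.get(metric, 0)
        if expected and abs(actual - expected) / expected > tolerance:
            flags[metric] = (expected, actual)  # (projected, observed)
    return flags

# Hypothetical money transmitter: wire counts track projections, but
# cash deposits far exceed them and would warrant review.
projected = {"monthly_wires": 40, "monthly_cash_deposits": 100_000}
observed = {"monthly_wires": 45, "monthly_cash_deposits": 400_000}
print(activity_deviations(projected, observed))
```

A flag from a check like this is only a starting point; as the report notes, banks then look for a justifiable explanation, such as an expansion of business activity or new locations, before treating the deviation as suspicious.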
Examples of potentially suspicious activity include a money transmitter transferring funds to a different jurisdiction than expected or depositing currency significantly in excess of expected amounts without any justifiable explanation, such as an expansion of business activity or new locations. Officials from several banks we spoke with described their additional due diligence procedures for implementing BSA/AML requirements when accepting new money transmitter customers or monitoring existing ones. These include obtaining and reviewing the money transmitter’s BSA/AML policies, using questionnaires and interviews to collect detailed information from the money transmitter on its business operations—such as services offered, transaction volume, and cash activity—and site visits to verify the information collected. Officials from one bank told us that additional due diligence includes a review of the money transmitter’s business location, longevity, principal owners, transaction volume, and cash activity. Bank staff collect this information via a questionnaire administered through an in-person interview at a branch. After reviewing the information, the bank’s BSA/AML compliance department may choose to speak one-on-one with the potential money transmitter customer or conduct a site visit. When monitoring a new money transmitter customer for suspicious activity, compliance staff compare answers from the due diligence questionnaire against the customer’s cash log and wire activity to determine if the activity is outside normal parameters. The compliance department investigates any suspicious leads and reports them to the bank’s SAR committee to decide whether to file a SAR.

Federal Banking Examiners Determine Whether Banks Adequately Incorporate BSA/AML Risk into Their Compliance Programs

Federal banking examiners determine whether a BSA/AML examination should include a review of a bank’s money transmitter accounts based on the overall risk profile of the bank.
The FFIEC examination manual directs examiners to tailor the BSA/AML examination scope and procedures to the specific risk profile of the bank. Examiners begin a BSA/AML examination by reviewing and assessing the adequacy of the bank’s BSA/AML risk assessment. This review includes determining whether bank management has developed an accurate risk assessment that identifies significant risks to the bank (see text box). This determination is based on factors such as whether management has adequately considered all products, services, customers, transaction number and volume, and geographic locations, and whether management’s assessment methodology within these specific risk categories was adequate.

Bank Secrecy Act/Anti-Money Laundering (BSA/AML) Examination Procedures for Banks

In order to effectively apply resources and ensure compliance with BSA requirements, the Federal Financial Institutions Examination Council (FFIEC) examination manual is structured to allow examiners to tailor the BSA/AML examination scope and procedures to the specific risk profile of the bank. At a minimum, examiners are expected to follow core examination procedures to ensure that the bank has an adequate BSA/AML compliance program commensurate with its risk profile. The core procedures encompass four areas:

Scoping and planning: Identifying the bank’s BSA/AML risks, developing the examination scope, and documenting the plan.

BSA/AML risk assessment: Assessing the BSA/AML risk profile of the bank and evaluating the adequacy of the bank’s BSA/AML risk assessment process.

BSA/AML compliance program: Determining whether the bank has developed, administered, and maintained an effective program for compliance with the BSA and all of its implementing regulations.

Developing conclusions and finalizing the examination: Formulating conclusions, communicating findings to management, preparing report comments, developing an appropriate supervisory response, and closing the examination.
In addition to the core examination procedures, the examination manual also contains sections of expanded examination procedures that address specific lines of business, products, customers, or entities that may present unique BSA/AML compliance challenges and exposures for which banks should institute appropriate policies, procedures, and processes. As examples, the examination manual contains expanded examination procedures with respect to nonbank financial institutions, electronic banking, and funds transfers. The examination manual indicates that not all of the core and expanded examination procedures are likely to be applicable to every bank. The specific examination procedures that need to be performed depend on the BSA/AML risk profile of the bank, the bank’s history of BSA/AML compliance, and other relevant factors. Examiners also review the bank’s written BSA/AML compliance program and determine whether the bank has adequately incorporated the risk it identified through its risk assessment into its BSA/AML compliance program. This review and determination include completing relevant core examination procedures for assessing key elements of the bank’s compliance program, such as the customer identification program and policies, procedures, and processes related to customer due diligence, suspicious activity reporting, and currency transaction reporting. As part of these core examination procedures, examiners conduct risk-based transaction testing, which OCC staff noted allows examiners to evaluate the adequacy of the bank’s compliance with regulatory requirements; determine the effectiveness of its policies, procedures, and processes; and evaluate suspicious activity monitoring systems. For example, examiners might determine to select and review a sample of customer accounts in testing the bank’s compliance with its policies, procedures, and processes or for possible suspicious activity. 
The FFIEC examination manual contains an expanded examination section for banks with significant relationships with nonbank financial institutions, which include MSBs. This expanded section references and incorporates the April 2005 interagency guidance for providing banking services to MSBs and includes related examination procedures. Consistent with this guidance, these procedures direct examiners to assess whether the bank has minimum due diligence policies, procedures, and processes in place for new or existing MSB accounts. Examiners are then to determine whether the bank’s policies, procedures, and processes to assess MSB risks effectively identify higher-risk accounts and the amount of further due diligence necessary. To assist in this effort, the manual directs examiners to perform risk-focused transaction testing on a sample of higher-risk MSB accounts. In discussion groups held with federal bank examiners, examiners from all discussion groups noted that their review of the transaction activity of money transmitter accounts is essential to determining whether the bank understands the money transmitter’s business and has appropriately assessed the risk. For example, one examiner said that customer due diligence procedures at account opening should include the appropriate qualitative and quantitative questions so that the bank can make a reasonable determination of the types and volumes of transactions that will be flowing in and out of the account. Examiners from all discussion groups said that when assessing the bank’s risk assessment of a money transmitter, they focus on whether the bank has considered the risk factors discussed in the examination manual, including geography, customer type, products, services, and transactional volume. In some discussion groups, examiners noted that they may review money transmitter accounts if these accounts are included in the sampling of bank customer accounts as part of the core examination procedures.
One examiner said that because banks in her region do not tend to specialize in money transmitters or have a significant degree of risk from them, the only time she reviews money transmitter accounts is if they are included in her sample for transaction testing. Examiners from one discussion group said that they may review money transmitters as part of expanded examination review procedures for nonbank financial institutions if the bank has a large portfolio of money transmitter accounts. For example, one examiner said he generally does not set out to look for and review money transmitter accounts when conducting a BSA/AML examination, but in one case his examination team learned that during the course of a merger, a bank acquired a number of nonbank financial institutions, including MSBs. As this bank did not have prior experience with these kinds of customers, the examination team decided to include them in the scope of their review. Examiners in all discussion groups said that they neither instruct nor recommend that banks close accounts with money transmitters or other types of MSBs. Although IRS and state agencies also examine money transmitters and other MSBs, examiners from all discussion groups said that BSA/AML requirements and guidance do not allow banks to rely on IRS or state oversight. These examiners said that reports from these IRS and state examinations could provide banks with a useful additional source of information when conducting their due diligence on MSB customers. However, these examiners added that the reports would not substitute for or reduce the due diligence expected of banks in complying with BSA/AML compliance program requirements. Examiners from most discussion groups observed that they know very little about the quality or frequency of state or IRS examinations of MSBs.
Examiners Identified BSA/AML Compliance Challenges for Some Banks with MSB Customers, Including Money Transmitters

Examiners in our discussion groups said the challenges that some banks face in ensuring BSA/AML compliance for their MSB customers include those related to customer due diligence, risk assessments, customer identification, and BSA/AML compliance staff and resources.

Customer due diligence. Examiners from most discussion groups said that some banks do not fully understand the customer due diligence requirements for banking MSBs. Examiners in some discussion groups said that banks do not always fully review or understand the documents and information obtained from their MSB customers in conducting due diligence. One examiner described an instance where bank staff could not understand documentation collected from MSB customers in a foreign language. Examiners in some discussion groups said banks do not understand the need to conduct ongoing monitoring of MSB accounts, including of the flow and volume of customers’ transactions. For example, one examiner in a different discussion group described an instance of a community bank that was unaware that an MSB account had $2 billion flowing through annually even though the bank had only $1 billion in assets. Examiners in some discussion groups said that banks also may not fully understand their automated software for monitoring suspicious activity or how to set the proper software parameters for capturing potentially suspicious transactions. One examiner in a different discussion group said that without proper monitoring, a bank would not know when sudden changes in MSB customers’ transaction types or volumes would be considered suspicious and should be reported.

Risk assessment. Examiners in many discussion groups said some banks do not appropriately assess their MSB customers’ risk, either because they do not consider relevant risk factors or they rate all MSB customers at the same risk level.
One examiner in a discussion group said he examined a bank with many money transmitter customers that transmitted funds to several countries and found that the bank did not assess the risk levels of the countries to which the money transmitters sent funds. An examiner in a different discussion group said that banks often assess all MSBs at the same level of risk because they do not understand the difference between the various risk levels. Another examiner in the same discussion group added that banks often do not understand the guidance clarifying that banks should assess each customer’s risk individually. This statement was corroborated by our review of several banks’ BSA policies that stipulated that all money transmitters and other MSBs should be considered high risk, contrary to the 2005 guidance.

Customer identification. Examiners from many discussion groups said banks do not always identify their MSB customers—for example, when a bank acquires another bank without being aware that the acquired bank has MSB customers. Examiners in some discussion groups said that failure to properly identify MSB customers stems partly from inadequate due diligence or risk assessment.

BSA/AML compliance staff and resources. Examiners in many discussion groups said that some banks do not have sufficient BSA/AML compliance staff or resources to manage their BSA/AML compliance programs. For example, an examiner in one discussion group described a bank with nearly 70 money transmitters and more than 200 check cashers but only four staff in its BSA/AML compliance department, which the examiner considered inadequate. Examiners in many discussion groups said that BSA/AML deficiencies generally stem from overall weakness in a bank’s BSA/AML compliance program or internal controls, and not from providing services to money transmitters or any particular customer type.
An examiner from one discussion group noted that a bank with weak internal controls around money transmitters likely has weak internal controls across its BSA/AML compliance program. Examples of deficiencies provided by examiners across discussion groups include banks failing to follow written policies and procedures, rating entire categories of customers as high-risk rather than assessing individual customer risk, not conducting on-site customer reviews, failing to conduct other due diligence, and not properly monitoring and reporting suspicious activities. Moreover, our review of bank examination documents found that BSA/AML-related deficiencies mostly stemmed from weakness in banks’ BSA/AML compliance programs and internal controls overall—for example, in customer identification programs, customer due diligence procedures and practices, and risk assessments—and not from a bank providing services to MSBs or any other customer type. According to examiner discussion groups and examination documents we reviewed, not all banks with MSB customers experience BSA/AML compliance challenges. Examiners in some discussion groups noted that banks that successfully provide accounts to MSBs, including money transmitters, tend to have a strong BSA/AML compliance program. For example, examiners in some discussion groups said that such banks have internal controls commensurate with the BSA/AML risks of the MSB customers, including conducting appropriate monitoring and due diligence of customers, and understand the full scope of MSB customers’ activities. The examiners stated that these banks also have sufficient BSA/AML compliance staff who received training. Similarly, our review of bank examination documents included examples of banks with MSB customers that complied with BSA/AML compliance program requirements, such as a community bank with 80 money transmitters. 
In the examination documents we reviewed, examiners noted that although the bank engaged in higher-risk business, it was managing the risk appropriately.

Some Examiners Identified Challenges in Assessing Banks’ Due Diligence for Money Transmitters

While views among examiners in our discussion groups varied, examiners in some discussion groups identified challenges in assessing banks’ customer due diligence for money transmitters and other MSB customers. As discussed earlier, the FFIEC examination manual includes an expanded examination section for nonbank financial institutions that provides procedures and guidance for examiners when assessing banks’ compliance controls for MSB customers, including money transmitters. The procedures direct examiners to determine whether the banks’ policies, procedures, and processes to assess risks posed by MSB customers allow the banks to effectively identify higher-risk accounts and the amount of further due diligence that is necessary. The expanded examination guidance provides examples of actions banks can take to meet the additional due diligence requirement for customers they deem to be higher risk. Examiners from many discussion groups said they believe these procedures and guidance are sufficient. One examiner noted that assessing controls is the same for a bank’s MSB customers as for any other type of customer. However, examiners from some discussion groups said it was unclear how much due diligence is reasonable to expect banks to conduct for their money transmitters and other MSB customers. An examiner in one discussion group said it was not clear from the examination procedures and guidance how much banks were expected to question and request information from their MSB customers or monitor their MSB customers’ due diligence efforts without expecting banks to act as the de facto regulator for MSBs.
Other examiners noted that although banks are responsible for understanding the kinds of transactions that flow through an MSB, to some extent banks do not have visibility into these individual transactions, as they are aggregated before flowing into the account at the bank. Similarly, another examiner said there was uncertainty about how critical an examiner should be of a bank’s due diligence efforts in cases where a bank’s documentation on an MSB customer’s BSA/AML compliance program is lacking. One examiner noted that while the examination guidance provides examples of due diligence actions banks can consider performing, those actions are not requirements. The examiner said it was therefore not clear to what extent examiners should apply these examples as criteria and expect banks to have implemented them. Further, examiners in some discussion groups said that it can be difficult to evaluate banks’ risk assessments, including processes for identifying higher-risk customers that require additional due diligence. One examiner said that it is unclear from the examination procedures how to determine whether banks’ risk assessment processes for identifying higher-risk customers are adequate. An examiner in a different discussion group said that in evaluating banks’ risk assessment of new money transmitter customers, he looks for whether banks ask why new customers switched banks. However, other examiners in the same discussion group noted that this is not a standard question. Our review of the expanded examination section found a lack of examples of specific steps or processes that examiners can take in assessing banks’ compliance for additional due diligence. 
For example, this section’s procedures contain only a general reference that examiners should determine whether the banks’ policies, procedures, and processes effectively allow the banks to identify and conduct risk-based due diligence for higher-risk customers and lack specific examples to assist examiners in evaluating additional due diligence activities. The section’s guidance states that examiners could take actions, including reviewing an MSB’s BSA/AML compliance program or conducting on-site visits to help evaluate a bank’s compliance. But neither the guidance nor the procedures clarify what these reviews or visits might entail. In comparison, the expanded section’s guidance and procedures include examples of specific steps that examiners can take when assessing banks’ compliance with minimum due diligence requirements for MSB accounts, such as applying the bank’s customer identification program and confirming FinCEN registration status and state licensing, if applicable. Officials from the Federal Reserve and OCC said that the examination manual is not intended to provide explicit criteria for examiners when they are assessing the adequacy of a bank’s program. They said that establishing explicit criteria would result in a “check the box” approach to BSA/AML compliance, such that banks are given a uniform set of requirements to follow, irrespective of the money-laundering or terrorism-financing risks associated with their banking activities. They said that if banks only needed to meet specific requirements, such an approach would encourage banks to do the minimum to establish a BSA/AML compliance program and would not effectively detect and deter money laundering and terrorism financing. As discussed earlier, the examination manual is instead structured to allow examiners to tailor the BSA/AML examination scope and procedures to the specific risk profile of the bank.
Staff from the federal banking regulators said that as a result, examiners are expected to apply their judgment in evaluating banks’ BSA/AML compliance programs. However, while regulators want compliance programs to be tailored to the unique risks a bank’s operations present, examiners need sufficient guidance to determine whether a given bank’s BSA/AML-related policies, processes, and procedures are adequate. Regulators and FinCEN issued the 2005 interagency guidance to clarify BSA/AML requirements and supervisory expectations for banks when providing banking services to money transmitters and other MSBs. Since then, examiners have relied on this guidance when reviewing banks’ MSB customer accounts. However, the examination procedures and related guidance may not provide all of the information examiners need to conduct their assessments, as indicated by the examiners in some of our discussion groups who reported that it is not clear to them how to determine whether banks’ due diligence efforts are adequate. Providing clarifying information would not compromise examiners’ ability to exercise judgment during an examination. Rather, it would provide them with greater certainty that they are evaluating banks’ compliance with BSA/AML requirements appropriately. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving the defined objectives. Unless federal banking regulators take steps to improve examiners’ ability to evaluate banks’ compliance controls with respect to money transmitter accounts, examiners may not be fully achieving the examination objectives set out in the examination manual, namely identifying and assessing risks and banks’ ability to manage those risks, when assessing banks’ compliance with BSA/AML requirements. Internal control standards also state that agencies should internally communicate the necessary quality information to achieve their objectives.
With respect to examiners, such communication could include providing updates to examination procedures, examiner training, or a combination of methods.

Terminating or Limiting Bank Accounts with Money Transmitters May Raise Derisking Concerns and Can Affect Their Operations

Survey Results Suggest That a Number of Banks Terminated or Limited Money Transmitters’ Accounts in 2014–2016

We estimate that 32 percent of banks nationwide provided accounts to money transmitters from 2014 through 2016, based on the results of a survey we conducted as part of related GAO work on derisking. For calendar year 2016, of the 91 banks that reported having money transmitters as customers, 71 banks of varying asset sizes reported having 41,089 money transmitter accounts (see table 1). Overall, of the 91 banks that reported having money transmitters as customers, close to half (40 banks) terminated at least one of their money transmitter accounts and almost one-third (29 banks) limited the number of accounts with money transmitters, both for reasons related to BSA/AML risk, from 2014 through 2016 (see table 2). Because extra-large banks reported having a much greater number of accounts with money transmitters, these banks also reported a greater proportion of account terminations, compared with small and medium banks. Specifically, 18 banks of all sizes that responded to the survey reported that they terminated 1,098 accounts in 2016—with 89 percent of these account closures (976 out of 1,098) reported by six extra-large banks. In particular, one extra-large bank accounted for more than half (601 out of 1,098) of the account terminations in that year. See table 3 for more information on account terminations in 2016. See appendix II for more information on account terminations and limitations.
Although Some Account Terminations and Limitations Are Associated with Managing BSA/AML Risk, Some Raise Derisking Concerns

Some terminations and limitations of money transmitters’ bank accounts appear to be associated with managing BSA/AML risk. However, some terminations and limitations raise derisking concerns.

Some Reasons for Terminating or Limiting Accounts Are Associated with Managing BSA/AML Risk

Some reasons that banks reported for terminating accounts were associated with managing BSA/AML-related risk, including the filing of SARs associated with the account and customers failing to provide information necessary for the bank to conduct adequate BSA/AML due diligence. Some banks also reported terminating accounts to reduce the risk that a customer’s activity could harm a bank’s reputation, known as reputational risk (see table 4). These survey results are consistent with the results of our prior work on banks in the Southwest border region. The most commonly cited reason in our survey for terminating accounts was the filing of SARs. Officials we interviewed from one bank told us that they investigate customers that have triggered multiple SAR filings and considered setting up controls to limit account activities. Officials of another bank told us that a federal bank examiner suggested that the bank consider closing an account with a money transmitter customer because of SAR filings associated with it. The second most commonly cited reason for terminating accounts was that a customer failed to provide information requested by a bank for conducting BSA/AML due diligence. Officials we interviewed from two banks told us that customers may not be able to provide information and documentation or may not disclose that they are an MSB when opening new accounts. Officials of a bank that maintained accounts with money transmitters told us they terminated accounts in instances where a money transmitter did not submit required documentation.
Another commonly cited reason for terminating accounts was reputational risk—the potential that negative publicity regarding an institution’s business practices, whether true or not, will cause a decline in the customer base, costly litigation, or revenue reductions. One bank’s officials said in an interview that when examiners inquired as to whether bank officials factor reputational risk into their decision-making about money transmitters, they viewed such inquiries as implicit suggestions that the bank had an issue with reputational risk that needed to be addressed. Examiners in our discussion groups also shared similar comments on suspicious activity monitoring and banks’ requests for information. Specific to suspicious activities, one examiner noted that banks generally have an internal policy stating that if a specific number of SARs are filed on the customer, the bank will automatically terminate the account. Regarding banks’ information requests, examiners in some discussion groups said they observed that banks may terminate an MSB’s account if the MSB does not comply with the bank’s request for due-diligence- related documentation. Three of the most common reasons banks reported for limiting accounts with money transmitters were that (1) the cost of BSA/AML compliance made the customer type unprofitable, (2) the banks were unable to manage the BSA/AML risk associated with the customer type, and (3) the customer type fell outside of a bank’s risk tolerance (see table 5). One of the most commonly cited reasons for limiting the number of accounts with money transmitters was compliance costs associated with managing BSA/AML risk. Officials of about two-thirds of the banks we interviewed said their BSA/AML compliance costs had increased over time, with eight institutions specifically citing past or planned upgrades to their monitoring software systems as one source of increasing costs. 
Moreover, officials of one bank said their compliance costs had increased in recent years as a result of regulatory scrutiny, which they said had increased as MSBs came to comprise a larger portion of their customer base. In response to this heightened scrutiny, officials said the bank had installed a new transaction-monitoring platform, which incurred a one-time migration cost and would incur higher monthly fees, and was considering expanding its compliance department. Officials of three banks told us in interviews that 50 percent of their compliance costs stem from BSA/AML compliance. As we have reported previously, money transmitters are generally low-profit customers for banks, in that the revenue from their accounts may not be sufficient for some banks to offset the associated costs of BSA/AML compliance. For example, officials of one bank said the bank spent about $250,000 annually to maintain its BSA-related monitoring software and training, which they believed was a significant portion of the bank’s $25 million annual income. These officials told us that unlike the bank’s other customers, which use the bank’s other products and refer business, money transmitters are not the bank’s core customers and do not use other products or services, so the bank would rather focus its time and resources on its core customers. Similarly, officials of another bank said they decided not to bank MSBs because any revenue generated would not cover the additional resource and compliance costs. Banks’ inability to manage BSA/AML risks associated with money transmitter customers was another commonly cited reason for limiting the number of accounts. For example, officials of one bank we interviewed said they did not accept any MSB customers, including money transmitters, because they were not willing or able to take on the required risk and level of BSA/AML monitoring. 
Another commonly cited reason for limiting accounts was that a customer type fell outside of a bank’s risk tolerance. In interviews, banks expressed concerns about their MSB customers’ ability to maintain an adequate BSA/AML compliance program. One bank’s officials told us that owners of gas stations may offer check cashing or money transmission services to generate additional revenue, but they may not be aware that offering such services would subject their business to BSA/AML compliance requirements. Another bank’s officials also said that many business owners do not know that they have to register with FinCEN to operate as an MSB. Officials from a third bank said that some MSBs may not understand the BSA/AML regulations and, at their customers’ request, may inadvertently commit a violation such as structuring that may generate a SAR (for example, by breaking up a money transfer in excess of $10,000 into multiple transfers to avoid generating a Currency Transaction Report). Similarly, examiners in many discussion groups said that the staffing and resource costs required for adequate monitoring and due diligence on MSB customers, including money transmitters, are reasons why some banks may choose not to bank MSBs. Moreover, examiners in many discussion groups also said some banks offer MSBs accounts and then find out that they do not have the necessary BSA/AML expertise or that the business is not profitable for them. For example, one examiner said that when a larger bank in his area terminated all of its money transmitter accounts, a number of smaller banks looking for profit offered accounts to these money transmitters. However, the examiner added that the smaller banks did not understand the level of customer due diligence and monitoring that was required for these accounts and the associated costs, and they terminated the accounts. 
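The structuring pattern described above, splitting a cash movement that would exceed the $10,000 Currency Transaction Report threshold into several smaller transfers, is the kind of pattern banks’ transaction-monitoring systems are designed to surface. The sketch below is a minimal, purely illustrative rule, not any bank’s or vendor’s actual monitoring logic.

```python
# Hypothetical illustration of a structuring rule: flag a customer/date
# pair when no single cash transfer exceeds the CTR threshold but the
# same-day total does. Data shapes and names are assumptions.
from collections import defaultdict

CTR_THRESHOLD = 10_000  # dollar threshold that triggers a Currency Transaction Report

def flag_possible_structuring(transactions):
    """transactions: iterable of (customer_id, date, amount) tuples."""
    totals = defaultdict(float)
    maxima = defaultdict(float)
    for customer, date, amount in transactions:
        key = (customer, date)
        totals[key] += amount
        maxima[key] = max(maxima[key], amount)
    return sorted(
        key for key, total in totals.items()
        if total > CTR_THRESHOLD and maxima[key] <= CTR_THRESHOLD
    )

# Three same-day $4,000 transfers total $12,000, yet each stays under the
# threshold -- the pattern described in the text. A single $12,000 transfer
# simply triggers a CTR and is not flagged by this rule.
flags = flag_possible_structuring([
    ("A", "2016-05-01", 4_000),
    ("A", "2016-05-01", 4_000),
    ("A", "2016-05-01", 4_000),
    ("B", "2016-05-01", 12_000),
])
```

Real monitoring systems weigh many more signals (rolling time windows, counterparties, geography); the point here is only that the same-day aggregate, not any individual transfer amount, is what drives the alert.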
In contrast, examiners in some discussion groups said that some community banks have accepted money transmitter customers as a way to generate potentially substantial fee income.

Some Account Terminations and Limitations Raise Derisking Concerns

According to survey responses from banks, the most commonly cited reason for limiting the number of money transmitter accounts was that the customer type drew heightened BSA/AML regulatory oversight—behavior that would indicate derisking. Banks also commonly cited this reason for terminating money transmitter accounts. For example, officials from one bank told us that the bank no longer offered services to MSBs because it wanted to be viewed favorably by regulators. Officials of another bank said that money transmitter account closures were generally the result of onerous regulatory requirements and increased regulatory scrutiny. Officials from two banks we spoke with said that they received greater regulatory scrutiny after increasing their number of MSB customers, which affected their willingness to open additional accounts with MSBs. According to officials of one of the two banks, when the bank increased its MSB customers from one to two, the institution was assessed as high risk by examiners. Related to heightened regulatory oversight, some banks’ officials we interviewed also expressed concerns that some examiners’ expectations go beyond what is described in the examination manual. For example, they said examiners expected banks to know their customers’ customers—although BSA/AML regulations do not require banks to obtain information on their customers’ customers. Bank officials said ascertaining such information was difficult because money transmitters’ customers are one step removed from the bank. Some banks’ officials also told us that they felt obligated to follow examiners’ verbal suggestions, even when the suggestions did not appear in the final examination report as recommendations.
Other banks’ officials we interviewed stated that although examiners did not explicitly recommend that banks exit certain lines of business, officials felt pressure from the examiners to do so. For example, officials from one bank said examiners suggested that if the bank exited certain lines of business, the bank would not have deficiencies in its BSA/AML compliance program. We reported similar concerns in our March 2018 report. About half of the banks we interviewed for that report said that the fear of regulatory scrutiny served as a disincentive for banks to maintain accounts with money transmitters. Some banks’ officials expressed uncertainty about the amount of due diligence required for regulatory purposes because regulations included ambiguous language or because examiner practices exceeded regulations. These bank officials suggested that regulators could provide more specific guidance for banks on risk management, such as by including example scenarios and answers to frequently asked questions. Conversely, some banks we interviewed had a different experience. For example, officials of one bank told us that examiners’ interpretation of BSA/AML principles did not differ from the bank’s understanding of those principles. Officials added that when they initially began preparing risk assessments, they sought feedback and advice from their examiners and that examiners now use the bank’s risk assessment as an example for other banks. Moreover, these officials said that if they need clarification on BSA/AML compliance requirements, they contact FinCEN, which has been responsive to their questions. Officials of another bank told us they have a good relationship with their federal regulator and said that examiners follow BSA guidance and have been consistent in conducting their examinations. 
Officials of two other banks told us that their BSA/AML examinations have been consistent with guidance and requirements and that examiners have not told officials what types of customers to avoid. We also reported in February 2018 that recent BSA/AML law enforcement and regulatory enforcement actions have caused some banks to become more conservative in the types of businesses to which they offer accounts. In our interviews for the February 2018 report, officials of three banks and an industry group expressed concerns about potential enforcement actions, including civil penalties, if banks’ employees make mistakes in BSA/AML monitoring. In 2012, federal regulators assessed civil money penalties—including a $500 million penalty assessed by OCC and a $165 million penalty by the Federal Reserve—against HSBC Bank for, among other things, failing to maintain an effective BSA/AML compliance program and failing to conduct appropriate due diligence on foreign correspondent bank account holders. As another example, in March 2018, OCC issued consent orders for civil penalties against three senior executives of the Merchants Bank of California for violations of consent orders related to monitoring BSA/AML compliance. In our interviews, officials of an industry association told us that fines associated with BSA violations are especially difficult for community banks to absorb and could result in the bank going out of business. Similarly, examiners from a discussion group said some banks may decide not to offer accounts to MSBs to avoid heightened regulatory scrutiny. For example, examiners said some banks likely want to avoid BSA/AML risk entirely when they decide not to offer MSBs accounts. One examiner thought that some banks lack understanding regarding the business models of MSBs and that it is easier for them not to provide them accounts. In some cases, banks offer MSBs bank accounts but on a limited basis. 
For example, examiners from one discussion group said that in some cases, banks manage their BSA/AML risks by maintaining existing MSB accounts but not offering accounts to new MSB customers. In a 2015 speech, a senior Treasury official noted banks’ concerns about the cost of complying with BSA/AML requirements, uncertainty about supervisors’ expectations regarding appropriate due diligence, and the nature of the enforcement and supervisory response if they make a mistake. Moreover, the official stated that the banks held the perception that supervisory and enforcement expectations lack transparency, predictability, and consistency. The official also said that this perception feeds into higher anticipated compliance costs and may eclipse any potential economic gains of taking on new MSB customers. To address these concerns, the senior official stated that policymakers needed to continue to improve their understanding of the scope, nature, and drivers of the problem through better data collection and continue to explore ways to improve the effectiveness of their communication.

Effects of Account Terminations and Limitations on Money Transmitters Include Ceasing of Operations and Higher Costs for Services

According to money transmitters we spoke with, effects of account terminations due to derisking include ceasing of operations, loss of revenue, higher costs for services provided, and failure of the business. For example, officials from one large money transmitter that operates in the United States and internationally said that in recent years, about 100 of the money transmitter’s agents have lost accounts with their local and regional banks each month. The officials added that when banks terminate accounts with the money transmitter or its agents, the money transmitter cannot conduct the necessary transactions with its agents to facilitate the cash transfer.
As a result, officials told us, account terminations can cause the money transmitter to cease operations in a particular country or cause the agents to go out of business. These officials also told us that some banks have terminated accounts with their institution while maintaining accounts with other money transmitters. These officials said they obtained legal injunctions for unfair competitor treatment in some of these cases. Officials of a smaller, regional money transmitter said that they have experienced 10 account terminations since 2006. Moreover, the officials said that they have to switch banks every 2 to 3 years because of account terminations and that it is getting more difficult to find a bank willing to take on money transmitters as customers. For example, the officials said that they called about 300 banks in a state and only two banks were willing to open accounts with them. The money transmitter’s officials said it has had to cease operations in three states due to account terminations. The officials said that the money transmitter now focuses on opening accounts with community banks and credit unions, but these institutions may be too small to handle the money transmitter’s volume of deposits. Another money transmitter told us that it takes about 3 months to open an account with a bank. Moreover, as a result of account terminations and limitations by banks, the money transmitter has had to reduce its number of employees from 220 to 180 and has not been able to open new locations. Another money transmitter said that account terminations have affected its ability to obtain accounts with other banks. In our March 2018 report, we found that some money transmitters—those that may be considered higher risk based on the 2005 interagency guidance—may utilize nonbank channels for transferring money as a response to account terminations. 
Specifically, we reported that as a result of banks’ account terminations and limitations, some money transmitters serving fragile nations have relied on nonbank channels, such as cash couriers and armored trucks, to transfer money domestically and abroad. We further reported that using cash couriers or armored trucks to move money increases costs and risks of theft and safety. Account terminations and limitations by banks also affect money transmitters that do not serve customers abroad—money transmitters that could be considered lower risk based on the 2005 interagency BSA guidance. For example, a company that acquired another business offering money transmission services to customers within the United States also experienced account terminations. When the company acquired the new business and thus the business’s money transmission license, its bank refused to service the company because of its newly acquired status as a money transmitter. In another example, officials of a money transmitter that serves only U.S. customers told us they have difficulty opening accounts and have experienced account terminations often. Officials said that their business has stopped at times because they did not have any bank accounts to facilitate money transmission. Additionally, account closures also may affect money transmitters’ customers. For example, some money transmitters we interviewed said they passed on increased costs resulting from account closures to their customers. Specifically, officials of one large money transmitter said that because of derisking, banks that still do business with them are charging higher fees. The officials added that they try to absorb the higher fees but have passed on the increased costs to their customers in some markets. In contrast, some money transmitters told us in interviews that although their costs have increased, they have not increased customer fees. 
Several money transmitters told us that banks did not always provide reasons for terminating their accounts. Some said they believe that banks terminate accounts due to regulatory pressure, compliance costs, or changes in a bank’s policy or risk appetite. One money transmitter stated that the problem of account terminations due to derisking stems from banks being too afraid to bank MSBs, including money transmitters. In response to banks’ account terminations and limitations, some money transmitters—including those with characteristics considered to be higher and lower risk according to the 2005 interagency guidance—now maintain accounts with multiple banks to help ensure they can continue operating should a bank close their account. For example, officials of the company that acquired another business offering domestic money transmission services told us they maintain accounts with more than one bank, but they said it is difficult and costly to do so. Officials of another money transmitter said that to help prevent disruptions to their ability to transfer funds when they experience an account closure, they try to have back-up accounts at other banks. Some money transmitters also engage with their banks’ management to better understand what banks expect from them in meeting compliance requirements. For example, an official from one money transmitter said the money transmitter tries to meet with its banks’ financial crimes teams to better understand how it can help minimize the risk of facilitating money transfers for terrorist-financing and money-laundering purposes. Officials of another money transmitter told us that as a result of meeting with bank management, the money transmitter added additional employees to its compliance department and bought new monitoring software to fulfill its bank’s requirement for monthly monitoring of transactions. 
FinCEN and the Federal Regulators Have Taken Some Steps to Address Derisking Concerns but Have Not Fully Addressed Our Prior Recommendation

FinCEN and the Federal Regulators Have Issued Guidance to Banks Related to the Derisking of Money Transmitters

FinCEN and the federal banking regulators have responded to concerns about the derisking of money transmitters and other MSBs on a national level by issuing guidance to banks to clarify expectations for providing banking services to these customer types. In March 2005, the federal banking regulators and FinCEN issued a joint statement noting that MSBs were losing access to banking services as a result of concerns about regulatory scrutiny, the risks presented by MSB accounts, and the costs and burdens associated with maintaining such accounts. According to the joint statement, these concerns might have stemmed, in part, from banks’ misperception of the requirements of the BSA and the erroneous view that MSBs present a uniform and unacceptably high risk of money laundering or other illicit activity. The joint statement recognized that the MSB industry provides valuable financial services, especially to individuals who may not have ready access to the formal banking sector. It further noted that it is important that MSBs comply with the requirements of the BSA and applicable state laws and remain within the formal financial sector and be subject to appropriate AML controls. The joint statement announced the intent of the regulators and FinCEN to issue the interagency guidance for banks on providing services to MSBs, which, as previously discussed, was intended to clarify BSA requirements and supervisory expectations as applied to accounts opened or maintained for MSBs. More recently, in November 2014, FinCEN issued a statement reiterating that banks can serve the MSB industry while meeting their BSA obligations and referring to the interagency guidance to banks on providing services to MSBs.
The statement noted concerns that banks were indiscriminately terminating the accounts of all MSBs, or refusing to open accounts for any MSBs, thereby eliminating them as a category of customers. It noted, similar to the March 2005 joint statement, that regulatory scrutiny, the perceived risks presented by MSB accounts, and the costs and burdens associated with maintaining such accounts appeared to play a part in these decisions. In the 2014 statement, FinCEN cautioned that a wholesale approach to MSB customers runs counter to the expectation that financial institutions can and should assess the risks of customers on a case-by-case basis. Similarly, it noted that a blanket direction by U.S. banks to their foreign correspondents not to process fund transfers of any foreign MSBs, simply because they are MSBs, runs counter to the risk-based approach. FinCEN stated that refusing financial services to an entire segment of the industry can lead to an overall reduction in financial sector transparency, and that such transparency is critical to making the sector resistant to the efforts of illicit actors. Federal banking regulators also issued separate statements addressing BSA/AML risk posed by MSBs and foreign banks. See table 6 for a summary of key statements and guidance related to MSBs issued in recent years by FinCEN and the federal banking regulators.

Regulators Have Taken Some Steps to Address Concerns That May Be Influencing Banks to Derisk but Have Not Reviewed the Full Range of Factors

In 2018, we reported that regulators had taken only limited steps to understand how banks’ regulatory concerns and BSA/AML compliance efforts may be influencing banks to derisk. We reported that regulators had taken some actions in response to derisking, including issuing the guidance previously discussed, and that some agencies took steps aimed at trying to determine why banks may be terminating accounts.
We also reported that regulators had conducted retrospective reviews on some BSA/AML requirements. We noted that actions regulators had taken to address concerns raised in BSA/AML retrospective reviews had focused primarily on the burden resulting from the filing of Currency Transaction Reports and SARs. However, we noted that these actions had not been aimed at addressing—and, if possible, ameliorating—the full range of factors that influence banks to engage in derisking, particularly how banks’ regulatory concerns and BSA/AML compliance efforts may be influencing their willingness to provide services. We concluded that without a broader assessment of the full range of BSA/AML factors that may be influencing banks to derisk, FinCEN, the federal banking regulators, and Congress do not have the information needed to determine if BSA/AML regulations and their implementation are achieving their regulatory objectives in the most effective and least burdensome way. Therefore, we recommended that FinCEN and the federal banking regulators conduct a retrospective review of BSA regulations and their implementation for banks, with a focus on how banks’ regulatory concerns may be influencing their willingness to provide services. According to the federal banking regulators and FinCEN, they and Treasury established an interagency working group in early 2018 that they believe will address our recommendation. The interagency working group is intended to identify ways to improve the efficiency and effectiveness of BSA/AML regulations, supervision, and examinations while continuing to meet the requirements of the BSA and its implementing regulations, supporting law enforcement, and reducing BSA/AML compliance burden. Staff from FinCEN and the federal banking regulators identified several interagency statements that the working group has completed. 
Interagency Statement on Sharing BSA Resources (issued on October 3, 2018): This statement clarified how banks may reduce the costs of meeting BSA requirements effectively by sharing employees or other resources in a collaborative arrangement with one or more banks. The statement highlighted potential benefits to sharing resources and provided examples of resources that may be appropriate to share, such as certain internal controls, independent testing, and BSA/AML training functions. The statement also highlighted potential risks of sharing resources and cautioned that any collaborative arrangements should be designed and implemented according to each bank’s risk profile.

Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing (issued on December 3, 2018): This statement clarified the working group’s position with respect to innovative approaches in BSA/AML compliance and encouraged banks to consider such approaches. For example, some banks are experimenting with artificial intelligence and digital identity technologies applicable to their BSA/AML compliance programs. The statement notes that these innovations and technologies can strengthen BSA/AML compliance approaches and that the regulators welcome these types of innovative approaches to further efforts to protect the financial system against illicit financial activity. According to the statement, pilot programs undertaken by banks to test and validate the effectiveness of innovative approaches should not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful.

Joint Statement on Risk-Focused Bank Secrecy Act/Anti-Money Laundering Supervision (issued on July 22, 2019): This statement was intended to improve the transparency of the risk-focused approach used for planning and performing BSA/AML examinations.
In this statement, FinCEN and the banking regulators emphasized that they scope their examinations in response to the unique risk profile for each bank because banks vary in focus and complexity. The regulators also clarified common practices for assessing a bank’s risk profile, including leveraging available information such as the bank’s own risk assessment, contacting the banks between examinations, and considering the bank’s ability to identify, measure, monitor, and control risks. Federal banking regulators and FinCEN staff said the working group’s focus on regulatory reform and on reducing the burden associated with BSA/AML compliance may indirectly address derisking concerns, including those related to money transmitters. In particular, they said these efforts may help agencies as they clarify their supervisory expectations for banks with respect to managing BSA/AML risk. For example, the staff said that the joint statement on the risk-focused approach to supervision clarifies that the role of the examiner is not to determine what level of risk a bank should assume. Instead, the examiners should review risk management practices to evaluate whether a bank has effective processes to identify, measure, monitor, and control risks and to assess the effectiveness of a bank’s processes. They said that reminding examiners and institutions of the risk-focused approach will help dispel the perception that banks will be criticized for taking certain higher-risk customers when the bank is properly managing that risk. Similarly, they said that the joint statement on innovation could help address derisking concerns because it allows banks to leverage new technologies and innovative approaches to help reduce costs of implementing the strong risk management practices that may be necessary to provide banking services to some higher-risk customers. 
The actions taken to date by the interagency working group are important steps toward improving the efficiency and effectiveness of BSA/AML regulations and supervision. As previously discussed, one reason some banks reported terminating or limiting money transmitter accounts was because of the cost associated with BSA/AML compliance. The interagency statements on sharing BSA resources and innovative efforts to combat money laundering and terrorist financing could help reduce banks’ implementation costs associated with providing banking services to potentially higher-risk customers. However, consistent with our prior work, our evidence demonstrates that banks terminate or limit customer accounts not only as a way to address legitimate money-laundering and terrorist-financing threats, but also as a way to manage regulatory concerns, which may indicate derisking. Reminding examiners and banks of the risk-focused examination approach may help to dispel the perception that banks will be criticized for taking certain higher-risk customers when the bank is properly managing that risk and may indirectly address some factors that influence banks to derisk. Nevertheless, the working group has not yet considered whether there are other supervisory concerns that factor into banks’ decisions to derisk. As we stated in our prior work, it is important to evaluate and address the full range of factors that may be influencing banks to derisk. Therefore, we maintain that FinCEN and the banking regulators should continue to work toward implementing our prior recommendation to conduct a retrospective review of BSA/AML regulations focusing on how banks’ regulatory concerns may be influencing their willingness to provide services. Conclusions Regulators and FinCEN issued the 2005 interagency guidance to clarify BSA/AML requirements and supervisory expectations with regard to accounts banks open or maintain for money transmitters and other MSBs. 
However, some examiners in our discussion groups said they were unclear about how much due diligence is reasonable to expect banks to conduct for their money transmitters. Improving examiners’ ability to evaluate banks’ BSA/AML compliance controls with respect to money transmitter accounts would help ensure that such evaluations are done in accordance with BSA/AML examination objectives of identifying and assessing risks and banks’ ability to manage risks, as set out in the examination manual. Options for making such improvements could include providing examiners with more detailed examination procedures, enhanced information, additional training, or a combination of methods. Recommendations We are making a total of four recommendations to the Federal Reserve, OCC, FDIC, and NCUA: The Board of Governors of the Federal Reserve System should, in coordination with the other federal banking regulators, and with input from BSA/AML examiners and other relevant stakeholders, take steps to improve examiners’ ability to evaluate the effectiveness of banks’ BSA/AML compliance controls with respect to money transmitter accounts. Steps may include providing updates to examination procedures, examiner training, or a combination of methods. (Recommendation 1) The Comptroller of the Currency should, in coordination with the other federal banking regulators, and with input from BSA/AML examiners and other relevant stakeholders, take steps to improve examiners’ ability to evaluate the effectiveness of banks’ BSA/AML compliance controls with respect to money transmitter accounts. Steps may include providing updates to examination procedures, examiner training, or a combination of methods. 
(Recommendation 2) The Chairman of the Federal Deposit Insurance Corporation should, in coordination with the other federal banking regulators, and with input from BSA/AML examiners and other relevant stakeholders, take steps to improve examiners’ ability to evaluate the effectiveness of banks’ BSA/AML compliance controls with respect to money transmitter accounts. Steps may include providing updates to examination procedures, examiner training, or a combination of methods. (Recommendation 3) The Chairman of the National Credit Union Administration should, in coordination with the other federal banking regulators, and with input from BSA/AML examiners and other relevant stakeholders, take steps to improve examiners’ ability to evaluate the effectiveness of banks’ BSA/AML compliance controls with respect to money transmitter accounts. Steps may include providing updates to examination procedures, examiner training, or a combination of methods. (Recommendation 4) Agency Comments We provided a draft of this report to the Federal Reserve, FDIC, NCUA, OCC, and Treasury’s FinCEN for review and comment. The federal regulators provided technical comments on the draft report, which we have incorporated as appropriate. The Federal Reserve, FDIC, NCUA, and OCC also provided written comments (reproduced in appendixes III through VI). They agreed with GAO’s recommendations and expressed a commitment to implement them. We are sending copies of this report to the appropriate congressional committees, the Director of the Financial Crimes Enforcement Network, the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the Federal Deposit Insurance Corporation, the Comptroller of the Currency, and the Chairman of the National Credit Union Administration. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or clementsm@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology This report (1) describes regulators’ Bank Secrecy Act (BSA)/anti-money laundering (AML) supervisory expectations for banks that provide services to money transmitters and other money services businesses (MSB) and examiner views on bank challenges in complying with these requirements; (2) examines challenges reported by examiners in conducting BSA/AML assessments; (3) examines the extent to which banks are terminating or limiting money transmitters’ access to banking services and the effects on money transmitters; and (4) evaluates how the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) and the federal banking regulators have assessed and responded to concerns about the derisking of money transmitters. The federal banking regulators included in our review are the Board of Governors of the Federal Reserve System (Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), and the National Credit Union Administration (NCUA). We define “derisking” as the practice of banks limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering. We developed this definition in our prior work addressing account terminations and branch closures in the U.S. Southwest border region. 
To describe regulators’ BSA/AML supervisory expectations for banks that provide services to money transmitters and other MSBs and federal bank examiners’ views on banks’ challenges in complying with these requirements, we reviewed joint guidance issued by FinCEN and the federal banking regulators in April 2005 on banking MSBs and the Federal Financial Institutions Examination Council’s (FFIEC) BSA/AML examination manual, which federal banking regulators use to examine banks for BSA/AML compliance. We also interviewed the federal regulators named above. Further, we interviewed representatives of 16 banks, six credit unions, and relevant industry groups and trade associations. Because of our judgmental sampling, the views expressed by these groups may not be representative. To identify the universe of banks for interviews, we used data from FDIC’s Statistics on Depository Institutions database as of December 31, 2016. Next, we excluded banks that did not offer the product types relevant to our study, including credit card banks and banks that offer nontraditional accounts; multiple subsidiaries of large holding companies; and federal branches of foreign banks. We also excluded banks with insufficient information to determine the types of accounts offered. In addition, we excluded banks selected to participate in a web- based survey (we describe our survey methodology below). After these exclusions, our initial list consisted of 5,922 banks. Because the primary regulators (Federal Reserve, OCC, and FDIC) do not track which banks have money transmitter customers, we used a judgmental sample to randomly select banks to interview from each of the primary regulators based on asset size (small, medium, and large). For small and medium banks, we interviewed one bank of each size from each of the three regulators. For large banks, all were regulated by OCC, and we interviewed two of these banks. 
We defined banks’ asset-size categories as follows: (1) “small” consisted of banks with assets of less than $1 billion, (2) “medium” consisted of banks with assets of $1 billion to less than $10 billion, and (3) “large” consisted of banks with assets of $10 billion to less than $50 billion. Once we selected our sample, we contacted each bank to confirm that it had money transmitter or other types of MSB customers. If a bank did not have money transmitter or other MSB customers or declined to speak with us, we selected another bank in the same asset-size category. We initially selected nine banks to interview—three in each asset-size category—but one large bank declined to speak with us. Because there were no other large banks in our sample, we interviewed two large banks, for a total of eight small, medium, or large banks. We also jointly interviewed eight extra-large banks (with assets of $50 billion or more) in coordination with our other work on derisking. Because NCUA tracks which credit unions have money transmitter customers, we obtained data from NCUA on credit unions that served money transmitters as of April 2017 and stratified them according to small, medium, and large asset-size categories. We defined credit unions’ asset-size categories as follows: (1) “small” consisted of credit unions with assets of less than $100 million, (2) “medium” consisted of credit unions with assets of $100 million to $500 million, and (3) “large” consisted of credit unions with assets of more than $500 million. We chose three credit unions with the largest numbers of money transmitter customers and randomly selected one credit union from each asset-size category, for a total of six credit unions. From our initial selection, we emailed or called each of the six credit unions to ascertain if it had a money transmitter customer. 
If a credit union did not have a money transmitter customer or declined to speak with us, we selected another credit union in the same asset-size category. We then conducted two discussion groups per regulator with bank examiners from the Federal Reserve, OCC, FDIC, and NCUA to understand how they applied the FFIEC manual in assessing BSA/AML compliance controls of banks with money transmitter customers. To determine the composition of the discussion groups, we identified BSA/AML specialists or subject-matter experts from the district and regional offices of each federal banking regulator located in geographic areas with relatively large numbers of money transmitters. To do this, we first identified the states with the largest numbers of registered money transmitters by analyzing FinCEN money transmitter registration data from January 2015 through May 2017. We then requested rosters of staff designated as BSA/AML subject-matter experts and specialists from each regulator for each district or regional office in those states. We administered a questionnaire to the individuals on each roster asking about their experience with examining banks with money transmitter customers and other questions, such as years of experience in conducting bank examinations. We excluded from consideration BSA/AML subject-matter experts and specialists who either self-identified as supervisors or who had not examined a bank with a money transmitter customer in the past 3 years. We then randomly selected BSA/AML subject-matter experts and specialists to participate in our discussion groups. Depending on scheduling and availability, the number of participants for each discussion group ranged from six to 14. Each session was digitally recorded and transcribed by an outside vendor, and we used the transcripts to summarize participants’ responses.
An initial coder assigned a code that best summarized the statements from discussion group participants and provided an explanation of the types of statements that should be assigned to a particular code. A separate individual reviewed and verified the accuracy of the initial coding. The initial coder and reviewer discussed orally and in writing any disagreements about code assignments and documented consensus on the final analysis results. Discussion groups were intended to generate in-depth information about the reasons for the participants’ views on specific topics. The opinions expressed by the participants represent their points of view and may not represent the views of all BSA/AML subject-matter experts and specialists at the federal banking regulators. For purposes of this report, we used the following terms to describe the number of discussion groups in which an issue is mentioned: “some” to describe two to three groups out of the eight discussion groups, “many” to describe four to five discussion groups, and “most” to describe six to seven discussion groups. To examine challenges reported by federal bank examiners in assessing banks’ BSA/AML compliance controls around money transmitters, we asked examiners in our discussion groups to identify any challenges they encountered when assessing these compliance controls. We also reviewed examination guidance and procedures for assessing BSA/AML compliance controls around money transmitters. We assessed this information against federal internal control standards related to identifying risks and communicating information. We also reviewed bank examination and related documentation from the federal BSA/AML examinations of 56 selected banks and credit unions to gain additional context about BSA/AML examinations, including BSA/AML compliance violations—10 from FDIC, 12 from the Federal Reserve, 22 from OCC, and 12 from NCUA. 
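The qualifiers defined above for characterizing how many of the eight discussion groups raised an issue amount to a fixed mapping, which can be written as a small illustrative function (the function name and the None fallback for undefined counts are assumptions, not part of the report):

```python
def qualifier(groups_mentioning):
    """Map the number of discussion groups (out of eight) that raised an
    issue to the qualifier the report uses: "some" for 2-3 groups,
    "many" for 4-5, and "most" for 6-7. Counts outside those defined
    ranges (0, 1, or 8) return None, since the report does not name them.
    """
    if groups_mentioning in (2, 3):
        return "some"
    if groups_mentioning in (4, 5):
        return "many"
    if groups_mentioning in (6, 7):
        return "most"
    return None
```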
For the documentation review, we selected a nongeneralizable sample of banks and credit unions based on asset-size categories and geographic location (based on each regulator’s field, district, or regional offices) from each federal banking regulator. For banks, we used the same asset-size categories described earlier for our interview selection process. We also included six banks that were issued final BSA/AML enforcement actions—two each from OCC, FDIC, and the Federal Reserve—for calendar years 2014 through 2016. For credit unions, we selected randomly from the same asset-size categories we used for selecting credit unions for interviews—along with geographic locations—and randomly selected four credit unions from each asset-size category, for a total of 12 credit unions. To obtain geographic representation, we ensured that each bank and credit union selected within each asset-size category also represented multiple geographic locations. For each of the 56 banks and credit unions, we requested and reviewed bank examination reports and related workpaper documentation for 2014, 2015, and 2016, including scoping and planning memorandums, bank- or examiner-prepared BSA/AML risk assessments, and conclusion memorandums or documents that summarized BSA examiner findings. For some banks, we also received banks’ BSA policies as part of the examination report and supplemental documentation package. To examine the extent to which banks are terminating or limiting money transmitters’ access to banking services and their reasons why, we administered a web-based survey to a nationally representative sample of banks in the United States for a total survey sample of 406 banks. We did not include credit unions in our sample. In the survey, we asked banks about terminations of money transmitter accounts and limitations on account offerings related to BSA/AML risk and the reasons for these decisions for the 3-year period from January 1, 2014, to December 31, 2016. 
We obtained a weighted survey response rate of 46.5 percent. While we designed the survey to be nationally representative of all banks in the United States, some results are statistically nongeneralizable because of the relatively low number of banks that reported having money transmitters as customers. For survey questions that are statistically nongeneralizable, we present only the number of responses to each survey question, and these results are not generalizable to the population of banks. Moreover, not all banks responded to every survey question or provided information for every year covered by our survey; therefore, we are not able to provide trend information from 2014 through 2016. We administered the survey from July 2017 to September 2017. To obtain information on how bank account terminations and account limitations affected money transmitters, we interviewed a nongeneralizable sample of representatives from 11 money transmitters. To select the money transmitters, we obtained money transmitter licensure data from the Conference of State Banking Supervisors’ Nationwide Multistate Licensing System. Using the number of state licenses as a proxy for the size of the money transmitter, we developed five size categories and selected the top four money transmitters in the first stratum (40 or more licenses) along with one money transmitter in the second, third, and fourth strata (20–39, 10–19, and 2–9 licenses, respectively) and four money transmitters in the fifth stratum (one license). To evaluate how FinCEN and the federal banking regulators have assessed and responded to concerns about derisking of money transmitters, we reviewed agency documentation and guidance the agencies issued to banks related to derisking and MSBs, and we interviewed agency management.
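The five-stratum selection described above (top four money transmitters by license count, one from each of the three middle strata, and four from the single-license stratum) can be sketched as follows. This is an illustrative reconstruction rather than GAO's actual procedure; the helper name, the data shape, and the use of random draws for the non-top strata are assumptions:

```python
import random

# Strata by number of state licenses: (lower bound, upper bound or None,
# number of money transmitters to pick), following the counts above.
STRATA = [
    (40, None, 4),  # 40 or more licenses: top four by license count
    (20, 39, 1),
    (10, 19, 1),
    (2, 9, 1),
    (1, 1, 4),      # single-license stratum: four picks
]

def select_transmitters(transmitters, seed=0):
    """transmitters: list of (name, license_count) pairs.
    Returns up to 11 names: the four largest from the top stratum and
    picks from each remaining stratum (drawn at random here; the report
    does not specify how those picks were made)."""
    rng = random.Random(seed)
    selected = []
    for lo, hi, picks in STRATA:
        members = [(name, n) for name, n in transmitters
                   if n >= lo and (hi is None or n <= hi)]
        if hi is None:
            # Top stratum: take the transmitters with the most licenses.
            members.sort(key=lambda pair: pair[1], reverse=True)
            chosen = members[:picks]
        else:
            chosen = rng.sample(members, min(picks, len(members)))
        selected += [name for name, _ in chosen]
    return selected
```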
We also reviewed a prior GAO report that evaluated regulators’ response to derisking along the Southwest border and assessed actions regulators have taken to respond to a recommendation we made in that report. We used multiple data sources throughout our review. We assessed the reliability of FDIC’s Statistics on Depository Institutions database by reviewing related documentation and conducting electronic testing for missing data, outliers, or any obvious errors. Furthermore, we used NCUA data that track which credit unions bank money transmitters, the Nationwide Multistate Licensing System, and FinCEN’s MSB registration database to help select our nongeneralizable samples of credit unions and money transmitters to interview. We did not assess the data reliability of these sources because we used these data purely to inform our sampling population, and once we selected our samples, we took additional steps to confirm that the institutions we selected had MSB or money transmitter customers and were willing to speak to us. For FinCEN’s MSB registration database, as previously discussed, we used the data to help identify which states had the most money transmitters registered. In analyzing the data, we found a clear difference in the number of MSB registrations between the top five states (California, Texas, Michigan, Florida, and Illinois) with the most MSBs (ranging from close to 800 to almost 4,000 MSBs) and the remaining states (all with fewer than 500 MSBs). Because we used these data to help facilitate the identification of BSA/AML subject-matter experts and specialists who had experience examining banks with money transmitter customers, we did not need to confirm the exact number of MSBs registered. As a result, we did not assess the reliability of FinCEN’s registration database.
We concluded that all applicable data were sufficiently reliable for the purposes of describing BSA/AML risks and compliance challenges and identifying banks to survey on account terminations and limitations. We conducted this performance audit from August 2016 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Responses to Selected Questions from GAO’s Survey of Banks on Account Terminations and Limitations From July 2017 through September 2017, we administered a web-based survey to a nationally representative sample of banks. In the survey, we asked banks about account terminations and restrictions (also referred to as limitations) for reasons associated with managing Bank Secrecy Act/anti-money laundering (BSA/AML) risk; whether banks are terminating or limiting accounts with money transmitters; and the reasons for these decisions. We collected information for the 3-year period from January 1, 2014, to December 31, 2016. Responses to selected questions from our survey that are directly applicable to the research objectives in this report are shown in tables 7–19 below. While we designed the survey to be nationally representative of all banks in the United States, results specific to money transmitters are statistically nongeneralizable because of the relatively low number of banks that reported having money transmitters as customers. Because these survey questions are statistically nongeneralizable, we present only the number of responses to each survey question, and the results are not generalizable to the population of banks. 
Moreover, not all banks responded to every survey question or provided information for every year covered by our survey; therefore, we are not able to provide trend information from 2014 through 2016. Our survey included multiple-choice and open-ended questions. For a more detailed discussion of our survey methodology, see appendix I. Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Stefanie Jonkman (Assistant Director), Kun-Fang Lee (Analyst-in-Charge), Carl Barden, Lilia Chaidez, Giselle Cubillos-Moraga, Joshua Garties, Toni Gillich, Shamiah Kerney, Jill Lacey, Patricia Moye, Aku Shika Pappoe, Jennifer Schwartz, Jena Y. Sinkfield, Tyler Spunaugle, Verginie Tarpinian, and Deme Yoo made key contributions to this report.
Why GAO Did This Study The World Bank and others have reported that some money transmitters have been losing access to banking services. Money transmitters play an important role in the financial system, in part because they provide financial services to people less likely to use traditional banking services. GAO was asked to review the causes and potential effects of derisking by banks. This report examines, among other issues, (1) the extent to which banks are terminating or limiting services for money transmitters, (2) challenges in assessing banks' BSA/AML compliance related to money transmitters, and (3) regulators' actions to address derisking concerns. GAO reviewed bank examination reports and documents, held eight discussion groups with federal bank examiners, surveyed a nationally representative sample of 406 banks (excluding credit unions), and interviewed federal and bank officials, money transmitters, industry associations, and other stakeholders. What GAO Found From 2014 through 2016, 40 of 86 banks with money transmitter customers that responded to GAO's survey indicated they terminated at least one money transmitter account for money-laundering-related reasons. Money transmitters transfer money for their customers to recipients domestically or internationally. Common reasons given for terminating accounts included the customer not providing information needed to satisfy the banks' due diligence requirements under Bank Secrecy Act (BSA)/anti-money laundering (AML) regulations and that the cost of BSA/AML compliance made these customers unprofitable. However, banks also cited concerns that these customers drew heightened regulatory oversight; this may indicate “derisking,” the practice of banks limiting services or closing accounts with customers to avoid any perceived regulatory concerns about facilitating money laundering. 
Federal bank examiners in some of GAO's discussion groups identified challenges in assessing banks' compliance with due diligence requirements. In 2005, the Department of the Treasury's (Treasury) Financial Crimes Enforcement Network (FinCEN) and the federal banking regulators issued interagency interpretive guidance to clarify BSA/AML requirements and supervisory expectations for banks providing banking services to money transmitters. The guidance was incorporated in the Federal Financial Institutions Examination Council BSA/AML examination manual. However, examiners from some discussion groups said it was unclear how much due diligence is reasonable to expect banks to conduct for their money transmitter customers. For example, while the manual's examination guidance pertaining to money transmitters states that due diligence on higher-risk accounts can include reviewing the money transmitter's BSA/AML compliance program or conducting on-site visits, the related examination procedures do not clarify what these reviews or visits might entail. Unless federal banking regulators take steps to improve examiners' ability to evaluate banks' compliance with BSA/AML requirements as applied to money transmitter accounts, examiners may not be fully achieving examination objectives. In response to derisking concerns associated with money transmitters, FinCEN and the federal banking regulators have issued general guidance that discourages banks from terminating accounts with any particular customer type without evaluating individual customers' risks. In prior work, GAO noted that regulators had not fully evaluated how banks' regulatory concerns may be influencing decisions to derisk. GAO recommended that FinCEN and the federal banking regulators conduct a retrospective review of BSA regulations and their implementation, with a focus on how banks' regulatory concerns may affect their decisions to provide services. 
According to federal banking regulators and FinCEN, they and Treasury established an interagency working group in early 2018 that they believe will address the recommendation. The working group has taken important steps toward improving the efficiency and effectiveness of BSA/AML supervision, including issuing an interagency statement intended to improve the transparency of the risk-focused approach examiners use to plan and conduct BSA examinations. However, the working group has not yet evaluated the full range of factors that may influence banks to derisk. What GAO Recommends GAO is making a total of four recommendations to the federal banking regulators that each regulator improve examiners' ability to evaluate banks' BSA/AML compliance as applied to money transmitter accounts. The federal banking regulators agreed with GAO's recommendations. GAO also reiterates its recommendation in GAO-18-263 that FinCEN and the federal banking regulators conduct a retrospective review of BSA regulations and implementation.
Background OCWR allocates functions among its Board of Directors, Executive Director, and General Counsel (see fig. 1). This organizational structure is largely due to statutory requirements in the CAA. As of February 2019, OCWR had 28 full-time equivalent positions, which include five part-time board members (counted as one full-time equivalent) appointed by congressional leadership. This represents an increase of five full-time equivalents since April 2018. OCWR manages an Administrative Dispute Resolution (ADR) process to resolve alleged violations of workplace rights and protections, such as discrimination. The Reform Act overhauled the ADR process, including removing mandatory counseling and mediation periods and a waiting period prior to filing a claim (see fig. 2). OCWR Implemented Some Reform Act Requirements; Other Requirements Are in Progress OCWR Implemented Three of Four Reform Act Requirements Effective June 2019 To advance worker protections, the Reform Act mandated that OCWR implement various new requirements. OCWR has implemented three of the four requirements that generally became effective on June 19, 2019 (see table 1). As of October 2019, OCWR had completed three requirements. Managing changes to the ADR process. OCWR officials stated that because they had initiated a multi-year process to revise procedural rules in 2016, they were more familiar with the steps and timeline needed to implement this requirement in 2019. Appointing a confidential advisor. Similarly, an OCWR official stated that the confidential advisor role was similar to OCWR’s counselor role prior to the Reform Act, which made implementing this requirement more manageable. Creating a secure electronic system to file claims. The online system, SOCRATES, was operational starting June 26, 2019, 7 days after the requirement’s effective date of June 19, 2019.
Between June 19 and June 26, 2019, OCWR implemented a fillable PDF form so that claims could be submitted electronically by email or fax. OCWR officials reported that no claims were filed during the 7-day delay, and therefore, they believe that the delay did not negatively affect employees’ ability to file claims. According to OCWR, testing the system the week prior to June 19, 2019, revealed numerous problems with SOCRATES. For example, if a user did not submit his or her claim within a certain amount of time, the system refreshed the page without saving the user’s data, forcing the user to restart the claim. Also, during a June 17, 2019, meeting between OCWR and congressional staff, OCWR received requests to further revise forms associated with SOCRATES. OCWR was unable to implement these changes before the June 19, 2019, deadline. As a result, OCWR delayed the launch of SOCRATES until June 26, 2019, to allow time to resolve these issues and fully test the system. However, OCWR did not communicate the decision to delay the full launch of SOCRATES to congressional stakeholders who had expected that the system would be delivered on time. As of October 2019, OCWR had not completed one requirement that was due by June 19, 2019. Establishing a program to permanently retain records. The Reform Act required OCWR to establish and maintain a permanent records retention program, which includes records of preliminary reviews, mediations, hearings, and other proceedings. Since November 2017, OCWR has operated under an interim records retention policy that requires it to permanently keep all records. According to OCWR, it is not destroying or deleting any records. OCWR’s interim permanent records retention policy states that OCWR will establish standards and procedures for records integrity, privacy, and confidentiality.
However, as of October 2019, about 4 months after this requirement became effective, OCWR had not developed these standards or established other policies or procedures for maintaining a permanent records retention program other than the interim policy. According to OCWR, it scanned paper records to create electronic files and hired a separate contractor in September 2019 to help further develop its records retention program.

OCWR Is Implementing Three Reform Act Requirements with Time Frames beyond June 2019

As of October 2019, OCWR was implementing the other three requirements, which have varying deadlines, time frames, and effective dates extending beyond June 19, 2019 (see table 2).

Tracking and reporting data and assessments. The Reform Act created new reporting requirements for OCWR. For example, it required OCWR to issue annual, itemized reports on awards and settlements. The Reform Act also required OCWR to issue a one-time report on awards and settlements previously paid, which OCWR published on January 20, 2019. OCWR plans to issue the report on 2019 awards and settlements by January 31, 2020, and subsequent reports annually. The Reform Act also required OCWR to use SOCRATES data to assess the effectiveness of ADR procedures in resolving claims in a timely manner and to publish these assessments in semi-annual reports to Congress. OCWR plans to issue the first semi-annual report by January 31, 2020.

Conducting a workplace climate survey. The Reform Act required OCWR to conduct a secure survey of legislative branch offices covered by the act by December 20, 2019 (within one year of enactment), and every 2 years thereafter. The survey would assess the workplace environment, including attitudes toward sexual harassment. As of October 2019, OCWR officials reported that they were waiting for additional input from congressional staff before proceeding with certain aspects of the survey.
According to OCWR officials, OCWR's House and Senate oversight committees had different views of what the survey should include. Therefore, OCWR plans to conduct separate surveys for House offices, Senate offices, and other legislative branch offices. According to OCWR officials, they may be able to launch the House survey by the December 20, 2019, deadline, with the other surveys following. However, the timeline for conducting these surveys is uncertain until OCWR can confirm the surveys' content with congressional staff and conduct various tests, such as separately pilot testing each of the three surveys. Additional work remaining includes: reviewing changes to the survey questions, obtaining input from legislative branch stakeholders, conducting internal testing of the survey, pilot testing the survey with legislative offices, and finalizing the survey and communications to survey recipients.

Educating and assisting legislative branch agencies. OCWR updated various education and training materials, such as: creating a new workplace rights brochure; redesigning a poster notifying employees of rights, protections, and procedures under the CAA; and establishing audio and visual teleconferencing access for out-of-area employees (i.e., legislative branch employees in elected officials' district and state offices). An OCWR official reported that, in October 2019, OCWR developed a training video on new procedures under the Reform Act. A link to the training video was included in the September 2019 quarterly e-newsletter sent to all legislative branch employees covered under the CAA. According to the official, OCWR also plans to launch another training video in November 2019 and develop three new training classes.
OCWR Could Better Incorporate Key Management Practices to Improve the Ongoing Implementation of Requirements

We found that OCWR incorporated some key management practices when implementing Reform Act requirements (see appendix II for a list of management practices we used to assess OCWR). However, we also found that OCWR did not consistently incorporate key management practices for some requirements and that opportunities exist to improve the remaining implementation and administration of Reform Act requirements.

OCWR Incorporated Some Key Management Practices to Implement Requirements

We found that OCWR incorporated some key change management or project management practices applicable to implementing Reform Act requirements. For example:

OCWR defined the Reform Act requirements and created 21 task teams for implementing them.

OCWR dedicated an implementation team to manage the transformation process.

OCWR designated a manager to track the implementation status for all task teams. The task team leaders also met weekly.

OCWR established an overall project schedule with interim milestones and time frames for revising procedural rules, part of the requirement to manage changes to the ADR process.

OCWR also established an overall project schedule for conducting the workplace climate survey. OCWR officials reported that having this schedule has enabled them to track progress, determine that the survey is behind schedule (as of October 2019), and communicate revised expectations to stakeholders.

In addition, OCWR officials stated they identified and addressed risks associated with the requirement to appoint a confidential advisor. These risks included the perception of a potential conflict of interest that could arise if an attorney contracted from a private law firm served as the confidential advisor.
To mitigate this perception, OCWR hired the confidential advisor as an employee to ensure that the confidential advisor cannot refer claimants to his or her own law firm for legal representation.

OCWR Did Not Use Project Schedules to Manage SOCRATES and Other Requirements

Project schedules provide a detailed plan for delivering products, services, and results in a timely manner, as well as serve as a communication tool for managing stakeholder expectations. OCWR used project schedules to revise the procedural rules and develop the workplace climate survey but did not use schedules to manage the implementation of other requirements. In particular, for SOCRATES, OCWR officials reported that they proposed a draft schedule but did not finalize this draft or otherwise document changes to the schedule. According to these officials, they did not update the schedule because their implementation plans had changed significantly, and the compressed time frame resulted in making changes "on the fly." For example, they revised the system architecture as late as 3 weeks before the mandated deadline, which required signing an interagency agreement for hosting the system with the Library of Congress the day before the mandated deadline. In addition, OCWR encountered last-minute issues when testing the system, as we previously discussed. As a result, OCWR delayed the full launch of SOCRATES but did not communicate this decision to congressional stakeholders who had expected that the system would be delivered on time. Although the delay was short, a project schedule could have helped OCWR manage stakeholder expectations. Without a schedule for SOCRATES, OCWR missed opportunities to take corrective actions earlier, communicate with congressional stakeholders, and better manage expectations. OCWR has ongoing cybersecurity activities and planned upgrades to other information technology (IT) systems, but has not yet established project schedules for them.
Moving forward, it will be important for OCWR to establish project schedules to manage IT projects and allow key stakeholders to monitor OCWR's progress.

OCWR Has Not Addressed Risks for Its Records Retention Program

OCWR has taken interim steps to establish a permanent records retention program. These steps include changing its records retention policy in November 2017 to make all records permanent, hiring a contractor in May 2019 to scan paper records and store them electronically, and hiring another contractor in September 2019 to help develop its records retention program. Key management practices call for organizations to identify and assess risks that could affect their ability to achieve their goals and objectives and to monitor and manage these risks as the projects progress. OCWR identified the largest potential risk to establishing and maintaining a permanent records retention program as the loss of control over confidential files. For example, an OCWR official confirmed that OCWR maintains a physical file for every electronic file, which results in multiple storage locations and duplicate records. Although this ensures the availability of records, multiple storage locations can make ensuring the confidentiality and security of these records more difficult. However, as of October 2019, OCWR had not yet developed a policy to address this risk, nor had it identified other risks. OCWR officials stated that the contractor will help with these tasks. They also reported that they plan to develop policies for the records retention program, particularly for maintaining the privacy and security of records, based on records management requirements for executive branch agencies. According to OCWR officials, addressing risks for its records retention program is not a high priority, and demand for records is low.
Nevertheless, if OCWR does not address the potential risks, and any emerging risks, associated with permanently retaining sensitive records, OCWR may be less able to manage its records and ensure their confidentiality, integrity, and availability.

OCWR Lacks Results-Oriented Performance Goals, Related Measures, and Reports Assessing Progress

We have previously reported that a critical element in an organization's efforts to manage for results is its ability to set meaningful goals for performance and to measure progress toward these goals. Strategic goals are intended to be the starting point for an organization's performance measurement efforts. To provide a connection between the long-term strategic goals and the day-to-day activities, organizations should also establish near-term performance goals and measures. Finally, an organization needs to report on its performance to provide information to its stakeholders on the extent to which it has met its performance goals and what it accomplished with its resources. Leading organizations then apply this performance information to continuously improve organizational processes, identify performance gaps, and set improvement goals. OCWR's 2019-2023 strategic plan includes several broad, long-term, outcome-related goals that address Reform Act requirements. These goals are supported by objectives, called initiatives. For example, OCWR has a long-term strategic goal to "provide an efficient and effective ADR program." A supporting initiative is to "ensure that ADR processes meet statutory and regulatory mandates, including mandates for maintaining confidentiality." However, this initiative does not state what is to be achieved and by when, and none of the supporting initiatives reflect near-term performance goals that allow for an objective assessment of progress. Performance goals, which are used to assess progress toward long-term goals, should be stated in objective, measurable, and quantifiable terms.
OCWR identifies performance measures in its strategic plan, but the measures lack target levels of performance which would help assess progress toward goals. For example, one of OCWR’s initiatives is to “empower stakeholders to effectively resolve their workplace disputes without having to engage in protracted dispute resolution proceedings.” A supporting performance measure is to “track the rate of cases resolved by negotiated settlements.” This measure provides a starting point for collecting performance information but does not specify how it can be used to assess progress on the initiative. We have previously reported that successful performance measures commonly demonstrate results, are limited to the vital few, respond to multiple priorities, and link to responsible programs. OCWR does not report on progress toward goals in its annual report, partly because of the lack of performance goals and measures assessing progress. OCWR’s annual reports summarize statistical data about the number of employees using OCWR’s services and reasons for ADR claims from each fiscal year, which is information required to be published under the CAA. However, these statistics do not compare actual performance against measurable performance goals. Further, OCWR does not report how it used the data to improve its programs. For example, in its fiscal year 2018 annual report, OCWR reported the number and types of workplace issues that employees inquired about in general information requests and raised during formal counseling requests. However, OCWR did not report how it used this information to identify trends and develop training programs to target the indicated issues. According to OCWR officials, OCWR does not set more specific or measurable goals and measures beyond what is included in its strategic plan. 
In addition, they stated that OCWR's current performance goals and measures reflected their concern that changes from the Reform Act could affect their workload, such as the number of cases filed and how they would be settled. They plan to reassess their performance starting in June 2020, about 1 year after many Reform Act requirements became effective, and establish new performance measures and targets based on updated baseline performance levels. Clearly defining performance goals, such as establishing measurable performance targets and milestones, and related performance measures would provide OCWR information to determine if it is making progress toward its long-term goals and better communicate with congressional and other key stakeholders about its progress. Moreover, such performance data would allow OCWR to make more informed decisions to improve performance, such as determining what activities are working as intended and achieving results, and which are not and could be improved. Finally, sharing this information in publicly available annual reports could make it more useful and transparent for stakeholders, as well as strengthen OCWR's accountability for making progress toward its goals.

OCWR Conducts Various Education and Outreach Activities and Has Opportunities to Better Evaluate the Effectiveness of Its Efforts

OCWR has a broad mandate to provide education and information to Members of Congress and covered legislative branch offices and employees about their rights, protections, and responsibilities under the CAA. OCWR routinely conducts educational activities, such as holding brown bag events, creating online training, and posting resources on its website and social media channels.
OCWR also performs outreach by meeting with congressional committees regularly, communicating with stakeholders (e.g., House and Senate Employment Counsel), meeting with heads of legislative branch employing offices at least annually, and sending an annual notice of rights to all legislative employees. However, we found that OCWR's assessment of these activities is limited; for example, OCWR collects feedback forms from attendees of its brown bag events. While important, these efforts do not enable OCWR to assess the effectiveness of its education and outreach activities and the extent to which they are reaching all covered legislative branch populations. Key management practices call for continuous monitoring to identify areas that require additional attention. In 2004, we recommended that OCWR use various approaches, such as feedback surveys, to increase its understanding of the actual level of awareness of its activities among target populations. In response to the recommendation, from 2008 to 2009, OCWR surveyed legislative branch employees but had a low response rate, which rendered the survey data of limited value. OCWR officials attributed the low response rate to not having all respondent email addresses, as well as the lack of statutory authority to conduct surveys. Through the Reform Act's requirement to conduct a workplace climate survey every 2 years, OCWR has new opportunities to collect data on the extent to which legislative branch employees are aware of OCWR's services and their rights under the CAA. Because the Reform Act states that OCWR must consult with congressional committees on how to carry out the survey, OCWR has sought guidance from its oversight committees on what information to collect for the survey and the use of the results. In addition to developing the climate survey, an OCWR official stated that, in March 2019, OCWR also hired a training and education project manager who is responsible for developing an education and outreach strategy.
This effort is intended to include assessing awareness levels of OCWR's activities among legislative branch populations and improving awareness where needed. However, as of October 2019, OCWR did not provide documentation of the strategy or a timeline for its completion. A mechanism for collecting feedback more widely from all covered legislative branch employees could improve OCWR's understanding of the reach and effectiveness of its education and outreach efforts. For example, it could help OCWR determine if it may be missing certain subsets of legislative branch populations, such as maintenance workers who may have limited computer access. Further, such information could help inform management and resource allocation decisions, such as where to focus education and outreach efforts and how to increase their effectiveness.

OCWR Could Better Support IT Initiatives with Strategic Planning and Human Capital Management

In 2004, we reviewed OCWR's management practices and made 20 recommendations to help OCWR: strengthen strategic planning and develop results-oriented performance measures; ensure an effective, results-oriented program structure; build effective communication emphasizing outreach and coordination with congressional and legislative branch stakeholders; and create and sustain an enhanced management control environment, particularly for managing human capital and performance. Between 2004 and 2013, we determined that OCWR had implemented 18 of the 20 recommendations. In this review, we found that, of these 18 recommendations, OCWR subsequently stopped implementing an information technology (IT) planning recommendation that could have strengthened its ability to better manage and implement the requirements in the Reform Act. We had recommended that OCWR ensure that IT planning and implementation be an integral component of the strategic planning process.
This recommendation focused on positioning OCWR to effectively leverage technology in achieving strategic mission goals and outcomes. To do this, OCWR needed to establish certain basic IT management capabilities, such as: developing a picture or description, based on OCWR's strategic plan, of what it wanted its future IT environment to look like; establishing and following a defined and disciplined process for allocating limited resources across competing IT needs; using a rigorous IT system acquisition management process; and ensuring that needed IT skills have been identified and obtained. OCWR took steps in 2003 and 2005 to create an IT task force and issue a multi-year IT plan, respectively. However, these efforts were not sustained. An OCWR official explained that OCWR had not conducted IT planning, including developing an IT strategic plan, in recent years because of limited resources and difficulties attracting a candidate for the IT manager position. These challenges resulted in the position remaining vacant for approximately 18 months from 2016 to 2018. In past work, we have reported that IT strategic planning can help an organization align its IT goals and investments with its strategic goals. A key element of IT strategic planning is developing an IT strategic plan that can serve as an organization's vision or road map to guide its efforts and investments. OCWR officials reported that they will be developing an IT strategic plan. However, as of October 2019, they were unable to provide additional documentation or a timeline for completion. Without IT strategic planning, OCWR may be less able to set forth a long-term vision of its IT environment and measure progress in carrying out its strategic initiatives. For example, OCWR envisioned developing an electronic claims filing system as one of its strategic initiatives as early as fiscal years 2013 to 2015.
However, that system was not implemented until 2019, in part because OCWR did not have an IT strategic plan and related IT expertise to support the initiative. With increased funding for implementing Reform Act requirements, OCWR has recently re-focused on human capital management. In September 2018, it hired an IT manager whose responsibilities include IT planning and cybersecurity. In September 2019, OCWR hired a contractor to help update its human capital plan, which had not been updated since 2009. We have previously reported that effective human capital management is critical to sustaining an IT workforce with the necessary skills to execute a range of management functions that support the agency's mission and goals. Given OCWR's strategic and ongoing IT initiatives, it will be important for OCWR to consider leading practices in human capital management to ensure that it has the appropriate skills and capacity to meet its current and future responsibilities. These leading practices include: consulting with key stakeholders when developing human capital strategies; having a system in place to continually assess and improve human capital planning and investment; determining critical skills and competencies its workforce needs to achieve current and future agency goals; linking the strategic workforce plan with the organization's strategic plan; developing customized strategies to recruit highly specialized talent; and having an ongoing succession planning process for identifying and developing a diverse talent pool. If OCWR does not continue to strategically assess and manage its human capital needs, it could again find itself with IT or other skills gaps that could negatively affect its ability to meet its mission. Incorporating these leading practices in human capital management could help OCWR develop a workforce plan that better aligns with its mission and goals, as well as develop long-term strategies for recruiting and retaining staff to achieve those goals.
Conclusions

Although small in size, OCWR has important responsibilities: to advance the safety, health, and workplace rights of employees and employers in the legislative branch. The Reform Act updated how OCWR carries out these responsibilities, such as requiring OCWR to offer an electronic option for filing Administrative Dispute Resolution (ADR) claims and to conduct a workplace climate survey of legislative branch employees. To meet these new requirements, OCWR developed new procedures, trained and hired staff, and balanced new responsibilities with existing ones. As a result, OCWR completed three requirements: managing changes to the ADR process, appointing a confidential advisor, and creating a secure electronic claims reporting system. However, as of October 2019, OCWR had not fully completed the requirement to establish and maintain a program for permanent records retention. To meet this requirement, OCWR needs to develop and implement policies and procedures to administer and manage the program, as well as identify and address potential risks related to the privacy and security of records, among other actions. To help OCWR meet requirements with ongoing work and build upon completed work, it will be important for OCWR to incorporate key practices for project management, such as consistently developing and using project schedules and assessing risk. These practices could help OCWR better manage expectations and prioritize high-risk work. Further, establishing results-oriented performance goals and measures and collecting and using performance information could help OCWR evaluate and better focus its education and outreach efforts, as well as assess progress toward its strategic goals. Finally, OCWR should use its strategic planning process to clearly articulate how its IT initiatives support its organizational goals, such as how the electronic claims reporting system supports a more efficient and effective ADR program.
Establishing performance measures and monitoring actual-versus-expected performance of those measures can help determine whether IT is making a difference in improving performance, and in turn whether OCWR is better serving the legislative community. Additionally, OCWR needs to evaluate its human capital management strategies to ensure that it can recruit and retain staff with the appropriate skills to plan and manage IT projects, as part of a larger effort to conduct IT planning.

Recommendations for Executive Action

We are making the following six recommendations to OCWR:

The Executive Director of OCWR, in collaboration with relevant managers, should establish a policy that requires a schedule of tasks to be developed, documented, and updated throughout the lifetime of IT system projects. (Recommendation 1)

The Executive Director of OCWR should identify and assess risks in establishing and maintaining a permanent records retention program, and develop policies and procedures to ensure that risks are properly addressed. (Recommendation 2)

The Executive Director of OCWR should identify desired performance results, develop performance measures that demonstrate the degree to which the desired results were achieved, and report progress toward those results in OCWR's annual reports. (Recommendation 3)

The Executive Director of OCWR should collect relevant data through a survey or other mechanisms, and use the information to evaluate the effectiveness of education and outreach efforts and the extent to which they are reaching all covered legislative branch populations. (Recommendation 4)

The Executive Director of OCWR should integrate IT planning and implementation into the agency's strategic planning process. (Recommendation 5)

The Executive Director of OCWR should incorporate key strategic human capital management practices, such as developing strategies to recruit and retain staff with mission-critical skills, into the strategic planning process.
(Recommendation 6)

Agency Comments

We provided a draft of the report to OCWR for review and comment. In its comments, reproduced in appendix III, OCWR agreed with our findings and concurred with our six recommendations. To address the recommendations, OCWR noted that it has already taken some actions, such as hiring a contractor to assess risks related to permanent records retention. It intends to implement additional changes, such as developing a policy for IT project planning. Going forward, OCWR agreed to update us on its progress implementing the recommendations. We are sending copies of this report to the appropriate congressional committees, the Executive Director of OCWR, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our first objective was to determine the status of the Office of Congressional Workplace Rights' (OCWR) efforts to address new requirements from the Congressional Accountability Act of 1995 Reform Act (Reform Act). To meet this objective, we reviewed applicable laws and identified the new requirements. We reviewed the Reform Act and grouped Reform Act requirements into seven categories of activities based on similar characteristics, such as requirements related to amending the claims process, and how these requirements aligned with OCWR's task teams working on these requirements.
We also collected and reviewed documentation on OCWR’s implementation process and management practices, such as OCWR’s list of tasks and task teams, task team meeting notes, progress reports, agreements with outside vendors, and email communications. Our second objective was to assess how OCWR is incorporating key management practices to implement the Reform Act’s new requirements. To meet this objective, we analyzed OCWR’s implementation of new requirements against key practices for organizational change management we identified in our 2003 report, Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations (GAO-03-669) and key practices for project management from the Project Management Institute Inc.’s A Guide to the Project Management Body of Knowledge, PMBOK Guide®. We determined which key practices and related implementation steps were applicable to OCWR based on the following factors: (1) if the practices aligned with the scope and nature of OCWR’s work, and (2) if the practices applied to OCWR’s implementation timeline given Reform Act deadlines. We shared these key management practices with OCWR. Our third objective was to determine the extent to which OCWR implemented recommendations from our 2004 report, Office of Compliance: Status of Management Control Efforts to Improve Effectiveness (GAO-04-400). To meet this objective, we reviewed OCWR’s plans and documentation of its activities, such as strategic plans and annual reports, to address the recommendations. We then assessed OCWR’s plans and activities against our original recommendations and the recommendations’ implementation status to determine the extent to which OCWR implemented the recommendations in the past and has continued to take similar actions. For all three objectives, we interviewed OCWR officials and conducted semi-structured interviews with a nonprobability sample of key stakeholders and officials from offices covered by the Reform Act. 
Although results from these interviews are not generalizable to all stakeholders or offices covered by the act, they provided views and illustrative examples about OCWR's efforts to address new Reform Act requirements, OCWR's efforts to incorporate key management practices to implement those new requirements, and the extent to which OCWR implemented some of our previous recommendations. These stakeholders and offices include the Architect of the Capitol, Senate Chief Counsel for Employment, and Office of House Employment Counsel. To obtain perspectives from a range of stakeholders and offices, we considered the following factors in our selection: the size of the office or agency, by number of employees; the extent to which offices/agencies are involved in outreach, by number of Administrative Dispute Resolution cases and Occupational Health and Safety inspections/Americans with Disabilities Act inspections; and past interviews with offices/agencies discussing OCWR outreach, for balance of perspective (e.g., both chambers of Congress). We also interviewed the House Office of Employee Advocacy and the House Office of the Chief Administrative Officer safety team to obtain additional views on their interactions with OCWR. We conducted this performance audit from January 2019 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Key Management Practices

We determined that the following key organizational change management practices and key project management practices, as well as related implementation steps, were relevant for assessing the Office of Congressional Workplace Rights' (OCWR) management practices.
Appendix III: Comments from the Office of Congressional Workplace Rights

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Steven Lozano (Assistant Director), Elizabeth Fan (Analyst in Charge), David Blanding, Giny Cheong, Patrick Dibattista, Karin Fangman, Ben Licht, Patricia Powell, and Edith Yuh made key contributions to this report. Karen Brindle, Hannah Brookhart, Kisa Bushyeager, Terrell Dorn, Robert Gebhart, Lisa Hardman, Ted Hu, Susan Irving, Sonya Johnson, Amalia Konstas, Kaelin Kuhn, Patricia McClure, Zina Merritt, Edda Emmanuelli Perez, Robert Robinson, Sukhjoot Singh, Jon Ticehurst, Alicia White, and Rebecca Woiwode also provided valuable assistance.
Why GAO Did This Study

OCWR is an independent, non-partisan office that administers and enforces various provisions related to fair employment and occupational safety and health within the legislative branch. Responding to concerns about sexual harassment in the workplace, Congress passed the Reform Act in 2018, which expanded worker protections and overhauled the process for resolving workplace claims, including claims relating to discrimination and harassment. The act also required OCWR to create a secure, electronic claims system and appoint a confidential advisor to assist claimants, among other requirements. The Reform Act includes a provision for GAO to review OCWR's management practices. This report examines (1) the status of OCWR's efforts to address new requirements in the Reform Act; (2) how OCWR is incorporating key management practices to implement the new requirements; and (3) the extent to which OCWR implemented recommendations from a related 2004 GAO report. GAO reviewed documentation on OCWR's processes, interviewed officials from OCWR and selected legislative branch offices, and assessed how OCWR's actions aligned with key organizational change management practices that GAO identified and key project management practices from the Project Management Institute.

What GAO Found

The Office of Congressional Workplace Rights' (OCWR) mission is to effectively implement and enforce the Congressional Accountability Act of 1995 (CAA), as amended in 2018 by the Congressional Accountability Act of 1995 Reform Act (Reform Act). OCWR has implemented three of the four Reform Act requirements that generally became effective June 19, 2019, as shown below. Three other Reform Act requirements—track and report data and assessments, conduct a workplace climate survey, and educate and assist legislative branch offices—are in progress.
OCWR has incorporated some key management practices when implementing requirements, such as managing risks associated with appointing a confidential advisor. However, opportunities exist to further incorporate key management practices in OCWR's work. For example:

Addressing risks. OCWR has not yet developed policies and procedures to address the risks associated with permanently retaining sensitive records, such as ensuring they remain confidential when stored in multiple locations.

Measuring performance. OCWR has not established measurable performance targets and milestones or related performance measures. Doing so would allow OCWR to determine if it is making progress toward its long-term goals and better communicate with congressional and other stakeholders about its progress.

Monitoring effectiveness. OCWR routinely conducts educational activities, such as holding brown bag events and online training, and performs a variety of outreach activities. OCWR has new opportunities every 2 years to collect data through the workplace climate survey on the extent to which legislative branch employees are aware of OCWR's services and their rights under the CAA.

GAO found that OCWR implemented most recommendations from a 2004 GAO report examining OCWR's management controls. GAO also found that OCWR later stopped implementing a recommendation related to information technology (IT) planning, including ensuring that it obtained necessary IT skills. Without IT strategic planning, including recruiting and retaining staff with mission-critical IT skills, OCWR may be less able to carry out its strategic initiatives.

What GAO Recommends

GAO is making six recommendations to OCWR to better incorporate key management practices as it implements requirements, and to improve its strategic planning. OCWR agreed with GAO's recommendations.
Background

HECM Program

A reverse mortgage is a nonrecourse loan against home equity that does not require mortgage payments as long as the borrower meets certain conditions. In contrast to traditional forward mortgages, reverse mortgages typically are “rising debt, falling equity” loans (see fig. 1). As the borrower receives payments from the lender, the lender adds the principal and interest to the loan balance, reducing the borrower’s home equity. Also unlike traditional forward mortgages, reverse mortgages have no fixed term. Prospective borrowers must meet a number of requirements to be eligible for a HECM (see sidebar). The amount of money a borrower can receive from a HECM—called the principal limit—depends on three things: (1) the age of the youngest borrower or eligible nonborrowing spouse, (2) the lesser of the appraised value of the home or the FHA mortgage limit as of the date of loan closing (for calendar year 2019, $726,525), and (3) the expected average interest rate. The borrower can receive funds in a variety of ways—for example, as monthly payments, a line of credit, a combination of the two, or a single lump sum. A large majority of borrowers choose the line of credit option. The interest rate lenders charge is typically an adjustable rate, although the lump sum option can be chosen at a fixed interest rate.

[Sidebar: among other requirements, prospective borrowers must attend a counseling session given by a HECM counselor approved by the Department of Housing and Urban Development (HUD) and make timely payment of ongoing property charges (e.g., taxes and insurance). HECMs become due and payable when the borrower has not paid property charges, met occupancy requirements, or maintained the home.]
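The three determinants of the principal limit can be sketched as a short calculation. This is an illustrative sketch only: the function names are ours, and the principal limit factor is an invented placeholder standing in for HUD's published table lookup on borrower age and expected interest rate.

```python
# Illustrative sketch of how a HECM principal limit is derived.
# The principal limit factor (plf) is a made-up placeholder; real
# factors come from HUD's published tables, which vary by the age
# of the youngest borrower (or eligible nonborrowing spouse) and
# the expected average interest rate.

FHA_MORTGAGE_LIMIT_2019 = 726_525  # FHA mortgage limit, calendar year 2019

def maximum_claim_amount(appraised_value: float) -> float:
    """Lesser of the appraised home value or the FHA mortgage limit."""
    return min(appraised_value, FHA_MORTGAGE_LIMIT_2019)

def principal_limit(appraised_value: float, plf: float) -> float:
    """Principal limit = maximum claim amount x principal limit factor."""
    return maximum_claim_amount(appraised_value) * plf

# A $400,000 home with a hypothetical factor of 0.50:
print(principal_limit(400_000, 0.50))  # 200000.0
```

For a home appraised above the FHA limit, the maximum claim amount is capped: under the 2019 limit, a $1,000,000 appraisal yields a maximum claim amount of $726,525.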
HECM borrowers (or their heirs) satisfy the debt by (1) paying the loan balance using their own funds, (2) selling the home and using the proceeds to pay off the loan balance, (3) providing a deed-in-lieu of foreclosure (which transfers title for the property to the lender to satisfy the debt), or (4) selling the home for at least the lesser of the loan balance or 95 percent of the property’s appraised value (also known as a short sale). According to FHA regulations, the borrowers or their heirs generally have 30 days after being notified that the loan is due and payable to satisfy the debt or bring the loan out of due and payable status. Servicers generally have 6 months to take first legal action to initiate foreclosure from the date that they, as applicable, notified, should have notified, or received approval from FHA that the HECM is due and payable. According to FHA regulations, the borrower is generally allowed to correct the condition that resulted in the due and payable loan status and reinstate the loan, even after foreclosure proceedings have begun. Figure 2 illustrates the reasons why HECMs terminate and how borrowers typically satisfy the debt under various termination scenarios. If the servicer experiences a loss because the loan balance exceeds the recovery from selling the property, the lender can file a claim with FHA for the difference. Additionally, when the loan balance reaches 98 percent of the maximum claim amount (the lesser of the appraised value of the home at origination or FHA’s loan limit), the lender can “assign” the loan to FHA and file a claim for the full amount of the loan balance, up to the maximum claim amount. Lenders can only assign HECMs in good standing to FHA (that is, assignments can only be for HECMs not in a due and payable status). FHA continues to service the assigned loans using a contractor until the loans become due, either due to the death of the borrower or for other reasons. 
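Two of the numeric rules above, the short-sale payoff floor (the lesser of the loan balance or 95 percent of appraised value) and the assignment trigger (loan balance reaching 98 percent of the maximum claim amount, for loans in good standing), can be sketched as follows. The function names are ours, and amounts are rounded to cents for illustration.

```python
# Sketch of two rules described in the text (names are ours):
# 1. A short sale must recover at least the lesser of the loan
#    balance or 95% of the property's appraised value.
# 2. A lender may assign a HECM to FHA once the loan balance
#    reaches 98% of the maximum claim amount, but only if the
#    loan is in good standing (not due and payable).

def minimum_short_sale_payoff(loan_balance: float, appraised_value: float) -> float:
    return round(min(loan_balance, 0.95 * appraised_value), 2)

def assignable_to_fha(loan_balance: float, max_claim_amount: float,
                      in_good_standing: bool) -> bool:
    return in_good_standing and loan_balance >= 0.98 * max_claim_amount

# A $210,000 balance on a home now appraised at $200,000:
print(minimum_short_sale_payoff(210_000, 200_000))  # 190000.0
# A $211,000 balance against a $215,000 maximum claim amount:
print(assignable_to_fha(211_000, 215_000, in_good_standing=True))   # True
print(assignable_to_fha(211_000, 215_000, in_good_standing=False))  # False
```

The second example shows why a due-and-payable loan cannot be assigned even when the balance exceeds the 98 percent threshold.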
Additionally, the FHA insurance guarantees borrowers will be able to access their loan funds, even if the loan balance exceeds the current value of the home or if the lender experiences financial difficulty. Further, if the borrower or heir sells the home to repay the loan, he or she will not be responsible for any loan amount above the value of the home. As of the end of fiscal year 2018, FHA had insured over 1 million HECMs. According to FHA data, these include an active HECM portfolio of approximately 551,000 loans serviced by various FHA-approved servicers, 79,000 FHA-assigned loans serviced by an FHA contractor, and about 468,000 terminated loans (see fig. 3). HECM terminations have exceeded new originations every year since fiscal year 2016, and the number of HECMs assigned to FHA has grown substantially since fiscal year 2014. As of the end of fiscal year 2018, FHA’s total insurance-in-force for HECMs (total insured mortgage balances outstanding) was roughly $100 billion. HECMs are held in two FHA insurance funds. HECMs originated prior to fiscal year 2009 are in the General Insurance and Special Risk Insurance Fund (roughly 27 percent of all HECMs), and those originated in fiscal year 2009 and later are in the Mutual Mortgage Insurance Fund (roughly 73 percent of all HECMs). When the post-2008 HECM portfolio became part of FHA’s Mutual Mortgage Insurance Fund, it also was included in the fund’s capital ratio assessment and became subject to annual actuarial reviews. As we found in a November 2017 report, subjecting HECMs to the annual actuarial review requirements has improved the transparency of the program’s financial condition and has highlighted the financial risks of the HECM portfolio to FHA. According to FHA, the financial performance of the HECM portfolio has been historically volatile, largely due to uncertainty in future home prices, interest rates, and other factors. 
In recent years, FHA has responded with several policy changes to help strengthen the portfolio’s financial performance and mitigate risks. Because FHA’s projected losses on HECMs depend on factors such as maximum claim amount, the length of time the borrower stays in the home, changes in home prices, and interest rates, most of FHA’s policy changes have been aimed at better aligning expected revenues (charging borrowers premiums) with expected costs (cash outflows due to paying insurance claims). For example, FHA has made changes to insurance premiums and principal limits, the most recent of which took effect in fiscal year 2018. Effective in fiscal year 2019, FHA also revised property appraisal practices for new HECMs to guard against inflated property valuations. According to agency officials, FHA made this change to address appraisal bias concerns identified in research by an economist in HUD’s Office of Policy Development and Research.

HECM Market Participants

The HECM market includes various participants. After a lender originates a HECM, the loan must be serviced until it terminates. HECM lenders and servicers must be FHA-approved and can be the same entity but often are not. HECM lenders often sell the mortgage to another entity, which FHA refers to as an investor, and this entity has the right to enforce the mortgage agreement. HECM servicers are typically third parties that contract with lenders or investors but do not have ownership in the loans they service. As previously discussed, HECM servicers perform a number of functions, such as making payments to the borrowers and providing monthly account statements. Servicers also must monitor borrower compliance with various mortgage conditions and, if necessary, communicate with borrowers about any violation of these conditions (defaults) and, as appropriate, ways they can avoid being foreclosed on.
HECM servicers also transfer up-front and annual insurance premiums to FHA each month and file claims with FHA for losses on insured HECMs. In carrying out these duties, servicers are responsible for complying with various requirements, including FHA regulations, policies, and procedures, as well as federal consumer financial laws. Historically, commercial banks, thrifts, and credit unions were the primary lenders and servicers of mortgage loans. Following the 2007–2009 financial crisis and subsequent revisions to regulatory bank capital requirements, banks reevaluated the benefits and costs of being in the mortgage lending market, as well as retaining mortgages and the right to service them. Since the financial crisis, some banks have exited or reduced their mortgage lending and servicing businesses. This development, among others, created an opportunity for nonbank servicers to increase their presence in the mortgage market. Nonbank issuers such as mortgage originators and servicers are not subject to the same comprehensive federal safety and soundness standards as banks. While banks offer a variety of financial products to consumers, nonbank servicers are generally involved only in mortgage-related activities and do not take deposits from consumers. Almost all HECMs are originated, owned, and serviced by nonbank entities:

Lenders. According to FHA, in fiscal year 2018, 54 lenders originated HECMs, including 49 nonbank entities and five banks.

Investors. As of the end of fiscal year 2018, six investors (all nonbank entities) and the Federal National Mortgage Association (Fannie Mae) owned roughly 92 percent of the privately owned (non-FHA-assigned) HECM portfolio, while the remaining 8 percent was owned by a mixture of bank and nonbank entities.

Servicers. Five nonbank entities serviced over 99 percent of the privately owned HECM portfolio as of the end of fiscal year 2018.
As previously noted, FHA has a contractor (also a nonbank entity) that services FHA-assigned HECMs.

Federal Entities Involved in Reverse Mortgage Oversight

A number of federal agencies have roles in overseeing the reverse mortgage market, including the following:

FHA. Insures HECMs and administers the HECM program, including issuing program regulations and enforcing program requirements. FHA supplements regulations through additional policies, procedures, and other written communications for the HECM program. For example, FHA officials said the agency utilizes its Single Family Housing Handbook, HECM handbook, and mortgagee letters to communicate changes about the HECM program. In 2013, Congress enacted a law that allowed FHA to make changes to HECM program requirements by notice or mortgagee letter in addition to regulation. Since then, FHA has made several policy changes to the HECM program through mortgagee letters.

CFPB. Supervises nonbank reverse mortgage lenders and servicers for compliance with, and enforces violations of, federal consumer financial protection laws. CFPB can also issue regulations under the federal consumer protection laws addressed specifically to protecting consumers considering reverse mortgages. Additionally, CFPB examines entities for compliance with federal consumer financial laws to obtain information about an institution’s compliance management systems and procedures and to detect and assess risks to consumers and markets. Further, CFPB collects consumer complaints regarding consumer financial products or services (including reverse mortgages) and educates consumers about their rights under federal consumer financial protection laws.

Federal depository institution regulators. These regulators monitor compliance with relevant laws and regulations, such as provisions of the Federal Trade Commission Act and the Truth in Lending Act, primarily through periodic examinations, for federally regulated lenders that originate HECMs.
Consumer Protections and Foreclosure Prevention Options

Several features and requirements of the HECM program provide consumer protections to borrowers. For example, borrowers must undergo preloan counseling, the program limits costs and fees lenders can charge, and lenders must provide certain disclosures. In addition, FHA has made several changes to the HECM program in recent years to help borrowers who have defaulted due to unpaid property charges. As previously discussed, if a HECM borrower does not pay his or her property charges, FHA regulations generally require the servicer to pay the property charges on the borrower’s behalf to help avoid a tax foreclosure by the local authority and protect the investor’s and FHA’s interest in the home. FHA regulations also allow servicers to charge certain fees once a loan is called due and payable. These are typically amounts related to attorney or trustee fees, property preservation, and appraisal fees during the foreclosure process. The payments and fees that servicers make on behalf of borrowers—referred to as servicer advances in this report—are added to the loan balance and accrue interest. In 2010, HUD’s Office of Inspector General reported that HUD was not tracking borrower defaults or servicer advances for the HECM program and made several recommendations to FHA. To address these recommendations, FHA took several steps. For example, in 2011, FHA stopped the practice of allowing servicers to defer foreclosing on loans that were in default due to unpaid property charges and issued a mortgagee letter addressing how to handle these loans. Additionally, in September 2012, FHA announced the launch of a new data system for the HECM program, the Home Equity Reverse Mortgage Information Technology (HERMIT) system, which would be used starting in October 2012. With this new system, FHA combined former legacy systems that had been used to collect insurance premiums, service FHA-assigned loans, and process claims.
According to FHA, adopting the HERMIT system allowed FHA to better monitor and track the HECM portfolio in real time and to automate insurance claim processing. Finally, FHA modified program features to help minimize potential borrower defaults and help strengthen borrower eligibility requirements. For example, in 2013, FHA reduced the amount of equity borrowers could generally withdraw during the first year from 100 to 60 percent of the principal limit. According to FHA, this change was designed to encourage borrowers to access their equity slowly over time rather than all at once to reduce risks to borrowers and FHA’s insurance fund. In 2015, the financial requirements for HECMs changed to include a financial assessment of the prospective borrower prior to loan approval. FHA began requiring HECM lenders to look at the prospective borrower’s credit history, income, assets, and financial obligations. Based on the results of the financial assessment, the lender may require a set-aside for the payment of property charges. Additionally, FHA made several program changes to help distressed HECM borrowers by allowing servicers to offer options to help borrowers delay or in some cases avoid foreclosure if they are behind on paying property charges. These foreclosure prevention options include repayment plans, at-risk extensions, and extensions for low-balance arrearage, as described later in this report. FHA also has taken steps to help nonborrowing spouses stay in their homes after a borrowing spouse dies by deferring repayment of the HECM as long as the nonborrowing spouse fulfills certain conditions. In these cases, the servicer can assign the HECM to FHA under what FHA refers to as the mortgagee optional election assignment process. 
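The 2013 first-year withdrawal change can be illustrated with a simplified calculation. This sketch applies only the 60 percent cap mentioned above; FHA's actual rules include details (such as the treatment of mandatory obligations) that are omitted here, and the function name is ours.

```python
# Simplified illustration of the 2013 policy change: borrowers
# could generally withdraw at most 60% of the principal limit in
# the first year, down from 100%. Exceptions and adjustments in
# FHA's actual rules are omitted.

def first_year_withdrawal_cap(principal_limit: float,
                              post_2013_rules: bool = True) -> float:
    share = 0.60 if post_2013_rules else 1.00
    return round(principal_limit * share, 2)

print(first_year_withdrawal_cap(200_000))                         # 120000.0
print(first_year_withdrawal_cap(200_000, post_2013_rules=False))  # 200000.0
```

The comparison shows the intended effect of the change: the same borrower draws $80,000 less equity in the first year, spreading access to funds over time.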
HECM Defaults Have Increased, and Use of Foreclosure Prevention Options Is Limited or Unknown

Death of the Borrower Is the Most Common Reason for HECM Terminations, but Defaults Have Increased in Recent Years

Our analysis of FHA data found that about 272,155 HECMs terminated from fiscal years 2014 through 2018. Over that period, the number of terminations rose from about 24,000 in fiscal year 2014 to a peak of roughly 82,000 in fiscal year 2016, before declining to about 60,000 in fiscal year 2018, as previously shown in figure 3. As shown in figure 4, death of the borrower was the most common reported reason for HECM terminations, followed by borrower defaults. The relative size of each termination category varied from fiscal years 2014 through 2018, with borrower defaults accounting for an increasing proportion of terminations in recent years. In fiscal year 2018, borrower defaults made up 18 percent of terminations. Specific results for all major termination categories over the 5-year period were as follows:

Death. About 34 percent of terminations (approximately 87,000 loans) were due to the death of the borrower. Borrower deaths ranged from roughly 29 percent to 40 percent of annual terminations over the 5-year period.

Default. About 15 percent of terminations (approximately 40,000 loans) were due to borrower defaults. As discussed in appendix IV, this percentage varied widely by location and was highest in Michigan (36 percent) and lowest in the District of Columbia (1 percent). About 29,000 defaults were for noncompliance with occupancy or residency requirements, about 11,000 were for nonpayment of property charges, and about 200 were for not keeping the property in good repair. The borrowers of these loans likely lost their homes through foreclosure or a deed-in-lieu of foreclosure. However, it is possible that some of these borrowers would have ultimately lost their homes even if they had not taken out a HECM. For example, as noted in CFPB’s 2012 report to Congress on reverse mortgages, some borrowers may have taken out a HECM to help pay off their traditional mortgage rather than as a way to pay for everyday expenses. In these cases, the money borrowers received from their HECMs may have helped them temporarily but may ultimately have been prolonging an unsustainable financial situation. In addition, some borrowers who did not meet occupancy or residency requirements may have permanently moved out of their homes—for example, to assisted living or nursing home facilities. Borrower defaults as a percentage of annual HECM terminations grew from 2 percent of terminations in fiscal year 2014 to 18 percent in fiscal year 2018. Noncompliance with occupancy requirements was the primary cause of defaults each year, but unpaid property charges represented a growing share. From fiscal years 2014 through 2018, property charge defaults as a percentage of all defaults grew from 26 percent to 45 percent, and property charge defaults as a percentage of all terminations grew from less than 1 percent to 8 percent.

Loan balance repaid. About 9 percent of terminations (approximately 23,000 loans) were due to the borrower repaying the loan balance. This category accounted for a declining share of terminations over the 5-year period, falling from 24 percent in fiscal year 2014 to 4 percent in 2018.

Refinanced. About 8 percent of terminations (about 20,000 loans) were due to the borrower refinancing into a new HECM. This category remained relatively stable over the 5-year period, accounting for about 5 percent to 10 percent of terminations each year.

Borrower moved or conveyed title. About 3 percent of terminations (approximately 8,000 loans) were due to the borrower either moving or conveying title to the property to someone else. The percentage of terminations in this category declined from 6 percent in fiscal year 2014 to 2 percent in fiscal year 2018.

Unknown.
For about 30 percent of terminations (roughly 78,000 loans), we were unable to readily determine a termination reason from FHA’s data. Over the 5-year period, this category accounted for over 25 percent of terminations each year and reached a high of 39 percent in fiscal year 2018. We discuss challenges related to determining termination reasons later in this report.

HECM Servicers Advanced Almost $3 Billion on Behalf of Borrowers for Unpaid Property Charges or Other Costs

For HECMs that terminated in fiscal years 2014 through 2018, servicers advanced almost $3 billion on behalf of borrowers for unpaid property charges or various other costs that are charged once a loan becomes due and payable. The advances increased from $508 million in fiscal year 2014 to a peak of $731 million in fiscal year 2016, before declining to $453 million in fiscal year 2018 (see fig. 5). This pattern aligns with the overall trend in terminations, which also peaked in fiscal year 2016. Over the 5-year period, advances for property charges made up 58 percent of the total. The remaining 42 percent consisted of advances for other costs, many of them foreclosure-related, such as attorney fees and appraisal costs. From fiscal years 2014 through 2018, HECM servicers advanced a total of $567 million on behalf of living borrowers who defaulted on their HECMs due to unpaid property charges. For these loans, the median advance was $7,007.

About One-Quarter of HECM Borrowers with Overdue Property Charges Received Repayment Plans, and Use of Other Foreclosure Prevention Options Is Limited or Unknown

From April 2015 (the effective date of FHA’s current repayment plan policy) through the end of fiscal year 2018, 22 percent of HECM borrowers with overdue property charges had received repayment plans, and FHA’s information on the use of other foreclosure prevention options was limited.
As previously noted, property charge defaults and issues surrounding nonborrowing spouses not being included on the mortgage have been long-standing problems in the HECM program. Since 2015, FHA has made program changes to allow servicers to offer different types of foreclosure prevention options to distressed HECM borrowers and nonborrowing spouses of deceased borrowers (see table 1). These options can help delay and, in some cases, avoid foreclosure. According to officials from HUD’s Office of General Counsel, HUD does not have the statutory authority to require servicers to provide HECM borrowers foreclosure prevention options. Our analysis of FHA data found that servicers’ use of selected foreclosure prevention options for HECM borrowers was limited or that FHA did not have readily available information to assess the extent of use, as follows:

Mortgagee optional election assignments. According to information generated by FHA, HECM servicers submitted 1,445 requests for mortgagee optional election assignments from June 2015 (when FHA made this option available) through September 30, 2018 (see table 2). In total, FHA approved roughly 70 percent (1,013) of the requests and denied the remaining 30 percent (432). According to FHA officials, the top two reasons for denying mortgagee optional election assignments were HECM servicers not meeting the deadline for electing to pursue the assignment and not meeting the deadline to initiate the assignment. FHA officials told us the third most common reason for denial was a nonborrowing spouse not submitting evidence of marketable title to the property or the legal right to remain in the property for life within required time frames.
With respect to the 432 denials, FHA provided information indicating that as of May 31, 2019, 79 percent (342) of the associated loans had not terminated; 14 percent (62 loans) terminated because the loan balance had been paid off; and the remaining 7 percent ended in foreclosure (22 loans), deed-in-lieu of foreclosure (four loans), or short sale (two loans). Estimating the universe of HECMs potentially eligible for mortgagee optional election assignments is difficult because nonborrowing spouses were not listed on loan documentation for HECMs originated prior to August 4, 2014. As a result, FHA does not know how many eligible nonborrowing spouses could have, but did not, apply for the mortgagee optional election assignment, or how many are potentially eligible to apply for it in the future. FHA officials told us they have relied on an industry association and HECM servicers to estimate how many nonborrowing spouses may be associated with pre-August 2014 HECMs. FHA officials told us they sent letters to borrowers with FHA-assigned HECMs that were originated prior to August 4, 2014, to inform them of the mortgagee optional election process and ask them to self-identify whether there was a nonborrowing spouse associated with their loan. FHA officials also noted they were drafting a similar letter for servicers to send to borrowers with HECMs not assigned to FHA. As of August 2019, the servicer letter was still in draft form, pending completion of an ongoing internal review of FHA’s mortgagee optional election assignment processes and the related time frames. FHA officials said once the ongoing review is complete, they anticipated that FHA would issue a new mortgagee letter with revised time frames that would afford both HECM servicers and borrowers more time to meet FHA requirements for mortgagee optional election assignments.

Repayment plans.
Our analysis of FHA data showed that 22 percent of borrowers with property charge defaults were granted a repayment plan from April 2015 (the effective date of FHA’s current repayment plan policy) through the end of fiscal year 2018. All five legal aid organizations we interviewed said the availability of repayment plans was a top concern. For example, for some of their clients, repayment plans were unavailable because the borrowers did not meet certain financial requirements. In contrast, representatives of the top five HECM servicers told us they generally do offer repayment plans when feasible to help borrowers delay or avoid foreclosure. Servicers we interviewed noted that while repayment plans can delay or avoid foreclosure, they are rarely successful in the long run, and borrowers in such plans often miss payments. Servicers said the same reasons that typically contribute to initial defaults also explain why repayment plans are rarely successful. For example, borrowers on limited incomes may struggle to pay increasing property tax and insurance costs or may fall behind on property charges when the death of a spouse reduces their income.

At-risk extensions. Our analysis of FHA data found that from April 2015 (the effective date of FHA’s at-risk extension policy) through the end of fiscal year 2018, about 2 percent of borrowers with property charge defaults received an at-risk extension. To grant an at-risk extension, FHA requires HECM servicers to provide valid documentation that the youngest living borrower is at least 80 years of age and has critical circumstances such as a terminal illness, long-term physical disability, or a unique occupancy need (for example, terminal illness of a family member living in the home). Representatives from one legal aid organization told us that some HECM servicers have straightforward requirements for the documentation borrowers must submit to obtain an at-risk extension, while others do not.
Representatives from another legal aid organization said that meeting FHA’s annual renewal requirement for at-risk extensions was challenging for some borrowers because they have to submit documentation to HECM servicers every year as they age and continue to struggle with serious health issues or disabilities.

Low-balance extensions. FHA officials told us they do not track how often HECM servicers use the option to delay calling a loan due and payable if the borrower has unpaid property charges of less than $2,000. Our analysis of FHA data on servicer advances found that approximately 8,800 HECMs that terminated in fiscal years 2014 through 2018 had unpaid property charges of less than $2,000 at the time of termination. Some of these HECMs may have been eligible for a low-balance extension when they terminated. Representatives from one legal aid organization said they represented a HECM borrower who was at risk of foreclosure for having 27 cents in unpaid property charges. HECM servicers told us they use the low-balance extension option to varying degrees. For example, representatives from one servicer said the servicer follows instructions from the entity that owns the HECM and, in some cases, the owners of the loan do not want to offer the low-balance extension to the borrower. In these cases the servicer calls the loan due and payable for any amount in unpaid property charges and initiates the foreclosure process in accordance with FHA regulations. Another HECM servicer told us it tries to use the low-balance extension every time a borrower owes less than $2,000 in unpaid property charges.

Weaknesses Exist in HECM Termination Data, Performance Assessment, and Portfolio Monitoring

FHA Lacks Comprehensive Data on Reasons for HECM Terminations

Since fiscal year 2013, FHA has used the HERMIT system to collect data on the servicing of HECMs, but the system does not contain comprehensive and accurate data about the reasons why HECMs terminate, a key servicing event.
According to the HERMIT User Guide, servicers should provide a reason in HERMIT when they terminate a HECM. However, as noted previously in figure 4, for about 30 percent of HECM terminations from fiscal years 2014 through 2018 (roughly 78,000 loans), we were unable to determine the reason for termination. Specifically, for these loans we could not identify in HERMIT any associated borrower death or default, or evidence that the borrower repaid, refinanced, moved, or conveyed title. Instead, these loans were coded as terminating for “other reasons” or coded based on how the debt was satisfied rather than an actual termination reason. The HERMIT User Guide provides a list of termination codes available in the system, but the list and guide have shortcomings that limit analysis of HECM terminations. First, the list includes codes servicers can use to indicate that a loan terminated for “other reasons,” but the guide does not specify what these other reasons are. However, servicers have been using the “other reasons” code increasingly over the past 5 years. We asked servicers how they used the “other reasons” code and found inconsistency in and uncertainty about its use. For example, servicers’ responses ranged from not using it at all, to using it when they did not intend to file an insurance claim with FHA, to not being sure under what circumstances they used it. Second, the list of termination codes consists of both reasons for termination and descriptions of how the debt was satisfied. As a result, the final status code of some loans in HERMIT shows only the way in which the debt was satisfied—for instance, a deed-in-lieu of foreclosure, foreclosure, or short sale. These codes could apply to terminations resulting from the borrower dying, defaulting, or moving and do not ultimately provide a specific reason for loan termination. FHA officials were unaware of any proxy variables that we could use to help identify the underlying termination reasons for these loans. 
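The classification problem described above can be sketched in code. This is a hypothetical illustration, not FHA’s actual code list or loan data: the status values, category names, and sample records below are invented to show why loans carrying only a debt-satisfaction code (or the “other reasons” code) cannot be attributed to a termination cause.

```python
# Illustrative sketch: HERMIT's final status codes mix true termination
# reasons with debt-satisfaction methods, so some loans cannot be
# attributed to a cause. All code values here are hypothetical.
from collections import Counter

TERMINATION_REASONS = {"borrower_death", "default", "refinance", "moved", "conveyed_title"}
SATISFACTION_METHODS = {"foreclosure", "deed_in_lieu", "short_sale", "payoff"}

def classify(status_code: str) -> str:
    if status_code in TERMINATION_REASONS:
        return "reason_known"
    if status_code in SATISFACTION_METHODS:
        return "method_only"   # records how the debt was satisfied, not why the loan terminated
    return "other_or_unknown"  # e.g., the "other reasons" code

# Hypothetical loan records, not real HERMIT data
loans = ["borrower_death", "foreclosure", "other", "default", "short_sale", "refinance"]
tally = Counter(classify(code) for code in loans)

# Loans tagged only with a satisfaction method or "other" are unattributable
unattributable = tally["method_only"] + tally["other_or_unknown"]
share = unattributable / len(loans)
```

In this toy sample, half the loans cannot be tied to a termination reason, mirroring (in exaggerated miniature) the roughly 30 percent of fiscal year 2014–2018 terminations for which no reason could be determined.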
The officials said the termination reasons are available on an individual loan basis in the HERMIT system but not in an extractable form. As discussed later in this report, FHA does not regularly track and report on HECM termination reasons, due partly to this system limitation. The limitations in FHA’s data are inconsistent with federal internal control standards, which require agencies to use quality information to achieve their objectives. To meet this internal control standard, agencies can obtain relevant data that are reasonably free from error and bias and evaluate sources of data for reliability. FHA’s annual report to Congress states that the HECM program helps seniors remain in their homes and age in place. However, without comprehensive and accurate data on HECM terminations, FHA does not have a full understanding of loan outcomes—information FHA and Congress need in order to know how well the HECM program and FHA’s policies are working to help seniors age in place.

FHA’s Performance Assessment Has Limitations

While FHA has taken steps to improve the performance of the HECM program in recent years, it has not incorporated key elements of performance assessment into its management of the program. We have previously reported that a program performance assessment contains three key elements: program goals, performance metrics, and program evaluations. Performance assessment can provide important information about whether, and why, a program is working well or not. Additionally, OMB Circular A-129 states that agencies must establish appropriate performance indicators for federal credit programs, such as the HECM program, and that such indicators should be reviewed periodically. It states further that agency management structures should clearly delineate accountability and responsibility for defining performance indicators and monitoring and assessing program performance.
We found limitations in FHA’s performance assessment of the HECM program, specifically a lack of performance indicators and recent program evaluations: Lack of HECM performance indicators. According to HUD’s strategic plan for fiscal years 2018–2022 and the agency’s most recent annual performance report, the HECM program falls under the strategic goal of advancing economic opportunity and the strategic objective of supporting fair, sustainable homeownership and financial viability. The strategic plan and annual performance report include some strategies for achieving this objective, such as modernizing FHA underwriting guidelines, lending standards, and servicing protocols to serve the needs of borrowers, protect taxpayers, and ensure the sustainability of FHA’s program. However, none of the six performance indicators associated with this strategic objective and discussed in the strategic plan or corresponding performance report are HECM-specific. Four of the indicators focus on FHA-insured forward mortgages. Another indicator focuses on construction of manufactured housing. The remaining indicator—maintaining a capital reserve ratio for FHA’s Mutual Mortgage Insurance Fund that meets or exceeds the statutory minimum requirement—encompasses both forward mortgages and HECMs but does not provide specific information about HECM loan outcomes, risk factors, or loan characteristics. Additionally, FHA’s annual reports to Congress on the financial status of the insurance fund contain multiple tables of HECM data but limited information on loan outcomes. For example, among other things, the fiscal year 2018 report provides the number of new HECM originations, the average age of new borrowers, the amount of HECM insurance claims paid, and estimates of the HECM portfolio’s capital position.
However, the report does not include other information that would provide insight into loan outcomes, such as the percentage of HECM terminations due to borrower defaults, the proportion of active HECMs with delinquent property charges, or the percentage of distressed HECM borrowers who have received foreclosure prevention options. Limited program evaluations. The last comprehensive evaluations of the HECM program were done in 2000 and 1995. Officials said they were in the planning phase for a new evaluation of the HECM program but had not set a start date and did not expect the evaluation to include an analysis of reasons for HECM terminations or the use of foreclosure prevention options for borrowers in default. Instead, the officials told us the evaluation would focus on the impact of an FHA policy change implemented in 2015 that requires prospective HECM borrowers to undergo a financial assessment to evaluate their ability to pay ongoing property charges. While financial assessments could help reduce tax and insurance defaults, and ultimately foreclosures, they only apply to new HECMs issued on or after the effective date of the policy (April 27, 2015) and are not relevant to other HECMs within the portfolio. Therefore, for most of the HECM portfolio, an equally important consideration is the impact of FHA’s policy changes that created foreclosure prevention options for distressed borrowers. As previously noted, borrower defaults have accounted for an increasing proportion of terminations in recent years, and in fiscal year 2018, borrower defaults made up 18 percent of terminations. Expanding the program evaluation to include the impact of foreclosure prevention options would provide a more complete picture of how well FHA is reducing defaults in the HECM portfolio and helping HECM borrowers. FHA officials acknowledged the need for more performance assessment of the HECM program. 
The officials said their recent focus has been on financial aspects of the program, in particular losses associated with insurance claims. According to the FHA Commissioner, a key challenge for the HECM program is that FHA has historically administered it without a designated program head. The 2000 program evaluation noted that lenders and servicers found it frustrating that FHA did not have one person with responsibility for the HECM program. Further, the 2000 program evaluation noted that the division of responsibility for the program fell across many offices and that it was hard to find senior managers with a sense of ownership for the HECM program. In January 2019, an economist from HUD’s Office of Policy Development and Research transferred to the Office of Housing (which includes FHA) to serve as a Senior Advisor to the Deputy Assistant Secretary for Single Family Programs, with a focus on the HECM program. Without more comprehensive performance indicators and program evaluations, FHA lacks information that could be useful for monitoring the effects of recent policy changes and may be missing evidence of the need for further program improvements. Additionally, in the absence of performance indicators and reporting, FHA and Congress lack insight into how well the HECM program is helping senior homeowners.

FHA’s Internal Reporting and Analysis for the HECM Portfolio Have Shortcomings

According to OMB Circular A-129, agencies must have monitoring, analysis, and reporting mechanisms in place to provide a clear understanding of a program’s performance. The circular says these mechanisms should be sufficiently flexible to perform any analysis needed to respond to developing issues in the loan portfolio. However, we found shortcomings in FHA’s internal reporting. We also found that FHA had not analyzed the implications of its foreclosure prioritization process for FHA-assigned loans. Internal reporting for the HECM program is limited.
Although FHA adopted the HERMIT system to improve oversight of the HECM portfolio, it has not used program data to regularly report key loan performance information—for example, HECM termination reasons, servicer advances, and use of foreclosure prevention options. FHA officials said they have been more focused on the analysis and reporting of claims and other financial data for the HECM program. However, according to OMB Circular A-129, effective reporting provides accurate, timely information on program performance, early warnings of issues that may arise, and analytics to drive decision-making. FHA has generated some reports from HERMIT to help oversee the HECM portfolio, but it has been slow to develop regular and comprehensive reporting mechanisms. FHA officials told us that while data on defaults and use of foreclosure prevention options have generally been available in HERMIT since 2015, FHA was unable to obtain reports on these topics until the summer of 2018 because of contract funding limitations. FHA officials said that starting in September 2018, FHA began receiving regular reports from its HERMIT system contractor on issues such as HECMs assigned to FHA; HECM origination, assignment, and termination activity by month; summary information on the number and dollar amount of HECMs originated each year; and HECMs with a default date. Additionally, around the same time, FHA requested and received ad hoc reports (one-time reports created for specific purposes) from the contractor that included spreadsheets of all active HECMs with a repayment plan and all active HECMs for which there was an identified nonborrowing spouse. FHA officials said the purpose of the reports generated from HERMIT is to help FHA better manage HECM program performance. 
However, our review of these regular and ad hoc reports found that many are lists of loans that meet certain criteria and do not provide summary statistics that could be used to readily identify patterns or trends in metrics, such as the number of or reasons for HECM terminations or use of different foreclosure prevention options. The reports require additional analysis to generate meaningful management information. According to OMB Circular A-129, graphics, tables, and trend analysis that compare performance over time and against expectations and other information can provide critical context for understanding program performance. Further, the circular says dashboards (easy-to-comprehend summaries of key quantitative and qualitative information) and watch lists are tools that can help all levels of the organization receive appropriate information to inform proactive portfolio management and ensure program decisions are informed by robust analytics. FHA’s lack of analysis and internal reporting on HECM termination reasons hampered the agency’s ability to respond to a 2017 Freedom of Information Act request about the number of and reasons for HECM foreclosures. FHA’s response contained data showing that over 99 percent of HECM foreclosures occurring from April 2009 through December 2016 resulted from the death of the borrower. However, FHA officials told us they subsequently looked more closely into the issue and redid the analysis using more reliable and updated information from January 2013 through December 2017. The revised analysis showed that 61 percent of foreclosures over that period were due to borrower deaths, 37 percent were due to borrower defaults, and 2 percent were due to conveyance of title. If FHA had regular and meaningful management information about HECM terminations, it could have initially responded to the 2017 request with more reliable information. 
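The gap between loan-level lists and management information can be illustrated with a minimal aggregation sketch: rolling a loan-level extract up into the kind of summary trend table the circular describes. The field names (`fiscal_year`, `reason`) and sample records are assumptions for illustration; this is not FHA’s reporting code.

```python
# Minimal sketch: aggregate a loan-level extract into a summary trend
# table (terminations per fiscal year, by reason). Field names and the
# sample records are hypothetical.
from collections import defaultdict

def termination_trend(loans):
    """Count terminations by (fiscal_year, reason) from loan-level records."""
    table = defaultdict(int)
    for loan in loans:
        table[(loan["fiscal_year"], loan["reason"])] += 1
    return dict(table)

# Hypothetical loan-level extract, the kind of list-style report described above
extract = [
    {"fiscal_year": 2017, "reason": "death"},
    {"fiscal_year": 2017, "reason": "default"},
    {"fiscal_year": 2018, "reason": "default"},
    {"fiscal_year": 2018, "reason": "default"},
]

trend = termination_trend(extract)
```

A table like `trend` (here, defaults rising from 1 in 2017 to 2 in 2018) is the kind of at-a-glance metric that dashboards and watch lists would surface, rather than requiring managers to re-analyze raw loan lists.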
FHA officials told us that HERMIT is an accounting system to process HECM claims and has limitations as a broader portfolio monitoring tool. However, our analysis of HERMIT data and of reports generated by FHA’s HERMIT contractor suggests that the system can be used for this broader purpose. Without more robust program analysis and internal reporting, FHA is not well positioned to detect and respond to any emerging issues and trends in the HECM portfolio. As previously discussed, these trends include growing numbers of HECMs entering default and an increasing number of loans being assigned to FHA. FHA has not evaluated its foreclosure prioritization process for FHA-assigned HECMs. As previously noted, FHA-assigned loans are a growing part of the HECM portfolio. According to FHA officials, the agency generally does not foreclose on borrowers whose HECMs have been assigned to FHA and who are in default due to unpaid property charges. According to FHA, the properties associated with these loans are typically occupied. FHA officials said the agency prioritizes processing foreclosures on assigned HECMs for which the property is vacant (because the borrower passed away, for example). FHA officials said that prioritizing foreclosure processing for those loans and delays by the Department of Justice in completing those foreclosures have effectively resulted in few foreclosures on assigned loans with property charge defaults. However, FHA regulations state that servicers generally must initiate foreclosure within 6 months of calling a loan due and payable due to a death or default (if the borrower or heirs have not yet paid off the debt). FHA’s prioritization of processing vacant properties for foreclosure and generally not foreclosing on FHA-assigned HECMs with a property charge default raises issues and potential risks that FHA has not fully analyzed.
First, defaulted borrowers whose loans are privately owned (that is, have not been assigned to FHA) face a greater risk of foreclosure than defaulted borrowers with FHA-assigned loans. According to a representative from one HECM servicer we interviewed, FHA’s practice is unfair because it treats HECM borrowers inconsistently. Second, FHA’s foreclosure prioritization process may create a financial incentive for HECM borrowers with assigned loans to not pay their property charges, which, in turn, can have negative financial consequences for FHA, localities, and taxpayers. For example, because FHA does not foreclose on assigned loans in tax and insurance default, FHA advances tax and insurance payments on behalf of the borrower and adds them to the loan balance to secure and maintain its first-lien position on the mortgaged property. This makes it more likely that the loan balance will increase to a point that it exceeds the value of the home. When the borrower dies or vacates such a property, FHA may not be able to recoup the loan balance in a foreclosure sale, resulting in a loss to the insurance fund. As of August 2019, FHA had not evaluated the various risks of generally not foreclosing on assigned HECMs with property charge defaults. As a result, FHA does not know how its process for prioritizing foreclosures for assigned loans affects the HECM portfolio, HECM borrowers, neighborhoods, and FHA’s insurance fund.

FHA’s Oversight of Servicers and Collaboration on Oversight between FHA and CFPB Are Limited

FHA Has Not Performed On-Site Reviews of HECM Servicers for More Than 5 Years and Lacks Current Review Procedures

FHA’s oversight of HECM servicers is limited.
FHA requires HECM servicers, among other things, to inform borrowers of their loan status, including any conditions resulting in a loan becoming due and payable; to notify struggling borrowers of the availability of housing counseling and foreclosure prevention options; to inform surviving nonborrowing spouses of conditions and requirements for the deferral period; and to manage the transfer of loan servicing from one entity to another. These requirements are identified in FHA regulations, handbooks, and mortgagee letters. If properly implemented, these requirements can help ensure that HECM borrowers and nonborrowing spouses are aware of their mortgage responsibilities, options for resolving situations that can result in foreclosure, and who to contact with loan servicing questions. FHA officials said they maintain communication with HECM servicers, including through an industry working group, about their compliance with FHA requirements. The officials also noted that FHA conducts reviews of due and payable requests and insurance claims, which can include checks for some of the requirements discussed above. However, FHA has not performed comprehensive on-site reviews of HECM servicers’ compliance with program requirements since fiscal year 2013 and does not have current procedures for conducting these reviews. The lack of on-site reviews of HECM servicers is inconsistent with OMB requirements for managing federal credit programs. OMB Circular A-129 states that agencies should conduct on-site lender and servicer reviews biennially where possible and annually for lenders and servicers with substantial loan volumes or those with other risk indicators such as deterioration in their credit portfolio, default rates above acceptable levels, or an abnormally high number of reduced or rejected claims. The purpose of these reviews is to evaluate and enforce lender and servicer performance and identify any noncompliance with program requirements.
The circular encourages agencies to develop a risk-rating system for lenders and servicers to help establish priorities for on-site reviews and to monitor the effectiveness of required corrective actions. The circular also says that agencies should summarize review findings in written reports with recommended corrective actions. FHA previously conducted on-site reviews of HECM servicers. However, according to agency data, FHA has not performed on-site reviews since fiscal year 2013. From fiscal years 2010 through 2013, FHA’s Quality Assurance Division (a component of the Office of Lender Activities and Program Compliance) conducted 14 on-site reviews of HECM servicers (see table 3). These reviews examined compliance with FHA servicing requirements and included detailed reviews of samples of loans. FHA provided us three examples of HECM servicing reviews conducted in fiscal year 2013. While not representative of all reviews, the three reviews identified multiple violations of FHA requirements, as follows: Quality control plans. Two of the three reviews found that the servicers’ quality control plans—an internal control mechanism to help ensure compliance with FHA requirements—were missing required elements. For example, one review found that the servicer’s plan lacked 13 required elements, including those intended to ensure compliance with fair lending laws and immediate reporting of fraud or other serious violations. Another review found deficiencies with the servicer’s plan, including in the areas of customer service, servicing transfers, and fees and charges. Communication with borrowers. In these same two reviews, FHA found that the servicers did not always provide borrowers with a designated contact person or timely and accurate information about their loan status. For both servicers, FHA’s reviews of files for a sample of active loans found no evidence that the servicer had provided the borrower a contact person to handle inquiries. 
FHA requires servicers to designate for borrowers a contact person knowledgeable about servicing and provide the name of the person annually and whenever the contact person changes. Additionally, both reviews found that the servicers’ annual loan statements to borrowers were missing critical information, such as the net principal limit (total loan funds available), and that the servicers did not provide borrowers with statements after every loan disbursement, as required. Filing claims. In two of the three reviews, FHA found deficiencies in the servicers’ filing of insurance claims. For example, in one review, FHA identified multiple cases where the servicer submitted claims that were greater than the amounts warranted, including excess attorney and appraisal fees, property preservation and protection expenses, and interest costs. In another review, FHA found numerous instances where the servicer missed various deadlines—including for submitting claims, commencing foreclosure, and obtaining appraisals—and therefore was not entitled to the full claim amounts it received. Loan disbursements. One of the three reviews found numerous instances in which the servicer did not respond to borrowers’ requests for payment plan changes within the required time frame of 5 business days, and therefore did not make timely loan disbursements to borrowers. FHA required these servicers to take corrective actions, including updating quality control plans, revising policies and procedures, reimbursing FHA for unwarranted claim amounts, indemnifying FHA for losses on a loan, and paying late charges to borrowers who did not receive timely loan disbursements. 
FHA has the option of referring violations of FHA requirements to HUD’s Mortgagee Review Board, which can take administrative actions such as issuing letters of reprimand, suspending or withdrawing approval to participate in FHA programs, entering into settlement agreements to bring an entity into compliance, and imposing civil money penalties. FHA officials said they had not referred any HECM servicers to the board as a result of findings from on-site reviews. According to FHA’s current Director of the Quality Assurance Division, under previous leadership, the division suspended on-site reviews of HECM servicers after fiscal year 2013 because of servicers’ concerns about the clarity and consistency with which FHA was conducting the reviews and applying enforcement remedies. He said the Quality Assurance Division had intended to revise its guidance for conducting the reviews and then resume them, but the effort stalled during a change in leadership. The current Director said he was not aware that HECM servicing reviews had been suspended until the fall of 2017, when the division began targeting on-site reviews for fiscal year 2018 and noticed that HECM servicers were not included in the prior year’s targeting methodology. The lack of recent HECM servicer reviews is problematic for a number of reasons. First, as previously noted, the number of HECM borrowers defaulting on their loans has grown in recent years. As a result, knowing whether servicers are providing borrowers with accurate and timely communications about their mortgage obligations and the status of their loans has become increasingly critical. Second, FHA has recently made program changes and implemented foreclosure prevention options, such as at-risk extensions and mortgagee optional election assignments, to help struggling borrowers and nonborrowing spouses delay or avoid foreclosure.
But FHA does not know how effectively servicers inform borrowers of these options and use these tools due to its lack of oversight. Third, as discussed earlier, the majority of HECM servicers are nonbank entities that may pose risks because they are not subject to the same comprehensive federal safety and soundness regulations as banks and rely on funding sources, such as lines of credit, that may be less stable than deposits. The Director of the Quality Assurance Division said FHA plans to begin conducting HECM servicer reviews in fiscal year 2020 but will first need to revise its procedures for reviewing HECM servicers, which were last updated in 2009. However, the Director told us the division decided not to develop criteria for selecting HECM servicers for review. Instead, he said FHA plans to review all HECM servicers with significant portfolios at least once every 3 years, starting with the three servicers that account for 96 percent of the HECM portfolio. While FHA’s plan to review HECM servicers with significant portfolios captures one aspect of portfolio risk (loan volume), it does not account for other risk indicators that OMB Circular A-129 says agencies should consider. The circular also encourages agencies to develop risk-rating systems that incorporate these indicators. While the current HECM servicing market is dominated by a small number of companies, the ability to prioritize on-site reviews based on risk ratings will be important if the market becomes less concentrated in the future. Additionally, some HECM servicers may warrant review more frequently than once every 3 years if their business volume or performance poses substantial risks to FHA or to borrowers. FHA’s plans do not account for these contingencies. 
CFPB Conducts Oversight of Reverse Mortgage Servicers, but It Has Not Completed Steps to Share Examination Results with FHA

CFPB oversees reverse mortgage servicers through examinations designed, among other things, to identify whether servicers engage in acts or practices that violate federal consumer financial laws. CFPB issued its Reverse Mortgage Examination Procedures in 2016 and began conducting examinations in 2017. CFPB’s procedures include reviewing servicers’ compliance with the Real Estate Settlement Procedures Act of 1974 and its implementing regulations (which, among other things, contain requirements for notifying borrowers of servicing transfers, responding to borrowers’ written information requests and notices of error, and disclosures relating to force-placed insurance); the Truth in Lending Act and its implementing regulations (which impose requirements on servicers governing the use of late fees and delinquency charges, provisions for payoff statements, and disclosures regarding rate changes for adjustable-rate mortgages); and other consumer protection laws. Additionally, CFPB’s procedures include a review of whether a HECM servicer is following selected elements of FHA’s HECM program requirements. For example, CFPB’s examiners are directed to determine whether information provided to borrowers about life expectancy set-aside accounts (an FHA requirement) is clear, prominent, and readily understandable, and whether the borrower incurred penalties or unnecessary charges because the servicer failed to make timely disbursements of set-aside funds for insurance, taxes, and other property charges. CFPB examiners also are directed to determine whether the servicer improperly referred a HECM to foreclosure after the death of a borrower, such as when an eligible nonborrowing spouse still occupies the home.
If CFPB’s reverse mortgage examinations identify violations, CFPB may require the examined entity to take corrective actions, which are recorded in the examination results as matters requiring attention. CFPB examinations of reverse mortgage servicers have found deficiencies in monitoring of servicing actions, compliance with consumer protection laws, and communications with consumers. For example, CFPB reported in the March 2019 edition of its Supervisory Highlights that one or more reverse mortgage servicing examinations found cases where the servicer did not provide the heirs of deceased borrowers a complete list of the documents needed to evaluate their case for a foreclosure extension. (Extensions can give heirs additional time to sell or purchase the property and delay or avoid foreclosure.) As a result, in some instances, one or more servicers foreclosed rather than seeking a foreclosure extension from FHA. According to CFPB, in response to the examinations, one or more servicers planned to improve communications with borrowers’ heirs, including specifying the documents needed for a foreclosure extension and the relevant deadlines. CFPB officials said they plan to continue examining reverse mortgage servicers. In addition to conducting examinations and issuing matters requiring attention, CFPB officials said the bureau has other options—including issuing warning letters and taking enforcement actions—to stop unlawful practices or promote future compliance by supervised entities. Warning letters advise companies that certain practices may violate federal consumer financial law. Enforcement actions are legal actions against an entity initiated through federal district court or by an administrative adjudication proceeding. CFPB officials told us the bureau had not issued any warning letters or enforcement actions against HECM servicers as of August 2019. 
While CFPB has examined reverse mortgage servicers and plans to continue doing so, CFPB officials said the bureau and FHA do not have an agreement in place to share supervisory information, which inhibits sharing of examination results. Information-sharing agreements may address topics such as what and how information will be shared and handling of sensitive information. CFPB officials said that an agreement with FHA would be needed to ensure that supervisory information in the bureau’s examinations is kept confidential. Under the Dodd-Frank Wall Street Reform and Consumer Protection Act, CFPB must share results of the examination of a supervised entity with another federal agency that has jurisdiction over that entity, provided that CFPB received from the agency reasonable assurances as to the confidentiality of the information disclosed. In addition, in previously issued work, we noted that interagency collaboration can serve a number of purposes, including, among other things, policy development, oversight and monitoring, and information sharing and communication. CFPB officials said CFPB and FHA had taken initial steps in 2017 toward developing an information-sharing agreement. However, as of August 2019, an information-sharing agreement had not been completed. CFPB officials told us there were existing ways for the two agencies to share examination findings, but that an information-sharing agreement would facilitate the process. CFPB officials said developing information- sharing agreements can be a lengthy process and that both agencies had other competing priorities. However, because of the limited information sharing between CFPB and FHA, FHA is not benefiting from oversight findings about servicers it could rely on to help implement the HECM program. Having this information is particularly important given that FHA does not comprehensively review HECM servicers itself and CFPB’s examinations address a number of FHA requirements. 
Access to CFPB’s examination results could enhance FHA’s oversight of HECM servicers and potentially help it respond to consumer protection issues facing HECM borrowers.

CFPB Collects and Analyzes Consumer Complaints on Reverse Mortgages, but FHA Does Not Use All Available Data

CFPB Has Received About 3,600 Reverse Mortgage Complaints since 2011

CFPB collects, analyzes, and reports on consumer complaints related to reverse mortgages. The bureau began collecting reverse mortgage consumer complaints in December 2011 and has collected about 3,600 complaints since then. CFPB collects complaints through an online form on its website, as well as via email, mail, phone, fax, or referral from another agency. CFPB’s authority to collect complaints comes from the Dodd-Frank Wall Street Reform and Consumer Protection Act, which states that one of the bureau’s primary functions is collecting, investigating, and responding to consumer complaints. CFPB officials told us the bureau uses consumer complaints as part of its criteria for selecting entities to examine, including reverse mortgage servicers, and to inform its educational publications. For example, in June 2015, CFPB released a report on reverse mortgage advertising and consumer risks. In August 2017, CFPB released an issue brief on the costs and risks of using a reverse mortgage to delay collecting Social Security benefits. In February 2015, CFPB issued a report on reverse mortgage consumer complaints it received from December 2011 through December 2014. CFPB found that consumer complaints indicated frustration and confusion over the terms and requirements of reverse mortgages. CFPB also found that many complaints were about problems with loan servicing. For example, some consumers complained that they were at risk of foreclosure due to nonpayment of property taxes or homeowners insurance and that they faced obstacles when trying to prevent a foreclosure.
CFPB officials told us they did not currently have plans to publish additional reports on reverse mortgage complaints, but that CFPB would continue to produce educational materials on reverse mortgages and internally review the data on a routine basis. For this report, we performed a high-level analysis of roughly 2,500 reverse mortgage complaints received by CFPB from calendar years 2015 through 2018. We analyzed patterns in the number of complaints by year, state, submission method, and company.

By year. Complaint volumes varied across the 4 years, with the most complaints received in 2016 and the least received in 2018 (see table 4).

By state. The states with the most complaints were California (accounting for 16 percent of reverse mortgage complaints), Florida (11 percent), New York (8 percent), and Texas (7 percent). These states are among the most populous, and three of them (California, Florida, and Texas) also had the greatest numbers of HECMs.

By submission method. A majority of reverse mortgage complaints (56 percent) were submitted through CFPB’s website. The remaining complaints were submitted through referrals to CFPB from other agencies (22 percent), by phone (12 percent), by postal mail (7 percent), and by fax (3 percent). Compared to the percentage of all types of mortgage complaints filed during the 4-year period, the percentage of reverse mortgage complaints filed through the website (56 percent) was lower than the corresponding percentage for complaints about all types of mortgages (67 percent). Representatives from legal aid organizations representing HECM borrowers said that reverse mortgage consumers may be less likely to file a complaint through a website because of limitations sometimes related to aging—for example, lack of internet access or computer skills.
Additionally, representatives from three of the five organizations said seniors may suffer from health or capacity issues, such as hearing, vision, or memory loss, that may make it difficult for them to file or follow up on a complaint. For these reasons, seniors may not be submitting complaints through CFPB’s website and seniors’ complaints about reverse mortgages may be underreported in general.

By company. Companies that were the subject of reverse mortgage complaints included both lenders and servicers. From 2015 through 2018, five companies were the subject of more than 100 complaints each, ranging from a low of 116 to a high of 506. Together, these five companies accounted for 61 percent (1,509) of the reverse mortgage complaints CFPB received. Additionally, one company received the most complaints in 4 out of the 5 years reviewed.

We also conducted a more detailed analysis of a random, generalizable sample of 100 consumer complaint narratives from among the 2,472 total reverse mortgage complaints CFPB received in calendar years 2015 through 2018. The purpose of this analysis was to identify patterns in consumer-described issues about reverse mortgages. We created issue categories by reading the consumer narratives. Figure 6 shows the estimated percentage of reverse mortgage complaints received by CFPB over the 4-year period by consumer-described issue categories, based on our sample of 100 complaint narratives. Among the largest consumer-described issue categories were foreclosures; poor communication from lenders or servicers; problems at loan origination; estate management; and unfair interest rates, fees, or costs.

Being at risk of foreclosure or in foreclosure. The largest consumer-described issue category (47 percent) involved consumers (or someone complaining on behalf of the consumer) who said they were at risk of foreclosure or in the foreclosure process.
For example, some consumers said they or the borrower they represent had received a notice of default, were in due and payable status, or were at risk of foreclosure. Some consumers sought help in preventing foreclosure or felt they were wrongly being foreclosed on. In 16 of the 47 complaints about being at risk of or in foreclosure, consumers also cited concerns about property taxes, insurance, or other property charges.

Poor communication on a servicing or lending issue. The second largest consumer-described issue category (42 percent) involved complaints about poor communication on a reverse mortgage servicing or lending issue. These complaints included concerns about a lack of communication or communications that were unclear or unresponsive to the consumer’s needs. Complaints in this category often overlapped with those about being at risk of or in foreclosure. For example, some of these complaints included consumers’ concerns that they had not received information about the status of or reason for a possible foreclosure from their servicer or did not get responses to their inquiries.

Loan origination issues. The third largest complaint category involved problems occurring at loan origination (29 percent). These complaints included consumers’ concerns that the amount of funds available from their reverse mortgage was less than expected or that interest rates or fees were not disclosed or explained to them. The complaints also included cases where the adult children of borrowers said they felt the lender took advantage of their parents.

Estate-management issues. Twenty-seven percent of consumer complaints were about estate management issues. Complaints involving estate management were often submitted by deceased borrowers’ families or heirs. In some cases, heirs said that they were unable to get information about the status of the reverse mortgage.
In other cases, the heirs said that because of the reverse mortgage, they were at risk of losing the home, which was also their place of residence.

Unfair interest rates, fees, or costs. Twenty-seven percent of consumer complaints were about being charged higher-than-expected costs, fees, or interest. For example, in a few complaints, consumers said that their servicers required them to pay for insurance products (for example, flood insurance) that they felt were not needed.

According to CFPB officials, the bureau (1) refers consumer complaints about financial products and services to the companies the complaints are about or other federal regulators with supervisory jurisdiction over those companies or (2) makes complaint information available to other federal agencies with jurisdiction over the relevant product or service. CFPB officials said the bureau does not currently refer reverse mortgage complaints to FHA; however, they told us reverse mortgage complaints are available to FHA through CFPB’s public website and through a secure portal FHA can access that has more data available than on the public website.

FHA Does Not Analyze Data on Consumer Complaints to Help Inform HECM Program Policies

FHA collects and records inquiries and complaints about HECMs and, as previously mentioned, has access to CFPB data on reverse mortgage complaints. However, FHA does not use its inquiry and complaint data to help inform HECM program policies and oversight, and the way data are collected does not produce quality information for these purposes. Additionally, FHA has not leveraged CFPB’s complaint data for HECM program oversight. Federal internal control standards state that agencies should use quality information to achieve the entity’s objectives, including using relevant data from reliable internal and external sources. Additionally, in prior work we identified practices to enhance collaboration across agencies, including leveraging agency resources.
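GAO describes the 100 complaint narratives analyzed above as a random, generalizable sample of the 2,472 reverse mortgage complaints received in 2015 through 2018. As a rough illustration of the sampling error such estimates carry (this is not GAO's actual estimation method), a 95 percent confidence interval for an estimated proportion can be sketched with a normal approximation and a finite population correction:

```python
import math

def proportion_ci(p_hat: float, n: int, N: int, z: float = 1.96):
    """Approximate 95% CI for a proportion from a simple random sample.

    Uses a normal approximation with a finite population correction:
    n observations drawn from a finite population of N. Illustrative
    only; GAO's actual estimation method may differ.
    """
    fpc = math.sqrt((N - n) / (N - 1))          # finite population correction
    se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc
    return (p_hat - z * se, p_hat + z * se)

# e.g., 47 of 100 sampled narratives, from a population of 2,472 complaints
low, high = proportion_ci(0.47, 100, 2472)
```

For the largest category (47 percent), this sketch yields an interval of roughly 37 to 57 percent, which is one reason point estimates from small samples should be read with caution.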
According to agency officials, FHA’s two main methods for collecting customer inquiries and complaints are hotlines operated by the agency’s National Servicing Center and the FHA Resource Center. Historically, the National Servicing Center was FHA’s primary method for collecting inquiries and complaints about the HECM program. From calendar years 2015 through 2018, the National Servicing Center received about 105,000 HECM-related calls. During this same period, the FHA Resource Center received 147 HECM-related calls. In April 2019, the FHA Resource Center became the primary entity for collecting, recording, and responding to all HECM-related calls. FHA officials told us they transferred these responsibilities from the National Servicing Center to the FHA Resource Center to help improve call management. While this change could help improve customer service, it would not fully resolve limitations we found in FHA’s approach to collecting and recording HECM inquiries and complaints that diminish the usefulness of the information for program oversight. These limitations include the following:

Information is not suitable for thematic analysis. Neither the National Servicing Center nor the FHA Resource Center collects call information in a way that would allow FHA to readily analyze the data for themes. For example, neither center reliably differentiates between inquiries and complaints—a potentially important distinction for determining appropriate agency-level responses (for example, creating informational materials to address frequently asked questions from borrowers or investigating problematic servicing practices after repeated complaints). Additionally, while both centers collect data on the reason for calls, neither did so in a systematic way that would allow FHA to readily determine how frequently issues are being raised. For example, neither center’s data system contained standardized categories or menus with options for recording reasons for calls.
As a result, the FHA Resource Center’s data from 2015 through 2018 contained more than 100 separate reasons for 147 HECM-related calls. Some of the reasons the center recorded were too specific (for example, a property address or a case number) to be useful for identifying themes, while others were so similar that they did not provide meaningful distinctions (but could be combined into fewer, potentially more useful categories). We noted similar limitations in the National Servicing Center’s data, which included ambiguous call reasons such as “history” and “documents,” as well as categories that could be collapsed; these issues hinder thematic analysis.

Customer type is not recorded. The National Servicing Center, which received the large majority of HECM-related calls to FHA, did not record information on the type of customer that made the call. National Servicing Center guidance for staff says customers include borrowers, nonprofit organizations, government entities, real estate brokers and agents, title companies, attorneys, lenders and servicers, and HUD employees, but its data system does not include these categories. Information on customer type could be useful in identifying issues facing different populations of callers and could help FHA tailor strategies for addressing their concerns. In contrast, the FHA Resource Center’s data system does include categories for customer type for the smaller number of HECM-related inquiries and complaints it received. Because the FHA Resource Center’s system is now FHA’s primary repository for new HECM-related calls, information on customer type should be available for future inquiries and complaints. However, this information is not available for the bulk of HECM-related calls FHA received in prior years. FHA officials said the agency uses customer complaint and inquiry data to improve customer service.
For example, FHA officials said the National Servicing Center monitors calls on a daily basis to ensure that prompt responses are provided. Similarly, FHA officials said they review call data monthly to identify training needs of servicers or contractors and potential process changes to improve customer experience with the call process. However, FHA does not analyze data for other purposes that could enhance program oversight, such as determining which HECM servicers and lenders receive the most complaints, targeting entities for on-site reviews, or identifying topics that may need additional borrower education. FHA also does not use CFPB’s consumer complaint data to inform management and oversight of the HECM program, even though some of the information could be useful to the agency. For example, according to CFPB’s complaint data for 2015 through 2018, approximately 6 percent of reverse mortgage complaints were about FHA’s servicing contractor. FHA officials said they do not review CFPB’s complaint data because they believe the data are too limited to be useful and because they have concerns about CFPB’s controls over data integrity. However, as our analysis shows, CFPB’s data can be used to identify consumer concerns—such as difficulties avoiding or navigating foreclosure or problems communicating with servicers—that may merit additional attention by FHA. Additionally, CFPB’s Office of Inspector General recently reviewed CFPB’s management controls for the Consumer Complaint Database and did not identify major data integrity issues that would preclude use of the data for general oversight purposes. Periodically analyzing CFPB consumer complaint data and internally collected consumer complaint data could help FHA to detect and respond to consumer protection issues regarding HECMs. 
Housing Market Conditions, Exit of Large Bank Lenders, and Policy Changes Help Explain the Decline in Reverse Mortgages since 2010

Since 2000, the take-up rate—the ratio of HECM originations to eligible senior homeowners—has been limited (see fig. 7). This rate, which provides an indication of how popular HECMs are among the population of senior homeowners, has not reached 1 percent and has fallen in recent years. In addition, since calendar year 2010, the volume of HECM originations has declined and is about half of what originations had been at their peak. For example, in calendar years 2007–2009, more than 100,000 new HECMs were originated each year, compared with roughly 42,000 in calendar year 2018. The relatively high homeownership rate and low retirement savings of U.S. seniors suggest that reverse mortgages could be a way for many older Americans to tap their home equity and supplement retirement income. However, the popularity of reverse mortgages has declined in recent years for a number of possible reasons. We developed an econometric model to examine the relationship between HECM take-up rates and a number of explanatory variables. For additional information and detailed results from our econometric model of factors associated with HECM take-up rates, see appendix II. Among other factors, our model results indicate that house price changes, home equity, and prior use of other home equity lending products were statistically significant (at the 1 percent level) in explaining the decrease in HECM take-up rates since 2010.

Changes in house prices. The decline in take-up rates may reflect lower house prices, which have limited the number of households with sufficient home equity (as a percentage of home value) to benefit from a HECM.
Our model estimated that, controlling for other factors, take-up rates were higher when house price growth was large and there was a history of house price volatility compared to either relatively low house price appreciation or stable house prices. This result is consistent with senior homeowners using reverse mortgages to insure against house price declines. For example, researchers have noted that in states where house prices are volatile and the current level is above the long-term norm, seniors anticipate future reductions in house prices and lock in their home equity gains by obtaining a reverse mortgage.

Home equity and prior home equity borrowing. Additionally, we found that controlling for other factors, take-up rates were higher where home equity (house value minus any mortgage debt) was high. In these cases, senior homeowners tap into their high home equity to help supplement income with proceeds from the HECM. Further, we found that among seniors who had previously used other home equity lending products, such as home equity loans, take-up rates were high. This result is consistent with seniors using HECMs to pay off these loans.

Academics and industry experts have also noted possible reasons why the popularity of reverse mortgages is limited. For example, senior homeowners can tap their home equity by other means, such as home equity loans, home equity lines of credit, and cash-out refinancing. Some of these options may be less expensive than reverse mortgages. Seniors can also downsize—sell their current home and buy or rent a less expensive one—and keep the difference to supplement retirement savings. Seniors have other ways to supplement their retirement income and age in place—for example, one academic noted that some seniors rent out rooms in their homes, potentially using online marketplaces such as Airbnb.
Additionally, our literature review and interviews with academics identified other factors that may have contributed to limited interest in reverse mortgages, including the following:

Exit of large bank lenders. As previously noted, banks, thrifts, and credit unions were historically the primary lenders and servicers of mortgage loans. Following the 2007–2009 financial crisis and subsequent revisions to regulatory bank capital requirements, banks reevaluated the benefits and costs of being in the mortgage lending market, as well as retaining mortgages and the right to service them. Today, the reverse mortgage market is dominated by a relatively small number of nonbank entities. The exit of large, well-known lenders, such as Bank of America and Wells Fargo, from the HECM market created opportunities for smaller nonbank lenders to enter the market. According to an academic we spoke with, in addition to new capital requirements, large banks may have exited the market partly out of concern that they risked damage to their reputations if they foreclosed on seniors who defaulted on their HECMs. Additionally, a 2018 survey of lenders found a variety of reasons why lenders have stopped originating HECMs, including potential reputation risk and concerns about HECMs being a distraction from their forward mortgage business. Although the HECM market is currently served by several nonbank lenders, their smaller scale, limited access to capital, and limited name recognition may limit their ability to reach more potential borrowers.

FHA policy changes to the HECM program. FHA has made several policy changes in recent years to help stabilize the financial performance of the HECM portfolio and strengthen financial criteria for HECM borrowers. Although many of the HECM policy changes introduced since 2010 were intended to minimize program losses, they also may have reduced take-up rates. For example, in 2010 FHA reduced the amount of money a borrower can get from a HECM.
Some academics we interviewed said reductions in the loan amounts that borrowers can receive likely reduced demand for HECMs. In 2015, FHA changed financial requirements for HECMs to include a financial assessment of the prospective borrower prior to loan approval. Some academics said these changes made other home equity extraction options that already had similar requirements more competitive with HECMs.

Consumers’ misunderstanding and product complexity. A 2013 survey of U.S. homeowners aged 58 and older revealed a lack of knowledge of reverse mortgages. The survey found that awareness of reverse mortgages is high, but knowledge of mortgage terms is limited. Additionally, the survey found that respondents perceived reverse mortgages to be fairly complex.

Consumers’ perception of the product. Academics we spoke with told us that consumers’ negative perception of reverse mortgages likely has a negative influence on take-up rates. For example, three academics elaborated that consumers build their perception of the product based on the industry’s marketing and advertising, which includes television commercials with celebrity spokespeople that may appeal to individuals facing economic hardship. Additionally, a 2016 survey of Americans aged 55 to 75 found that many respondents had reservations about reverse mortgages, including that they are often considered a financial tool of last resort. For example, only 27 percent of survey respondents stated that, in general, it was better to use a reverse mortgage earlier in retirement as opposed to using it as a last resort.

Relatively high origination costs and fees. HECMs also may be unpopular with borrowers because they can be more expensive than other home equity lending products, such as home equity lines of credit. For example, HECM borrowers are charged various fees, such as the up-front insurance premiums that FHA charges as compensation for its insurance guarantee and origination fees lenders charge.
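The charges described in this report follow a simple formula: a 2 percent up-front insurance premium on the mortgage's maximum claim amount, plus an origination fee of the greater of $2,500 or 2 percent of the first $200,000 of the maximum claim amount plus 1 percent of the excess, with the fee capped at $6,000. A minimal sketch of that calculation follows; it is illustrative only, and current FHA fee rules may differ:

```python
def hecm_upfront_costs(max_claim_amount):
    """Sketch of HECM up-front charges as described in this report.

    Figures (2% up-front premium; origination fee of the greater of
    $2,500 or 2% of the first $200,000 plus 1% of the excess, capped
    at $6,000) reflect the report's description, not current FHA rules.
    """
    premium = 0.02 * max_claim_amount
    fee = max(2500.0,
              0.02 * min(max_claim_amount, 200_000)
              + 0.01 * max(max_claim_amount - 200_000, 0))
    fee = min(fee, 6000.0)  # origination fee cap
    return {"upfront_premium": premium, "origination_fee": fee}
```

For a $300,000 maximum claim amount, this sketch gives a $6,000 premium and a $5,000 origination fee, illustrating why up-front costs alone can exceed what a borrower would pay to open a home equity line of credit.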
The up-front insurance premium is 2 percent of the mortgage’s maximum claim amount. Also, for origination fees, lenders can charge the greater of $2,500 or 2 percent of the first $200,000 of the mortgage’s maximum claim amount plus 1 percent of the maximum claim amount over $200,000. However, origination fees are currently capped at $6,000. Further, because borrowers do not make monthly payments on the loans, interest accumulates and compounds over time, so the loan balance can rise quickly.

Seniors’ attitudes toward debt and desire to leave a bequest. Some academics have noted that seniors tend to be financially conservative and avoid debt in old age—behavior driven by their desire to leave a bequest or save for emergency expenses or long-term care costs. For example, academics have noted that some impediments to home equity extraction are behavioral and have to do with seniors’ long-held values, beliefs, and attitudes, such as a desire to maximize wealth transfer to heirs by leaving a bequest. As a result, they may be reluctant to take out a HECM, even if it could help pay for some future expenses.

Conclusions

HECMs allow seniors to tap a portion of their home equity to supplement their retirement income, but these loans can present risks to borrowers and their spouses. The growing number of borrowers who have defaulted on their HECMs and faced foreclosure in recent years highlights the importance of monitoring loan outcomes and overseeing loan servicing policies and practices in the HECM program. FHA has taken some steps to enhance the data it receives from servicers and has created foreclosure prevention options for distressed borrowers.
However, FHA could significantly improve its monitoring of loan outcomes and oversight of servicing in the HECM program in the following areas:

FHA’s lack of comprehensive termination data limits understanding of the reasons why HECMs end, how the debt is satisfied, and how well the program is helping seniors age in place. For example, by updating and providing more guidance to servicers on how to record termination reasons, FHA could improve the completeness and accuracy of HECM termination data.

FHA has not effectively assessed the performance of the HECM program. By establishing performance indicators and periodically assessing them, FHA could better oversee the program and communicate information on program performance to Congress. Further, FHA could use the performance data to help make informed decisions about any needed program changes in the future.

FHA’s internal monitoring and reporting on loan outcomes has been limited. Adopting analytic tools could better position FHA to evaluate loan outcomes and help ensure senior officials have information needed to make key decisions.

FHA has not fully analyzed the implications of how it prioritizes foreclosures for assigned HECMs. FHA’s current process generally results in no foreclosures on assigned loans with property charge defaults. Analyzing the implications of this process could help FHA optimize how it services assigned loans.

Because FHA does not currently perform on-site reviews of HECM servicers, it lacks assurance that servicers are complying with rules and program requirements. While FHA plans to begin reviewing HECM servicers in fiscal year 2020, its plan does not include development of a risk-rating system to prioritize reviews and identify servicers that should be reviewed more frequently.

CFPB does not share the results of its examinations of HECM servicers with FHA, in part because the two agencies have not completed a formal information-sharing agreement.
Sharing these results could aid FHA’s oversight of HECM servicers by providing additional information about the servicers’ performance and operations.

FHA’s collection and use of consumer complaint data could be improved. More organized collection of complaints and better monitoring of internal and external complaint data could help FHA detect and respond to emerging consumer protection issues regarding HECMs.

By addressing these issues, FHA could help ensure that the HECM program achieves program goals, effectively oversees servicers, and provides appropriate borrower protections.

Recommendations for Executive Action

We are making a total of nine recommendations, eight to FHA and one to CFPB:

The FHA Commissioner should take steps to improve the quality and accuracy of HECM termination data. These steps may include updating the termination reasons in the HERMIT system or updating the HERMIT User Guide to more clearly instruct servicers how to record termination reasons. (Recommendation 1)

The FHA Commissioner should establish, periodically review, and report on performance indicators for the HECM program—such as the percentage of terminations due to borrower defaults, the proportion of active HECMs with delinquent property charges, the amount of servicer advances, and the percentage of distressed borrowers who have received foreclosure prevention options—and examine the impact of foreclosure prevention options in the forthcoming HECM program evaluation. (Recommendation 2)

The FHA Commissioner should develop analytic tools, such as dashboards or watch lists, to better monitor outcomes for the HECM portfolio, such as reasons for HECM terminations, defaults, use of foreclosure prevention options, or advances paid by servicers on behalf of HECM borrowers. (Recommendation 3)

The FHA Commissioner should evaluate FHA’s foreclosure prioritization process for FHA-assigned loans.
Such an analysis should include the implications that the process may have for HECM borrowers, neighborhoods, and FHA’s insurance fund. (Recommendation 4)

The FHA Commissioner should develop and implement procedures for conducting on-site reviews of HECM servicers, including a risk-rating system for prioritizing and determining the frequency of reviews. (Recommendation 5)

The FHA Commissioner should work with CFPB to complete an agreement for sharing the results of CFPB examinations of HECM servicers with FHA. (Recommendation 6)

The CFPB Director should work with FHA to complete an agreement for sharing the results of CFPB examinations of HECM servicers with FHA. (Recommendation 7)

The FHA Commissioner should collect and record consumer inquiries and complaints in a manner that facilitates analysis of the type and frequency of the issues raised. (Recommendation 8)

The FHA Commissioner should periodically analyze available internal and external consumer complaint data about reverse mortgages to help inform management and oversight of the HECM program. (Recommendation 9)

Agency Comments and Our Evaluation

We provided HUD and CFPB with a draft of this report for review and comment. HUD provided written comments, which are reproduced in appendix V, that communicate FHA’s response to the report. CFPB’s written comments are reproduced in appendix VI. CFPB said that it did not object to our recommendation to complete an agreement for sharing the results of CFPB examinations of HECM servicers with FHA (recommendation 7) and that it would work to complete such an agreement with FHA. FHA agreed with six of our eight recommendations and neither agreed nor disagreed with the remaining two.

Recommendation 1. FHA agreed with our recommendation to improve HECM termination data and said it would convene a working group to update the HERMIT system and User Guide and develop clear directions for HECM servicers to record termination reasons in HERMIT.

Recommendation 2.
Regarding our recommendation on HECM performance indicators and program evaluation, FHA agreed that periodic review and reporting of HECM performance indicators is critically important and said it would work to expand its reporting to include the level of foreclosure prevention activity. However, FHA added that there were no HECM metrics for early default or delinquency rates, as those measures are linked to the amortizing nature of forward mortgages. We agree that early default and delinquency rates are not suitable metrics for HECMs, and our draft report did not suggest that they are. Our report focuses on metrics that would be pertinent to HECMs and that would provide additional insight into HECM loan performance. These include the percentage of HECM terminations due to borrower defaults, the proportion of active HECMs with delinquent property charges, and the amount of funds servicers have advanced on behalf of borrowers. We revised the recommendation in our final report to more specifically describe the types of performance indicators that FHA should establish and report on. In addition, FHA disagreed with a statement in our draft report that its evaluation of the HECM program has been limited. FHA said it engages in robust HECM program evaluation and cited an example that led to recent changes in FHA’s appraisal practices for HECMs. While our draft report described the change in FHA’s appraisal practices, we updated our final report to include reference to the FHA study that prompted the appraisal change. However, we maintain that FHA’s evaluation of the HECM program has been limited because the last comprehensive program evaluation was completed 19 years ago and FHA has not assessed the impact of HECM foreclosure prevention options.

Recommendations 3 and 4. FHA agreed with our recommendations to develop analytic tools for monitoring HECM loan outcomes and to evaluate its foreclosure prioritization process for FHA-assigned loans.
Regarding the latter, FHA said that it is evaluating alternative disposition options to reduce the number of loans that must go through foreclosure and that it would take steps to evaluate the impact of its prioritization process to assist in future decision-making.

Recommendation 5. FHA agreed with our recommendation to develop and implement procedures for conducting on-site reviews of HECM servicers, including a risk-rating system for prioritizing and determining the frequency of reviews. As noted in our draft report, FHA said it is in the process of updating procedures for on-site reviews and plans to implement them in fiscal year 2020. FHA disagreed with a statement in our draft report that FHA’s oversight of HECM servicers is limited. FHA said the HECM servicing community is small, which allows the agency to maintain regular communication with HECM servicers, including through training sessions and industry working group meetings. Our draft report acknowledged FHA’s communications with servicers, but these activities are not a substitute for in-depth compliance reviews of servicers’ operations. As our draft report stated, FHA has not conducted on-site HECM servicer reviews since fiscal year 2013. Given the 5-year lapse in FHA’s use of this key oversight tool, we maintain that FHA’s oversight of HECM servicers has been limited.

Recommendation 9. FHA agreed with our recommendation to periodically analyze internal and external consumer complaint data about reverse mortgages. FHA said it is expanding its data and reporting capabilities as part of an information technology modernization initiative. FHA also said that routing consumer inquiries through the FHA Resource Center should improve data collection and analysis.
FHA did not explicitly agree or disagree with our recommendations to work with CFPB to complete an agreement for sharing examination results and to collect and record consumer inquiries and complaints in a manner that facilitates analysis (recommendations 6 and 8, respectively). FHA said it would explore opportunities to coordinate with CFPB where appropriate. FHA also said that routing inquiries through the FHA Resource Center would help identify common issues, track servicer performance, and inform policy decisions. Fully implementing our recommendations will help ensure that FHA has the information it needs to effectively oversee the HECM program. We are sending copies of this report to the Secretary of the Department of Housing and Urban Development, the Director of the Consumer Financial Protection Bureau, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or CackleyA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology This report examines issues related to reverse mortgages made under the Home Equity Conversion Mortgage (HECM) program administered by the Department of Housing and Urban Development’s (HUD) Federal Housing Administration (FHA). The Consumer Financial Protection Bureau (CFPB) also plays a role in overseeing reverse mortgages, including HECMs. 
Our objectives were to examine (1) what FHA data show about HECM terminations, servicer advances, and the use of foreclosure prevention options; (2) FHA’s assessment and monitoring of HECM portfolio performance, servicer advances, and foreclosure prevention options; (3) FHA’s and CFPB’s oversight of HECM servicers; (4) how FHA and CFPB collect, analyze, and respond to consumer complaints about HECMs; and (5) how and why the market for HECMs has changed in recent years. To address all of our objectives, we reviewed relevant laws, regulations, and requirements, such as HECM authorizing legislation, the Reverse Mortgage Stabilization Act of 2013, FHA regulations, and mortgagee letters governing the HECM program. We also interviewed FHA and CFPB officials and staff from other relevant HUD offices such as the Office of Policy Development and Research and the Office of General Counsel. We reviewed FHA’s annual reports to Congress on the financial status of the Mutual Mortgage Insurance Fund, actuarial reports on the HECM portfolio, and FHA’s annual management reports. We also reviewed our prior reports and reports by HUD’s Office of Inspector General about the HECM program. Additionally, we identified the largest HECM servicers using FHA data on the number of loans serviced as of the end of fiscal year 2018. We found that five companies serviced more than 99 percent of the HECM portfolio (excluding loans assigned to FHA, which are serviced by an FHA contractor) as of the end of fiscal year 2018. We developed a questionnaire to solicit information applicable to our objectives from these five servicers. We took steps to verify the information gathered in the questionnaire. We reviewed responses for completeness and held teleconferences with each HECM servicer to discuss, clarify, and amend responses. Where possible, we corroborated servicers’ responses with information or analysis from other sources, such as our analysis of FHA loan-level data or FHA documents. 
We use summary statements and illustrative examples from these questionnaires and our interviews with the five servicers throughout the report. We also interviewed representatives from five legal aid organizations representing HECM borrowers in the states of California, Florida, New York, Texas, and Washington. We selected these states because they had the highest number of HECM originations in the past decade and because they provided some geographic diversity; the five states span the West (California), Northwest (Washington), Northeast (New York), Southeast (Florida), and South (Texas). We selected the specific legal aid organizations within those states because they represented a large number of HECM borrowers, according to organization representatives. We conducted semistructured interviews with organization representatives that included questions on the top consumer protection issues facing HECM borrowers, how recent HECM program changes may have helped borrowers delay or avoid foreclosure, and characteristics of HECM borrowers that may affect their ability to file consumer complaints. We use summary statements and illustrative examples from these interviews throughout the report. HECM Terminations, Servicer Advances, and Foreclosure Prevention Options To address the first objective, we analyzed FHA data to determine the number of and reasons for HECM terminations, the amounts of servicer advances, and the number of borrowers approved for selected foreclosure prevention options (for example, repayment plans). We used data from the Home Equity Reverse Mortgage Information Technology (HERMIT) system, which FHA adopted in fiscal year 2013. FHA provided us a HERMIT case detail table from its Single Family Data Warehouse that contained loan-level information as of the end of fiscal year 2018. We separately obtained several ad hoc HERMIT reports from FHA’s HERMIT system contractor, as described below. 
For some of our analyses, we merged data from the ad hoc reports with data from the case detail table using the unique FHA case number for each HECM. Unless otherwise noted, we analyzed data for the 5-year period spanning fiscal years 2014–2018. We assessed the reliability of data from the HERMIT system by reviewing FHA documentation about the data system and data elements. For example, we reviewed the HERMIT User Guide and notes on HERMIT system updates. Additionally, we interviewed FHA and contractor staff knowledgeable about the HERMIT system and data to discuss interpretations of data fields and trends we observed in our analyses. We also conducted electronic testing, including checks for duplicate loans, outliers, missing data fields, and erroneous values. Where appropriate, we removed from our analyses any loans missing an endorsement (insurance approval) date as well as cases with erroneous values. When possible, we corroborated our analyses with external reports such as FHA’s annual reports to Congress, management reports, and production reports. Based on these steps, we determined the data we used from the HERMIT system were sufficiently reliable for summarizing trends and generating descriptive statistics for HECM terminations, servicer advances, and selected foreclosure prevention options over the 5-year period. Termination Analysis We analyzed FHA loan-level data from the HERMIT system to determine the total number of terminated HECMs and reasons for terminations by fiscal year. We first identified terminations occurring in fiscal years 2014–2018 using data fields for case status and termination date (see table 5). We then removed any terminated loans that had previously been assigned to FHA (16,008) using the data field that records the date FHA accepted assignment of the loan.
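A minimal sketch of these two screening steps (keeping loans terminated within the analysis window, then dropping loans previously assigned to FHA) is shown below; the data-field names and sample records are hypothetical, not HERMIT’s actual schema.

```python
from datetime import date

# Hypothetical records; "assignment_date" stands in for the HERMIT field
# recording when FHA accepted assignment of the loan.
loans = [
    {"case": "001", "termination_date": date(2015, 3, 1), "assignment_date": None},
    {"case": "002", "termination_date": date(2016, 7, 9), "assignment_date": date(2014, 1, 2)},
    {"case": "003", "termination_date": None, "assignment_date": None},
]

def screen_terminations(loans, start, end):
    """Keep loans terminated in [start, end] that were never assigned to FHA."""
    terminated = [
        loan for loan in loans
        if loan["termination_date"] is not None
        and start <= loan["termination_date"] <= end
    ]
    # FHA generally does not foreclose on assigned HECMs that default, so
    # keeping them would understate the share of terminations from defaults.
    return [loan for loan in terminated if loan["assignment_date"] is None]

kept = screen_terminations(loans, date(2013, 10, 1), date(2018, 9, 30))
print([loan["case"] for loan in kept])  # prints ['001']
```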
We removed these loans because FHA officials told us the agency generally does not foreclose on FHA-assigned HECMs that default and keeping them in the analysis would have resulted in understating the proportion of terminations stemming from defaults. Accordingly, the denominator for our terminations analysis was 256,147 loans (272,155 total terminations minus the 16,008 loans previously assigned to FHA). We then identified the reported termination reasons for the 256,147 loans. We analyzed loan-level data from the HERMIT system to identify the number of loans that fell into various termination reason categories. To identify terminations stemming from a HECM becoming due and payable, we used data from two reports that we obtained from FHA’s HERMIT system contractor: the Default Key Dates Report and the Due and Payable Delinquency Report. From these reports, we identified the number of terminations due to a borrower’s death, conveyance of title, default due to unpaid property charges, default due to failure to meet occupancy or residency requirements, and default due to failure to keep the home in good repair. To identify terminations stemming from repayment, refinancing, moving, or other (undetermined) reasons, we used information on case substatus from the HERMIT case detail table from the Single Family Data Warehouse. Our undetermined reasons category included loans for which the case substatus either was labeled “terminate-other” or showed how the debt was satisfied (such as through a deed-in-lieu of foreclosure, foreclosure, or short sale) rather than providing a termination reason. For the full results of our terminations analysis, see appendix III. Servicer Advances Analysis We analyzed servicer advances to HECM borrowers using data from an ad hoc HERMIT report we requested from FHA’s HERMIT system contractor. We analyzed the data to determine the amounts and types of servicer advances in fiscal years 2014 through 2018 for terminated HECMs. 
We distinguished between servicer advances for unpaid property charges and servicer advances for other costs. Examples of the latter are attorney, trustee, and appraisal fees typically incurred during the foreclosure process. For each year and for the 5-year period as a whole, we calculated total servicer advances and the amount and percentage of advances for property charges and for other costs. Additionally, we distinguished between servicer advances for unpaid property charges before and after a HECM borrower’s death using the date of death of the last surviving borrower in HERMIT. This allowed us to determine the amount of servicer advances for unpaid property charges on behalf of living borrowers. We calculated the total amount of these advances over the 5-year period as well as the mean, median, and 25th and 75th percentile values. We also calculated the number and percentage of loans for which property charge advances on behalf of living borrowers were less than $2,000 (the threshold for one of FHA’s foreclosure prevention options). Foreclosure Prevention Options Analysis We analyzed data from HERMIT on the use of selected foreclosure prevention options—repayment plans and at-risk extensions—for borrowers who defaulted because of unpaid property charges. We analyzed data from April 2015 (the effective date of FHA’s current repayment plan and at-risk extension policies) through fiscal year 2018. To conduct the analysis of repayment plans, we used the HERMIT Due and Payable Delinquency Report noted previously, which includes data fields for loan default status and the dates borrowers were approved for a repayment plan. We calculated the percentage of borrowers with property charge defaults who were approved for repayment plans during the period examined. 
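The summary statistics described above for property charge advances, that is, the mean, median, 25th and 75th percentile values, and the share of loans under the $2,000 threshold, can be computed as in this sketch; the dollar amounts are synthetic.

```python
import statistics

# Synthetic per-loan property charge advances (in dollars) on behalf of
# living borrowers; real figures come from the ad hoc HERMIT report.
advances = [500, 900, 1200, 1500, 1800, 2500, 4000, 7500]

mean = statistics.mean(advances)                 # 2487.5
median = statistics.median(advances)             # 1650.0
q1, _, q3 = statistics.quantiles(advances, n=4)  # 25th and 75th percentiles

# Share of loans whose advances fall below the $2,000 threshold used for
# one of FHA's foreclosure prevention options.
share_under_2000 = sum(a < 2000 for a in advances) / len(advances)
print(f"mean={mean:.0f} median={median:.0f} q1={q1:.0f} q3={q3:.0f} "
      f"under $2,000: {share_under_2000:.0%}")
```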
To conduct the analysis of at-risk extensions, we requested an ad hoc report from FHA’s HERMIT system contractor showing whether and when borrowers had been approved for at-risk extensions and appended it to the default status within the Due and Payable Delinquency Report using FHA case numbers. We calculated the percentage of borrowers with property charge defaults who were approved for at-risk extensions during the period examined. We also reviewed and summarized information that FHA provided us from HERMIT on nonborrowing spouses who applied for mortgagee optional election assignments from June 2015 (the effective date of FHA’s mortgagee optional election assignment policy) through fiscal year 2018. FHA provided information on the number of requested, approved, and denied mortgagee optional election assignments during that period. We also reviewed documentation from FHA and interviewed agency officials about the mortgagee optional election assignment process and reasons for denials. For the denied mortgagee optional election assignments, we reviewed information that FHA provided us from HERMIT on the current status of the associated loans as of May 31, 2019. For example, for the denied mortgagee optional election assignments, FHA determined whether the loan had been terminated as of that date. For those that had terminated, we summarized whether the debt was paid off or whether the debt was satisfied because of a foreclosure, deed-in-lieu of foreclosure, or short sale. Performance Assessment and HECM Portfolio Monitoring To address the second objective, we reviewed agency reports and interviewed agency officials to determine how the agency assesses the performance of the HECM program, including the use of any performance indicators or program evaluations. For example, we reviewed HUD’s strategic plan for fiscal years 2018–2022 and its most recent annual performance report to identify any goals and performance indicators related to the HECM program. 
Additionally, we reviewed program evaluations completed for the HECM program. We also interviewed FHA and HUD Office of Policy Development and Research officials about previous program evaluations and HUD’s plans for forthcoming evaluations of the HECM program. We compared FHA’s practices against leading practices identified in our previous work on assessing program performance and against Office of Management and Budget (OMB) policies and procedures on managing federal credit programs (OMB Circular A-129). Additionally, we reviewed FHA documents and interviewed FHA officials concerning the agency’s internal reporting and analysis of the HECM portfolio. For example, we reviewed examples of regular and ad hoc reports FHA received from its HERMIT system contractor. These internal reports contained information on HECM origination, assignment, and termination activity and HECM defaults. We interviewed FHA officials to understand the purpose of the reports, when they were developed, and how agency management uses them. We compared FHA’s internal reporting practices to OMB Circular A-129 on reporting mechanisms and formats for federal credit programs. FHA’s and CFPB’s Oversight of Servicers To address the third objective, we reviewed FHA and CFPB policies and procedures for overseeing HECM servicers and interviewed agency staff with oversight responsibilities. To assess the extent to which FHA oversees HECM servicers’ compliance with servicing requirements, we requested information on the number of on-site monitoring reviews of HECM servicers that FHA completed from fiscal years 2010 through 2019. We also reviewed corrective actions FHA can take to address noncompliance. We reviewed and summarized a nongeneralizable sample of three reports from on-site servicer reviews FHA conducted in fiscal year 2013, the most recent year in which FHA had completed a review. 
Additionally, we interviewed the director of FHA’s Quality Assurance Division, which is responsible for conducting on-site reviews of FHA-approved lenders and servicers, about the division’s past practices for reviewing HECM servicers and plans for future reviews. We compared FHA’s practices and plans to criteria in OMB Circular A-129 regarding the frequency, targeting methodology, and other aspects of on-site lender and servicer reviews. Further, we interviewed FHA officials about the extent of information sharing between FHA and CFPB on HECM servicer oversight. To examine CFPB’s oversight of HECM servicers, we reviewed CFPB’s reverse mortgage examination procedures and the examinations completed under those procedures as of fiscal year 2018. We also reviewed CFPB’s methodology for selecting reverse mortgage servicers for examination and documentation on its plans for future examinations. We reviewed CFPB’s examination findings and corrective actions as of August 2019. We interviewed CFPB officials about the examination process and agency efforts to share examination results with FHA. We reviewed statutes and regulations related to CFPB’s authority to share the results of its examinations, and we compared CFPB’s information-sharing efforts with FHA against practices for interagency collaboration we identified in previous work. Consumer Complaints To address our fourth objective, we analyzed CFPB data on reverse mortgage consumer complaints from the bureau’s online website, called the Consumer Complaint Database. The database includes information provided by consumers on their location (state), the company they are complaining about, and the nature of their complaint. For example, consumers can submit narratives describing their complaints about reverse mortgage lenders or servicers. 
Because CFPB had published an analysis of reverse mortgage consumer complaints using data from calendar years 2011 through 2014, we analyzed reverse mortgage complaints and narratives received by the bureau from calendar years 2015 through 2018. We analyzed all 2,472 complaints filed in those 4 years to determine the number of complaints by year, state, submission method (for example, internet, phone, or email), and company. For the analysis by submission method, we compared the results to those for complaints about all types of mortgages filed during the same period. To identify patterns in consumer-described issues about reverse mortgages, we reviewed a generalizable sample of 100 complaint narratives and categorized these complaints by topic. For this analysis, two independent reviewers read the complaints and categorized them into predetermined topics based on their content. We used nine complaint issue topics, including complaints where the consumer (or someone complaining on behalf of the consumer) said he or she (1) was at risk of foreclosure or in foreclosure; (2) was charged unfair interest rates, fees, or costs; (3) experienced problems after the loan was transferred to a new servicer; (4) had issues with, or defaulted as a result of, property taxes, insurance, or other property charges; (5) experienced poor communication on a servicing or lending issue; (6) had an issue involving occupancy requirements; (7) had concerns or issues involving the management of the estate after the borrower died or left the property; (8) had difficulties gaining approval for a mortgagee optional election assignment or recognition of a nonborrowing spouse; or (9) experienced problems during loan origination. If a complaint narrative in our sample did not contain enough information or was not clear enough to determine a complaint topic, we replaced it with another randomly selected complaint narrative. 
In cases where the two reviewers categorized a complaint differently, a third independent analyst read the complaint narrative and adjudicated the difference to place the complaint in a topic category. We calculated confidence intervals for these categories at the 95 percent confidence level. We determined that the CFPB data were sufficiently reliable for the purposes described above by reviewing CFPB documentation and reports from CFPB’s Office of Inspector General on CFPB’s consumer complaint database and by interviewing CFPB officials about our interpretation of data fields. Also, we interviewed CFPB officials about their collection, analysis, and use of the consumer complaint data. To determine the extent to which FHA collects consumer inquiries and complaints about HECMs, we reviewed the HECM-related calls received by FHA’s National Servicing Center and the FHA Resource Center from calendar years 2015 through 2018. We calculated the total number of HECM-related calls each center received over that period. The data from both centers included fields to capture a description of the issue raised by the caller. However, unlike CFPB’s consumer complaint data, the information in the issue description was recorded by FHA customer service staff (rather than the complainants themselves) and did not differentiate between inquiries and complaints. We determined there was not enough information in these descriptions to perform an analysis similar to the one we performed on CFPB’s consumer complaints. Both the National Servicing Center and the FHA Resource Center record the reasons for calls. However, neither entity records this information in a consistent or standardized way that would allow for analysis. For example, the data we reviewed from the National Servicing Center included about 100 reasons. 
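For the generalizable sample of 100 complaint narratives, a 95 percent confidence interval for a complaint-topic proportion can be approximated as in the sketch below. The normal approximation and the finite population correction (for sampling without replacement from the 2,472 complaints) are illustrative assumptions rather than a statement of the exact interval method used for the report.

```python
import math

def proportion_ci(successes, n, population, z=1.96):
    """Normal-approximation CI for a proportion, with a finite
    population correction for sampling without replacement."""
    p = successes / n
    fpc = math.sqrt((population - n) / (population - 1))
    half_width = z * math.sqrt(p * (1 - p) / n) * fpc
    return p - half_width, p + half_width

# Example: 30 of the 100 sampled narratives fall in a given topic
# (a hypothetical count), drawn from the 2,472 total complaints.
low, high = proportion_ci(successes=30, n=100, population=2472)
print(f"95% CI for the topic share: [{low:.3f}, {high:.3f}]")
```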
Additionally, we reviewed CFPB and FHA policies and procedures for collecting and addressing consumer complaints and interviewed officials on how consumer complaints were incorporated into their oversight of HECM servicers. We interviewed officials from both agencies about their collection and use of customer complaint data. We also interviewed CFPB and FHA officials about the extent to which they share consumer complaint data or access and use the other agency’s data. Finally, we compared CFPB’s and FHA’s efforts against federal internal control standards for using quality information and against approaches we identified in prior work for enhancing collaboration across agencies. HECM Market, Originations, and Take-Up Rates To address our fifth objective, we analyzed FHA data on HECM originations from calendar years 1989 through 2018 to identify any trends in HECM program activity. Additionally, using FHA and Census Bureau data, we calculated HECM take-up rates—the ratio of HECM originations to eligible senior homeowners—from calendar years 2000 through 2017. We also developed an econometric model to examine, to the extent possible, factors affecting HECM take-up rates from calendar years 2000 through 2016 (the last year we could include in the model due to data constraints). Following the existing research literature, we hypothesized that HECM loan originations could be affected by several demand- and supply-related factors that could be represented by demographic and socioeconomic characteristics, housing market conditions, and product features. Accordingly, our model used a variety of data from FHA, the Census Bureau, the Federal Housing Finance Agency, and other sources. For a detailed description of our econometric model—including the model specification, factors used, data sources, and results—and a list of selected studies we consulted to develop the model, see appendix II. 
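The take-up rate itself is a simple state-year ratio of HECM originations to eligible senior homeowners; a sketch with synthetic counts follows.

```python
# Synthetic state-year counts; actual counts come from FHA loan-level
# data and Census Bureau estimates of owner-occupied householders 65+.
originations = {("CA", 2016): 4500, ("FL", 2016): 3800}
eligible_homeowners = {("CA", 2016): 2_100_000, ("FL", 2016): 1_900_000}

take_up = {key: originations[key] / eligible_homeowners[key] for key in originations}
for (state, year), rate in sorted(take_up.items()):
    print(f"{state} {year}: {rate:.4%}")
```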
We also reviewed relevant literature and interviewed academic and HUD economists about FHA policy changes and behavioral and structural factors (for example, consumers’ perception of reverse mortgages) that we could not account for in our econometric model but that may influence HECM take-up rates. These individuals included three academic economists who have conducted extensive research on reverse mortgages and economists from FHA and HUD’s Office of Policy Development and Research. We present summary information about these factors in this report. We conducted this performance audit from July 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Description of and Results for GAO’s Econometric Model of Home Equity Conversion Mortgage Take-Up Rates We developed an econometric model to examine, to the extent possible, factors associated with Home Equity Conversion Mortgage (HECM) take-up rates—the ratio of HECM originations to eligible senior homeowners—using data from calendar year 2000 through 2016. Take-up rates provide an indication of how popular HECMs are among the population of senior homeowners. A number of factors may have affected the take-up rates over this period. For instance, it has been asserted that demand for HECMs would be high for elderly people who are house-rich but cash-poor, but behavioral factors such as their desire to leave a bequest could limit demand. Also, the limited number of large, well-known lenders could constrain the supply of HECMs. Furthermore, several FHA policy changes to the HECM program may have affected the number of loan originations.
Following the existing literature, we hypothesized that HECM loan originations could be affected by several demand- and supply-related factors that could be represented by demographic and socioeconomic characteristics, housing market conditions, product features, program policy changes, and behavioral and structural factors. Model Specification The general specification of the model we used, which is a quasi-reduced form of the net effect of demand and supply factors on HECM take-up rates, is as follows: Y_it = θ + α_i + γ_t + X_it β + ε_it. Y_it is the dependent variable, the take-up rate, representing the ratio of HECM originations to eligible senior homeowners in state (i) in year (t). An eligible senior homeowner is an owner-occupied householder aged 65 or older. Both α_i and γ_t control, respectively, for state-specific (but time-invariant) and year-specific (but state-invariant) observable and unobserved factors. They help to minimize omitted variable bias that could be caused by excluding time-invariant or state-invariant variables. The latter, which are year fixed-effects (that is, variables that change over time but are constant across the states), would pick up average differences in take-up rates over the years. These factors would include changes in HECM program policies and market conditions over time, such as the exit of large, well-known HECM lenders or investors. In general, using the year fixed-effects precluded the estimation of the impact of variables that are state-invariant (for example, interest rates). The state fixed-effects are used to control for average differences in take-up rates across the states (that is, variables that differ across the states but are constant over time). These effects would include regulatory variations across states. The vector X_it captures measured variables represented by demographic and socioeconomic characteristics, housing market conditions, and product features that vary across states and over time.
Given that the time-invariant and state-invariant factors would be accounted for by the state fixed-effects and year fixed-effects, respectively, the measured variables capture how changes in these variables within states (that is, over time) could affect take-up rates. θ is the constant term. ε, the regression error term, represents random and other unobserved factors that could vary across the states and over time, such as random changes in risk behavior of HECM borrowers and lenders. It also captures errors due to misspecification and measurement. Data Sources The data sources for our analysis are as follows: Census Bureau. The data include demographic, socioeconomic, and housing characteristics in geographic areas. The data are from the Integrated Public Use Microdata Series National Historical Geographic Information System (IPUMS NHGIS) for 2000; 1-year American Community Survey data from the American FactFinder for 2005–2009, and 1-year American Community Survey data from IPUMS NHGIS for 2010–2016. We interpolated the data for 2001–2004 using all data available for the other years: 2000 and 2005–2016. All the data are for seniors aged 65 years or older and at the state level. Federal Housing Finance Agency. House price indexes at the state level, 2000–2016. Federal Reserve Bank of New York. Federal Reserve Bank of New York Consumer Credit Panel/Equifax: Mortgage debt balances of seniors 62 years or older, state level, 2003–2016. Survey of Consumer Finances: Triennial data on family net worth, national level, 2000–2016. Federal Reserve Bank of St. Louis’s Federal Reserve Economic Data. Consumer price index for all urban consumers, national level, 2000–2016. Effective federal funds rate, national level, 2000–2016. Federal Housing Administration (FHA). HECM loan-level data from the Single Family Data Warehouse, available yearly, 2000–2017. The data include when the loan was endorsed by FHA, property location, appraised home value, and maximum claim amount.
Factors That Could Affect HECM Take-Up Rates The list of potential explanatory variables we used in the model is provided below. The data are measured at the state level and are available from 2000 through 2016 (unless indicated otherwise). Also, the variables are for senior householders, aged 65 or older (unless indicated otherwise). All monetary values are in 2016 dollars using the Consumer Price Index for All Urban Consumers. The data sources are indicated in brackets (see the data sources above for details). Demographic and socioeconomic characteristics [Census Bureau]. Fraction 75 years or older in occupied housing units. Fraction of senior householders who are married or those who are unmarried females. Fraction African American or Hispanic. Fraction of individuals 65 years or older with high school education or some college education, or with college, graduate, or professional degree. Fraction in the labor force: the ratio of the labor force (the employed and the unemployed) to the civilian noninstitutionalized adult population (65 years or older). Fraction in poverty. Median household income (natural logarithm). Ratio of family net worth of individuals 65 years or older to house value. Net worth is measured as the difference between families’ gross assets and liabilities using triennial data at the national level. House value is measured as the ratio of aggregate house value to number of owner-occupied housing units. Housing market conditions. House price changes: house price growth (5-year intervals prior to the observation) and house price volatility (the standard deviation of annual house price percent change in the 5 years prior to the observation). Effective federal funds rate (percent). [Federal Reserve Bank of St. Louis] Home equity per senior homeowner (natural logarithm), 1-year lag. Home equity is measured as the aggregate house value of owner-occupied housing units minus total mortgage debt.
Total mortgage debt comprises aggregate mortgage, home equity loan, and home equity line of credit balances of individuals 62 or older (2003–2016). [Census Bureau; Federal Reserve Bank of New York/Equifax] Ratio of individuals aged 62 or older with a home equity loan to senior homeowners, 1-year lag (2003–2016). [Federal Reserve Bank of New York/Equifax; Census Bureau] Ratio of individuals aged 62 or older with a home equity line of credit to senior homeowners, 1-year lag (2003–2016). [Federal Reserve Bank of New York/Equifax; Census Bureau] Fraction of owner-occupied housing units with ratio of selected monthly housing costs to household income greater than or equal to 35 percent. Product features. FHA loan limit: proportion of HECM loans in a state and year for which the appraised home value is more than the maximum claim amount; that is, the FHA loan limit is binding. The maximum claim amount equals the minimum of the appraised home value and the FHA loan limit. Although we did not directly include other variables that could affect HECM take-up rates in our model partly due to lack of data, we included year fixed-effects and state fixed-effects to minimize the omitted-variable problem associated with state-invariant variables and time-invariant variables, respectively. These included several FHA policy changes to the HECM program and behavioral and structural factors, as discussed earlier in this report. We used a state as the geographic area instead of a smaller area, such as ZIP code. The data on HECM originations are available at the household (or family) level from FHA. However, the factors used in the model (demographic and socioeconomic characteristics and housing market conditions) are generally available at the state level or at the ZIP code level from the Census Bureau and other sources. There are advantages and disadvantages to using state-level or ZIP-code-level data. Given the low HECM take-up rates (see fig.
7 earlier in this report), using ZIP-code data would generally imply very low, if not zero, take-up rates across a large number of ZIP codes, which would make it harder to identify effects from our model. Also, not all of the data for the factors used in the model are available for every ZIP code with a HECM origination—including the home equity extraction variables—which would lead to the exclusion of some areas, resulting in potential sample-selection bias. On the other hand, using ZIP-code-level data could allow for more heterogeneity within certain states, and certain variables, such as house price changes, could be closer to what the homeowner experiences when measured at the ZIP code level. We decided to use state-level data because of our concerns about potential sample-selection bias and the quality of data at the ZIP code level, although using state-level data could limit heterogeneity in the data across geographic areas.

Description of Estimation Methodology and Results

We estimated panel data of state-year observations of the model specified above using fixed-effects estimation. Because of data limitations with some of the key variables—home equity and home equity extraction via loans or lines of credit—and because we used a 1-year lag of these variables, we estimated the model from 2004 through 2016. We also excluded the District of Columbia, which was an outlier, with a take-up rate that was 4.5 times the national average. The list of the variables we used and the estimation results are provided in tables 6 and 7, respectively, at the end of this appendix.

The standard fixed-effects estimates are reported in column 1 (the base model) of table 7. We also report fixed-effects estimates that account for spatial and temporal dependence in columns 2 through 4—column 2 estimates the base model, column 3 excludes the variables for home equity extraction from the base model, and column 4 excludes the year fixed-effects from the base model.
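The two-way fixed-effects estimation described above can be illustrated with the standard within (demeaning) transformation. The sketch below uses a simulated, balanced state-year panel with made-up covariates and coefficients; it is an illustration of the technique only and does not reproduce GAO's data, variable definitions, or the Driscoll-Kraay standard errors reported in table 7.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical balanced panel: 10 states x 13 years (2004-2016),
# mirroring the state-year structure of the estimation sample.
states = [f"S{i}" for i in range(10)]
years = list(range(2004, 2017))
df = pd.DataFrame([(s, t) for s in states for t in years],
                  columns=["state", "year"])

# Made-up covariates standing in for a few of the model's variables
df["hp_growth"] = rng.normal(0.15, 0.10, len(df))    # 5-year house price growth
df["hp_vol"] = rng.normal(0.05, 0.02, len(df))       # 5-year price volatility
df["log_equity"] = rng.normal(11.0, 0.50, len(df))   # log home equity, lagged
df["growth_x_vol"] = df["hp_growth"] * df["hp_vol"]  # interaction term

# Simulate take-up rates with known coefficients plus state and year effects
beta = {"hp_growth": 0.5, "hp_vol": 2.0, "growth_x_vol": 5.0, "log_equity": 0.3}
state_fe = df["state"].map(dict(zip(states, rng.normal(0, 1, len(states)))))
year_fe = df["year"].map(dict(zip(years, rng.normal(0, 1, len(years)))))
df["takeup"] = (state_fe + year_fe
                + sum(b * df[k] for k, b in beta.items())
                + rng.normal(0, 0.01, len(df)))

# Two-way within transformation: subtract state and year means and add back
# the grand mean (exact for a balanced panel), then run OLS on the residuals.
X_cols = list(beta)
dd = df.copy()
for c in X_cols + ["takeup"]:
    dd[c] = (df[c]
             - df.groupby("state")[c].transform("mean")
             - df.groupby("year")[c].transform("mean")
             + df[c].mean())

coef, *_ = np.linalg.lstsq(dd[X_cols].to_numpy(), dd["takeup"].to_numpy(),
                           rcond=None)
print(dict(zip(X_cols, np.round(coef, 2))))
```

For a balanced panel, demeaning both the outcome and the regressors this way is numerically identical to including a full set of state and year dummies, which is why the simulated fixed effects drop out and the known coefficients are recovered up to sampling noise.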
We focused on these estimates because spatial correlations may be present, as states are likely to be subject to both observable and unobservable common disturbances, and failure to account for these would yield inconsistent estimates of the standard errors.

Factors Associated with HECM Take-Up Rates

Our econometric estimates indicated that several demographic and socioeconomic characteristics and housing conditions are associated with take-up rates, using data across states from 2004 through 2016. The results discussed below, which are based primarily on the estimates in column 2 of table 7, are statistically significant at the 10, 5, or 1 percent level. Because the fixed-effects technique controls for the effects of both observable and unobservable factors that vary across states (but are time-invariant), the estimates of the measured effects are for within-state variations only, and the results are interpreted accordingly.

House price changes. The interaction term for house price growth and house price volatility is positive and significant at the 1 percent level. This implies that within states, take-up rates were higher when house price growth was large and when there was a history of house price volatility, compared with either relatively low house price appreciation or stable house prices. This result is consistent with senior homeowners using reverse mortgages to insure against house price declines, which is supported by the positive and significant effect of house price volatility by itself. On the other hand, the weak significance of house price growth by itself (at the 10 percent level) provides only modest support for senior homeowners using reverse mortgages purely to extract home equity.

Home equity. Within states, take-up rates were higher when home equity of senior homeowners was high, significant at the 1 percent level.

Fractions of senior homeowners with a home equity loan or home equity line of credit. Within states, take-up rates were higher when the fractions of senior homeowners with a home equity loan or home equity line of credit were high, significant at the 1 percent and 10 percent levels, respectively. Because these loans were outstanding as of the prior year, it is likely that borrowers used HECMs to pay them off.

Fraction of owner-occupied housing units with ratio of housing costs to household income greater than or equal to 35 percent. Within states, take-up rates were higher when the ratio of housing costs to household income was high, significant at the 1 percent level.

Fractions of seniors with high school or college education. Within states, take-up rates were higher when the fractions of seniors with high school or college education were high, significant at the 1 percent and 10 percent levels, respectively.

Median household income. Within states, take-up rates were higher when incomes of senior households were high, significant at the 5 percent level.

Fraction of senior households who were married. Within states, take-up rates were lower when the fraction of married senior households was high, significant at the 10 percent level.

Fraction of homes in states with binding FHA loan limit. The effect of the FHA loan limit on take-up rates was negative, although it was generally not statistically significant.

Robustness Tests, Caveats, and Limitations of Our Econometric Analysis

We estimated other specifications of our model to test the robustness and reasonableness of our results. The alternative specifications, described below, yielded estimates similar to those of our original model.
- We estimated the model excluding the variables for home equity loans and home equity lines of credit, which are alternative channels of home equity extraction, because they could be endogenous (see column 3 of table 7).
- We estimated the model excluding the year fixed-effects (see column 4 of table 7).
- We estimated the model using the number of senior housing units (instead of senior homeowners) within a state to normalize the number of HECMs, in order to account for nonhomeowners who might become homeowners.

We note the following caveats and limitations of our study:
- We were not able to include some factors that could affect HECM take-up rates, including FHA program policy changes and behavioral and structural factors previously discussed in this report.
- Some of our estimates could be different if we used areas smaller than a state as the units of observation, such as ZIP codes or counties.
- The estimates represent the average effects for all states and for all periods we analyzed, but the effects could differ for specific states or specific periods.
- Our analysis pertains to the period that we analyzed and may not be generalizable to other periods.

Selected Studies

To help develop our HECM take-up rate model, we consulted the following studies.

1. Banks, James, Richard Blundell, Zoe Oldfield, and James P. Smith. “Housing Price Volatility and Downsizing in Later Life.” National Bureau of Economic Research Working Paper 13496. Cambridge, Mass.: National Bureau of Economic Research, October 2007. Accessed April 30, 2019. http://www.nber.org/papers/w13496.
2. Chatterjee, Swarn. “Reverse Mortgage Participation in the United States: Evidence from a National Study.” International Journal of Financial Studies, vol. 4, no. 5 (2016): pp. 1–10.
3. Consumer Financial Protection Bureau. Reverse Mortgages: Report to Congress. Washington, D.C.: June 28, 2012.
4. Davidoff, Thomas. Reverse Mortgage Demographics and Collateral Performance. February 25, 2014. Accessed November 19, 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2399942.
5. Davidoff, Thomas. “Supply Constraints Are Not Valid Instrumental Variables for Home Prices Because They Are Correlated With Many Demand Factors.” Critical Finance Review, vol. 5, no. 2 (2016): pp. 177–206.
6. Davidoff, Thomas, Patrick Gerhard, and Thomas Post. Reverse Mortgages: What Homeowners (Don’t) Know and How It Matters. October 24, 2016. Accessed November 19, 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2528944.
7. Driscoll, John C., and Aart C. Kraay. “Consistent Covariance Matrix Estimation with Spatially Dependent Panel Data.” Review of Economics and Statistics, vol. 80 (1998): pp. 549–560.
8. Golding, Edward, and Laurie Goodman. “To Better Assess the Risk of FHA Programs, Separate Reverse and Forward Mortgages.” Urban Wire (blog), Urban Institute. November 29, 2017. Accessed August 14, 2019. https://www.urban.org/urban-wire/better-assess-risk-fha-programs-separate-reverse-and-forward-mortgages.
9. Goodman, Laurie, Karan Kaul, and Jun Zhu. What the 2016 Survey of Consumer Finances Tells Us about Senior Homeowners. Washington, D.C.: Urban Institute, November 2017.
10. Haurin, Donald, Chao Ma, Stephanie Moulton, Maximilian Schmeiser, Jason Seligman, and Wei Shi. “Spatial Variation in Reverse Mortgages Usage: House Price Dynamics and Consumer Selection.” Journal of Real Estate Finance and Economics, vol. 53 (2016): pp. 392–417.
11. Integrated Financial Engineering, Inc. “Appendix E: HECM Demand Model” in Actuarial Review of the Federal Housing Administration Mutual Mortgage Insurance Fund HECM Loans for Fiscal Year 2015. Prepared at the request of the Department of Housing and Urban Development. November 16, 2015.
12. Kaul, Karan, and Laurie Goodman. Seniors’ Access to Home Equity: Identifying Existing Mechanisms and Impediments to Broader Adoption. Washington, D.C.: Urban Institute, February 2017.
13. Lucas, Deborah. “Hacking Reverse Mortgages.” Working paper, October 26, 2015. Accessed June 14, 2019. http://mitsloan.mit.edu/shared/ods/documents/?DocumentID=4596.
14. Mayer, Christopher J., and Katerina V. Simons. “Reverse Mortgages and the Liquidity of Housing Wealth.” Journal of the American Real Estate and Urban Economics Association, vol. 22, no. 2 (1994): pp. 235–255.
15. Mummolo, Jonathan, and Erik Peterson. “Improving the Interpretation of Fixed Effects Regression Results.” Political Science Research and Methods, vol. 6 (2018): pp. 1–7.
16. Moulton, Stephanie, Donald R. Haurin, and Wei Shi. “An Analysis of Default Risk in the Home Equity Conversion Mortgage (HECM) Program.” Journal of Urban Economics, vol. 90 (2015): pp. 17–34.
17. Moulton, Stephanie, Cazilia Loibl, and Donald Haurin. “Reverse Mortgage Motivations and Outcomes: Insights from Survey Data.” Cityscape: A Journal of Policy Development and Research, vol. 19, no. 1 (2017): pp. 73–97.
18. Moulton, Stephanie, Samuel Dodini, Donald Haurin, and Maximilian Schmeiser. “Seniors’ Home Equity Extraction: Credit Constraints and Borrowing Channels.” May 20, 2019. Accessed August 12, 2019. https://ssrn.com/abstract=2727204.
19. Nakajima, Makoto, and Irina A. Telyukova. “Reverse Mortgage Loans: A Quantitative Analysis.” The Journal of Finance, vol. 72, no. 2 (2017): pp. 911–949.
20. Redfoot, Donald L., Ken Scholen, and S. Kathi Brown. Reverse Mortgages: Niche Product or Mainstream Solution? Report on the 2006 AARP National Survey of Reverse Mortgage Shoppers. Washington, D.C.: December 2007.
21. Shan, Hui. “Reversing the Trend: The Recent Expansion of the Reverse Mortgage Market.” Real Estate Economics, vol. 39, no. 4 (2011): pp. 743–768.
22. Warshawsky, Mark J. “Retire on the House: The Possible Use of Reverse Mortgages to Enhance Retirement Security.” The Journal of Retirement, vol. 5, no. 3 (2018): pp. 10–31.
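The spatial- and temporal-dependence adjustment used for the estimates in columns 2 through 4 of table 7 follows Driscoll and Kraay (study 7 above). A simplified numpy sketch of that covariance estimator is below; this is an illustration rather than GAO's implementation, and the function and variable names are ours.

```python
import numpy as np

def driscoll_kraay_se(X, resid, time_ids, lags=2):
    """Driscoll-Kraay (1998) standard errors for a panel OLS fit.

    X        : (n, k) regressor matrix (e.g., after the within transformation)
    resid    : (n,) OLS residuals
    time_ids : (n,) period label for each observation
    lags     : Bartlett-kernel truncation lag
    """
    periods = np.unique(time_ids)
    # Cross-sectional sums of the moment conditions x_it * e_it, per period
    h = np.vstack([(X[time_ids == t] * resid[time_ids == t, None]).sum(axis=0)
                   for t in periods])             # shape (T, k)
    S = h.T @ h                                   # lag-0 term
    for lag in range(1, lags + 1):
        w = 1.0 - lag / (lags + 1)                # Bartlett weight
        G = h[lag:].T @ h[:-lag]
        S += w * (G + G.T)                        # add lagged autocovariances
    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ S @ XtX_inv                     # sandwich covariance
    return np.sqrt(np.diag(V))

# Small demonstration on simulated data (one unit per period here; in a
# panel, each period's moment conditions are summed across states first)
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
b, *_ = np.linalg.lstsq(X, y, rcond=None)
se = driscoll_kraay_se(X, y - X @ b, np.arange(n))
print(se)
```

With one observation per period and `lags=0`, the estimator collapses to the White heteroskedasticity-robust (HC0) covariance. Packaged implementations apply additional small-sample adjustments, so reported magnitudes can differ slightly from this sketch.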
Appendix III: Reported Home Equity Conversion Mortgage Termination Reasons

[Table omitted: termination reasons, including death, with totals.]

Appendix IV: Reported Home Equity Conversion Mortgage Terminations and Defaults

Appendix V: Comments from the Department of Housing and Urban Development

Appendix VI: Comments from the Consumer Financial Protection Bureau

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Steve Westley (Assistant Director), Beth Faraguna (Analyst in Charge), Steven Campbell, William Chatlos, Holly Hobbs, John Karikari, Matthew Levie, Risto Laboski, Marc Molino, Jennifer Schwartz, Tyler Spunaugle, and Khristi Wilkins made key contributions to this report.
Why GAO Did This Study

Reverse mortgages allow seniors to convert part of their home equity into payments from a lender while still living in their homes. Most reverse mortgages are made under FHA's HECM program, which insures lenders against losses on these loans. HECMs terminate when a borrower repays or refinances the loan or the loan becomes due because the borrower died, moved, or defaulted. Defaults occur when borrowers fail to meet mortgage conditions such as paying property taxes. These borrowers risk foreclosure and loss of their homes. FHA allows HECM servicers to offer borrowers foreclosure prevention options. Most HECM servicers are supervised by CFPB.

GAO was asked to review HECM loan outcomes and servicing and related federal oversight efforts. Among other objectives, this report examines (1) what FHA data show about HECM terminations and the use of foreclosure prevention options, (2) the extent to which FHA assesses and monitors the HECM portfolio, and (3) the extent to which FHA and CFPB oversee HECM servicers. GAO analyzed FHA loan data and FHA and CFPB documents on HECM servicer oversight. GAO also interviewed agency officials, the five largest HECM servicers (representing 99 percent of the market), and legal aid groups representing HECM borrowers.

What GAO Found

The vast majority of reverse mortgages are made under the Federal Housing Administration's (FHA) Home Equity Conversion Mortgage (HECM) program. In recent years, a growing percentage of HECMs insured by FHA have ended because borrowers defaulted on their loans. While death of the borrower is the most commonly reported reason why HECMs terminate, the percentage of terminations due to borrower defaults increased from 2 percent in fiscal year 2014 to 18 percent in fiscal year 2018 (see figure). Most HECM defaults are due to borrowers not meeting occupancy requirements or failing to pay property charges, such as property taxes or homeowners insurance.
Since 2015, FHA has allowed HECM servicers to put borrowers who are behind on property charges onto repayment plans to help prevent foreclosures, but as of fiscal year-end 2018, only about 22 percent of these borrowers had received this option.

FHA's monitoring, performance assessment, and reporting for the HECM program have weaknesses. FHA loan data do not currently capture the reason for about 30 percent of HECM terminations (see figure). FHA also has not established comprehensive performance indicators for the HECM portfolio and has not regularly tracked key performance metrics, such as reasons for HECM terminations and the number of distressed borrowers who have received foreclosure prevention options. Additionally, FHA has not developed internal reports to comprehensively monitor patterns and trends in loan outcomes. As a result, FHA does not know how well the HECM program is serving its purpose of helping meet the financial needs of elderly homeowners.

FHA has not conducted on-site reviews of HECM servicers since fiscal year 2013 and has not benefited from oversight efforts by the Consumer Financial Protection Bureau (CFPB). FHA officials said they planned to resume the reviews in fiscal year 2020, starting with three servicers that account for most of the market. However, as of August 2019, FHA had not developed updated review procedures and did not have a risk-based method for prioritizing reviews. CFPB conducts examinations of reverse mortgage servicers but does not provide the results to FHA because the agencies do not have an agreement for sharing confidential supervisory information. Without better oversight and information sharing, FHA lacks assurance that servicers are following requirements, including those designed to help protect borrowers.
What GAO Recommends

GAO makes eight recommendations to FHA to, among other things, improve its monitoring and assessment of the HECM portfolio and oversight of HECM servicers, and one recommendation to CFPB to share HECM servicer examination information with FHA. FHA and CFPB generally agreed with the recommendations.